The Tool I Built When Our Team Ran Out of Hours

How I built an internal knowledge and deck generation tool that saved our team hundreds, maybe thousands, of hours.


James, my manager and our Head of GTM Ops, pulled me aside for a ten-second conversation. He told me one thing: our customer success team is underwater, go see what you can do.

I sat with them for a couple of hours that night, pressing them on their biggest pain points. Two things stood out immediately.

The first was tribal knowledge. The team knew answers existed somewhere. A Google Drive folder, a Slack thread, a Gong call from six months ago. Finding them was the problem. Institutional knowledge was spread across too many places for anyone to keep track of.

The second was slide decks. QBRs, audits, baseline reports. The recurring deliverables that ate hours every time without fail.

Scoping the Solution

Before writing a single line of code, I had to figure out what I was actually building. The post-sales team's context was already scattered across too many tools. Asking them to open another one was a non-starter. It had to live in Slack.

I scoped building it myself: vectorize everything in Pinecone, layer semantic search over it, RAG on top. Straightforward, but two to three weeks minimum before even thinking about maintenance, model updates, and keeping data in sync. Anybody can build anything; the real question is whether you should.
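For a sense of what that DIY path would have looked like, here is a minimal sketch, assuming OpenAI embeddings and a Pinecone index already populated with chunks from our sources. The index name and the "text" metadata field are placeholders, not our actual setup.

```python
# Rough shape of the DIY build I scoped and skipped. Assumes OpenAI
# embeddings and a Pinecone index already populated with document chunks;
# the index name and "text" metadata field are placeholders.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                    # reads OPENAI_API_KEY from the env
index = Pinecone().Index("team-knowledge")  # reads PINECONE_API_KEY; placeholder name

def answer(question: str) -> str:
    # Embed the question with the same model used to index the documents.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=question,
    ).data[0].embedding

    # Semantic search over everything vectorized from Gong, Drive, Slack...
    results = index.query(vector=embedding, top_k=5, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in results.matches)

    # Ground the model's answer in the retrieved chunks.
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

And that is before any of the ingestion pipelines, permission handling, or sync jobs that make a system like this actually trustworthy.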

In our weekly catch-up, James mentioned he had experimented with Dust months earlier. It was a broadly horizontal tool we hadn't quite figured out a use case for. But it was Slack-native and permission-aware, and after an hour of testing, I knew it was the answer.

What We Built

[Architecture diagram: inputs (knowledge from Gong, Google Drive, Slack, Notion, and Snowflake, which carries CRM, product usage, and Pylon support data; our Profound MCP; web research and competitive intel via Firecrawl) feed Dust, the Slack-native AI operating layer. Agent 01, Knowledge Retrieval: ask anything in Slack, the answer surfaces instantly. Agent 02, Report Generation: Gamma MCP on Railway, routed through the QA Agent, which validates every number before output; no approval, no report. Outputs: an answer in Slack, or a formatted deck via Gamma.]

We connected Gong, Google Drive, Slack, Notion, and our Snowflake warehouse, which consolidates analytics data from pipelines across our CRM, product usage, Pylon support, and more.

Two agents, two distinct jobs.

The first was knowledge retrieval. Ask it anything about a customer, a past conversation, a product question, and it pulls across every connected source to surface an answer in Slack. No hunting necessary.

The second was report generation. I looked at a few vendors but kept coming back to Gamma. They had endpoints to create presentations programmatically, which meant I could turn it into a tool the agent could call directly. I built an MCP server on top of their API, hosted it on my own Railway instance, and plugged it into Dust. I also wired in our own Profound MCP server, which gave the agent access to our full customer data hierarchy, so when it builds a report, it is pulling from the same live data our platform runs on, not a static export.
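Stripped to its essentials, the server looks something like the sketch below, using the MCP Python SDK's FastMCP helper. The Gamma endpoint and payload fields are my approximation of their generations API, so treat them as assumptions rather than a spec.

```python
# Trimmed-down sketch of an MCP server fronting Gamma's API, built on the
# official MCP Python SDK. Endpoint and payload fields are approximations
# of Gamma's generations API; verify against their current docs.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("gamma-decks")

GAMMA_API_URL = "https://public-api.gamma.app/v0.2/generations"  # assumed endpoint

@mcp.tool()
def create_presentation(title: str, outline: str) -> str:
    """Generate a Gamma presentation from a title and an outline."""
    response = httpx.post(
        GAMMA_API_URL,
        headers={"X-API-KEY": os.environ["GAMMA_API_KEY"]},
        json={
            "inputText": f"# {title}\n\n{outline}",  # assumed payload shape
            "format": "presentation",
        },
        timeout=60,
    )
    response.raise_for_status()
    # Gamma generations run async; hand the id back so the agent can poll it.
    return response.json()["generationId"]

if __name__ == "__main__":
    # SSE transport so Dust can reach the server over HTTP on Railway.
    mcp.run(transport="sse")
```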

One thing I cared about from the start was accuracy. These reports were presented to customers. So I built a Quality Assurance agent as a mandatory checkpoint in the workflow. Before any report gets generated, the QA agent validates every number against the raw data it was pulled from. No estimates, no fabricated metrics, no shortcuts. If something does not check out, the report does not get made.
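Stripped of the agent plumbing, the rule reduces to a simple contract. This is an illustrative sketch, not the production check; the metric names and tolerance are placeholders.

```python
# The QA checkpoint's core rule: every metric in the draft must match the
# raw data it was pulled from, or the report does not get made.
# Metric names and the tolerance are illustrative.
def validate_report(draft: dict[str, float],
                    source: dict[str, float],
                    tolerance: float = 0.0) -> list[str]:
    """Return discrepancies; an empty list means the report may ship."""
    problems = []
    for metric, claimed in draft.items():
        if metric not in source:
            problems.append(f"{metric}: no source value (possible fabrication)")
        elif abs(claimed - source[metric]) > tolerance:
            problems.append(
                f"{metric}: draft says {claimed}, raw data says {source[metric]}"
            )
    return problems

# No approval, no report.
issues = validate_report(draft={"active_seats": 128}, source={"active_seats": 128})
if issues:
    raise ValueError("QA failed: " + "; ".join(issues))
```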

The Results

A customer question that used to take 30 minutes of prep now takes two minutes. A QBR deck that used to take hours gets drafted automatically.

In the six weeks since launch, the team has generated 541 decks, and roughly half the company is actively using these agents, averaging 46 messages per active user. That represents hundreds, maybe thousands, of hours saved across the entire company.

What started as a post-sales problem now runs across the entire company. Product design pulls customer pain points from call transcripts. Sales managers use it to identify where coaching is needed. AEs build prospect materials from it. New hires use it to onboard.

What I Learned

Data quality sets the ceiling. Dust can work with disparate data, but our returns multiplied because our infrastructure was already clean.

Adoption is not automatic. Some people saw gains immediately. Others barely used it. The difference was always whether they had a specific, recurring pain point that the tool could address. Onboarding and enablement mattered more than I thought.

Treat your role like a product. Ask who will actually use what you build, what they need, and what the highest-leverage decision is at each step. The cost of building is rarely just the build, and buying time back is what lets you move on to the next problem.