The MCP Bet

When we started building Beacon — an enterprise workflow assistant that needed to talk to over a dozen different tools and services through Slack — we had a choice. Build custom integrations for each tool, the way everyone else was doing it, or bet on a protocol that was barely six months old.
We bet on MCP. Thirteen tool servers later, it's the best architectural decision we made on that project.
The Problem MCP Solves
If you've built an agent that uses tools, you know the pattern. You define a function. You write a description for the model. You parse the model's output into a function call. You execute the function. You format the result back into something the model can understand. You handle errors.
Now do that for 13 different tools. Each with its own authentication model, its own data format, its own error semantics. You end up with a custom integration layer for each tool, and the code that manages these integrations starts to rival the agent logic itself in complexity.
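That per-tool boilerplate looks roughly like this (a simplified sketch; the tool and function names are illustrative, not from Beacon):

```python
import json

# One bespoke integration: schema, execution, parsing, and error
# handling, all hand-written -- and repeated for every tool.
WEATHER_TOOL_SPEC = {
    "name": "get_weather",
    "description": "Fetch current weather for a city.",
    "parameters": {"city": {"type": "string"}},
}

def get_weather(city: str) -> dict:
    # Placeholder for a real API call.
    return {"city": city, "temp_c": 21}

def dispatch(model_output: str) -> str:
    """Parse the model's tool call, execute it, format the result."""
    try:
        call = json.loads(model_output)                # parse
        if call["name"] != "get_weather":
            return json.dumps({"error": "unknown tool"})
        result = get_weather(**call["arguments"])      # execute
        return json.dumps(result)                      # format for the model
    except (json.JSONDecodeError, KeyError, TypeError) as exc:
        return json.dumps({"error": str(exc)})         # handle errors
```

Every line of `dispatch` is a cross-cutting concern, and none of it is reusable for the next tool.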
MCP standardizes this. Every tool server speaks the same protocol. The agent doesn't need to know the implementation details of each tool — it just needs to know what tools are available, what they accept, and what they return. The protocol handles the rest.
Why We Didn't Wait
The honest answer: we were tired of writing integration code.
Before Beacon, we'd built Conductor — an orchestration platform with 55+ tools. Each tool integration was bespoke. Each one had its own client library, its own error handling, its own retry logic. When a tool's API changed, we had to update our integration. When we wanted to add a new tool, we had to write a new integration from scratch.
It worked. But the maintenance burden was growing linearly with the number of tools, and we could see where that trend line was headed.
MCP offered an escape from that trajectory. Instead of N custom integrations, we'd have N tool servers that all speak the same protocol, plus one agent that speaks that protocol. Adding a new tool means standing up a new server, not modifying the agent.
What 13 Tool Servers Look Like in Practice

Beacon's tool servers cover research, data collection, case tracking, reporting, and a handful of domain-specific utilities. Without getting into specifics that would identify the client's workflow:
- Some servers wrap existing APIs (turning a REST API into an MCP tool server is surprisingly straightforward)
- Some servers wrap databases (the agent can query and update records without knowing the schema details)
- Some servers wrap other AI capabilities (search, summarization, extraction)
- A few servers are pure utility (formatting, file conversion, notification dispatch)
The agent sees all of them as tools with descriptions, parameters, and return types. It doesn't care whether a tool is hitting a database, calling an API, or running a local computation. That abstraction is the whole point.
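The "wrap an existing API" case from the list above is mostly schema translation: map the endpoint's parameters to a tool's input schema, and map tool-call arguments back to a request. A minimal, SDK-free sketch of both directions (the endpoint and field names are made up):

```python
from urllib.parse import urlencode

def rest_to_tool(name: str, description: str, base_url: str, params: dict) -> dict:
    """Describe a GET endpoint as an MCP-style tool definition."""
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {p: {"type": t} for p, t in params.items()},
            "required": list(params),
        },
        # The server keeps the URL; the agent never sees it.
        "_endpoint": base_url,
    }

def build_request_url(tool: dict, arguments: dict) -> str:
    """Turn a tool call's arguments back into the underlying REST request."""
    return f"{tool['_endpoint']}?{urlencode(arguments)}"
```

A real server also translates the response body into a tool result, but that direction is the same idea in reverse.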
The Rough Edges
We'd be lying if we said MCP was smooth sailing from day one. Here's what we ran into:
Discovery and documentation. Early MCP tooling didn't have great support for tool discovery — figuring out which tools were available and what they did. We ended up writing our own registry layer on top of the protocol to manage this. This has gotten better since, but it was a pain point initially.
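The registry layer we built was not much more than a catalog plus a reverse index from tool name to owning server. A stripped-down sketch of the idea (not Beacon's actual code):

```python
class ToolRegistry:
    """Tracks which servers exist, what tools each exposes, and
    which server owns a given tool name."""

    def __init__(self):
        self._servers = {}      # server name -> list of tool specs
        self._tool_index = {}   # tool name -> server name

    def register(self, server: str, tools: list) -> None:
        self._servers[server] = tools
        for tool in tools:
            self._tool_index[tool["name"]] = server

    def find(self, tool_name: str):
        """Which server owns this tool? None if nobody does."""
        return self._tool_index.get(tool_name)

    def catalog(self) -> list:
        """Everything the agent can see, across all servers."""
        return [t for tools in self._servers.values() for t in tools]
```

The agent plans against `catalog()` and routes each call via `find()`, so adding a server is one `register()` call.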
Error propagation. When a tool server fails, how does that failure propagate back to the agent? The protocol handles basic error responses, but the semantics of "this tool is temporarily unavailable" vs. "this tool can't handle your input" vs. "this tool is permanently broken" weren't well-defined early on. We built our own error taxonomy on top.
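The taxonomy matters because each failure class implies a different agent behavior. A sketch of the distinction (the classes are from the text; the status-code mapping is illustrative):

```python
from enum import Enum

class ToolErrorKind(Enum):
    TRANSIENT = "transient"    # temporarily unavailable: retry with backoff
    INVALID_INPUT = "invalid"  # this input will never work: rephrase, don't retry
    BROKEN = "broken"          # persistently failing: take the tool out of rotation

def classify(status_code: int) -> ToolErrorKind:
    """Map a server failure onto the taxonomy."""
    if status_code in (429, 503):
        return ToolErrorKind.TRANSIENT
    if 400 <= status_code < 500:
        return ToolErrorKind.INVALID_INPUT
    return ToolErrorKind.BROKEN
```

Without this layer, the agent treats every failure the same way, which usually means retrying things that will never succeed.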
State management across tool calls. Some workflows require multiple tool calls that share state — "search for X, then take the results and update Y." The protocol doesn't have a native concept of sessions or transactions across tool calls. We handle this in the agent's orchestration layer, but it would be cleaner in the protocol.
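In the orchestration layer, that shared state amounts to a per-workflow scratchpad that later steps can read. A simplified sketch (the tools here are stand-in callables, not real Beacon tools):

```python
class WorkflowSession:
    """Carries state between tool calls; the protocol itself has no
    equivalent, so this lives agent-side."""

    def __init__(self):
        self._state = {}

    def run_step(self, tool, arguments, save_as=None):
        # In the real system this is a protocol call to a tool server.
        result = tool(**arguments)
        if save_as:
            self._state[save_as] = result
        return result

    def get(self, key):
        return self._state[key]

# "Search for X, then take the results and update Y":
def search(query):
    return [f"record-{query}-1", f"record-{query}-2"]

def update(records):
    return {"updated": len(records)}
```

Chaining looks like: run the search, save the hits, then feed `session.get("hits")` into the update step.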
Cold starts. Some of our tool servers take a few seconds to initialize. When the agent calls a tool for the first time in a while, there's a noticeable delay. We added a warm-up mechanism that pings all servers on startup, but this is infrastructure work that shouldn't be necessary.
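The warm-up mechanism is small: ping every server in parallel at startup so the first real call doesn't eat the delay. A sketch of the shape (the `ping` callable is whatever cheap request your servers answer; a `tools/list` round trip works):

```python
from concurrent.futures import ThreadPoolExecutor

def warm_up(servers, ping):
    """Ping all tool servers concurrently; returns each server's
    ping result so failures surface at startup, not mid-workflow."""
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        return dict(zip(servers, pool.map(ping, servers)))
```

Running the pings concurrently matters: with 13 servers and multi-second cold starts, sequential warm-up would itself be a noticeable startup delay.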
Why We'd Do It Again
Despite the rough edges, the bet paid off for three reasons:
Adding tools got cheap. Our 13th tool server took an afternoon to build. The first one took a week. The protocol standardization meant we'd already solved all the cross-cutting concerns (auth, errors, retries, logging) once. Each new server is just the domain logic.
The agent got simpler. Beacon's core agent code is remarkably clean. It doesn't contain any tool-specific logic. It reasons about what needs to happen, selects tools, calls them, and processes results, all through the same interface. When we need to debug the agent's behavior, we're reading agent logic, not integration code.
Tool servers are reusable. Three of the tool servers we built for Beacon are now used in other projects. They just plug in. That's not something you get with custom integrations tightly coupled to a specific agent.
Should You Bet on MCP?
If you're building a single-tool agent, probably not worth the overhead. Just write the integration.
If you're building an agent that needs three or more tools, especially if you expect that number to grow, MCP starts to pay for itself. The upfront investment in learning the protocol and setting up the infrastructure is real, but it's a one-time cost that amortizes across every tool you add.
If you're building a platform where multiple agents might share tools, MCP is a no-brainer. The alternative — every agent maintaining its own copy of every tool integration — doesn't scale.
We're continuing to build on MCP and contributing back to the ecosystem where we can. The protocol isn't perfect, but it's solving a real problem that was eating our engineering time. Sometimes the best architectural decisions aren't the clever ones. They're the ones that eliminate a whole category of work you don't want to be doing.