
Benjamin Aronov is a developer advocate at Vonage. He is a proven community builder with a background in Ruby on Rails. Benjamin enjoys the beaches of Tel Aviv which he calls home. His Tel Aviv base allows him to meet and learn from some of the world's best startup founders. Outside of tech, Benjamin loves traveling the world in search of the perfect pain au chocolat.
4 Lessons Learned Building With MCP Tools & Vonage APIs
The Model Context Protocol (MCP) has moved quickly. I remember when I first heard about it last March at FOSSASIA 2025. It’s hard to believe it’s been less than a year, but it already feels like it’s become ubiquitous in programming.
Even though I heard about it early on, it wasn’t until the last few months that I really dove in. I’ve been playing around with the Vonage Documentation and Tooling MCP servers: first locally, then through blog posts, demos, and eventually helping out with the open-source tooling server. This isn’t a recap of MCP features or a guide to getting started; it’s a set of lessons that surfaced while trying to make the tools usable in practice.
Lesson One: Agents Default to Code Generation
The first real failure wasn’t a technical bug. It was a flawed mental model.
I connected my IDE to the MCP server, confirmed the tools were registered, and asked the agent to send a WhatsApp message. Instead of calling the existing tool, the agent opened a new file and started writing a Node.js script. It attempted to import the Vonage SDK, configure authentication, and make the API call directly.
Nothing was “wrong” with the model. It was doing exactly what it had been trained to do: solve problems by writing code.
The issue was how I framed the system. I had treated the agent as a smarter code generator instead of as an orchestration layer. The tools already existed. The agent didn’t need to build anything; it needed to select and invoke what was available.
Once I adjusted how I prompted the agent, things settled into place. Instead of asking it to create functionality, I asked it to use functionality.
That change sounds minor, but it represents a real shift in mental model.
The takeaway: MCP works best when you stop thinking in terms of scripts and start thinking in terms of tools. If the agent is writing glue code, it usually means something upstream isn’t structured correctly.
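To make the shift concrete: under MCP, "using functionality" means the agent emits a `tools/call` request against an already-registered tool instead of generating SDK code. A minimal sketch in Python of what that request looks like on the wire; the tool name `send_whatsapp_message` and its arguments are hypothetical illustrations, not the actual Vonage tool schema:

```python
import json

def build_tool_call(tool_name, arguments, req_id=1):
    """Build a JSON-RPC 2.0 tools/call request, as an MCP client would."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool name and arguments, for illustration only.
request = build_tool_call(
    "send_whatsapp_message",
    {"to": "14155550100", "text": "Hello from an MCP tool"},
)
print(request)
```

The whole "program" is one protocol message: no SDK import, no auth configuration, no new file. That's the orchestration-layer framing in practice.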
Lesson Two: Configuration Was a Real Time-Sink
Once the mental model was corrected, I expected the rest of the work to be straightforward. It wasn’t.
The most time-consuming part of local development was getting the IDE to reliably connect to the MCP server. Most failures looked like “the tool isn’t showing up,” which made it easy to assume the server was broken. It was almost always configuration.
Different IDEs expect the MCP configuration in different locations. For instance, Windsurf looks like VS Code, but it isn’t VS Code, and the config path isn’t where muscle memory suggests it should be. I spent more time than I’d like to admit chasing nonexistent bugs because the server was never launched in the first place.
MCP configs also don’t behave like shell scripts. The client spawns a process directly. You can’t rely on sh -c, cd, or chained commands. If you don’t use absolute paths and explicit commands, failures tend to be silent.
Once everything was configured correctly, the experience was uneventful in the best way. Restart the IDE, refresh the plugins panel, and the tools appear.
The takeaway: MCP follows a familiar programming pattern: the setup is where most of the headaches happen. When something doesn’t work, it’s usually a configuration issue. Once you’re set up, MCP lets you fly!
Lesson Three: Stdio Works Well Locally, Less Well Elsewhere
MCP’s default stdio transport is a good fit for local development. It’s simple, fast, and avoids exposing ports or credentials. For IDE-based workflows, it does exactly what you want. And that’s exactly why we’ve used it for our Tooling Server. We wanted to get it in developers’ hands quickly and flexibly.
However, some limitations emerge when you try to use the same tools outside an IDE, such as integrating with an external system. You can’t make an HTTP request to stdin, there’s no endpoint to secure, and there’s no obvious place to attach authentication.
To bridge that gap, I ended up building a small translation layer that converted JSON-RPC over stdio into HTTP so other systems could interact with the MCP server. It worked, but it was additional infrastructure that didn’t exist in the local-only setup.
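The core of such a bridge is small. A sketch in Python, assuming newline-delimited JSON-RPC over stdio (the framing MCP's stdio transport uses); the real bridge also has to spawn the server process, pump its stdin/stdout, and manage its lifetime, all elided here:

```python
import json

def http_body_to_stdio_frame(body: bytes) -> bytes:
    """Validate an incoming HTTP body as JSON-RPC and frame it for stdin."""
    msg = json.loads(body)
    if msg.get("jsonrpc") != "2.0" or "method" not in msg:
        raise ValueError("not a JSON-RPC 2.0 request")
    # stdio transport: one JSON message per line, written to the server's stdin
    return (json.dumps(msg) + "\n").encode()

def stdio_line_to_http_body(line: bytes) -> bytes:
    """Turn one response line from the server's stdout into an HTTP response body."""
    return json.dumps(json.loads(line)).encode()
```

In other words, the bridge is mostly re-framing plus process management; the JSON-RPC payloads themselves pass through unchanged, which is why it works at all.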
Testing also became more awkward. “It works locally” isn’t very meaningful if the agent is calling the tool from somewhere else. There’s currently no clean middle ground for validating MCP tools in isolation without spinning up a full agent session.
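The closest I've found to that middle ground is a scripted smoke test: drive the server's handshake yourself and assert the tools are registered, without any agent involved. A sketch, assuming the MCP `initialize` and `tools/list` methods; the transport is abstracted into a callable so it works over stdio or anything else, and the handshake is trimmed to the essentials (a real client also sends a `notifications/initialized` notification after `initialize`):

```python
def rpc(method, params, req_id):
    """Build a JSON-RPC 2.0 request dict."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def smoke_test(send, expected_tools):
    """Check that an MCP server exposes the expected tools.

    `send` is a transport callable: given one request dict, it returns the
    matching response dict (for stdio, by writing a JSON line to the server's
    stdin and reading one line back from stdout).
    """
    send(rpc("initialize", {
        "protocolVersion": "2024-11-05",  # a published MCP protocol revision
        "capabilities": {},
        "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
    }, 1))
    listing = send(rpc("tools/list", {}, 2))
    names = {t["name"] for t in listing["result"]["tools"]}
    return set(expected_tools) - names  # empty set: all expected tools registered
```

It's not a substitute for exercising the tools from a real agent, but it catches the most common failure, a tool that never registered, in milliseconds.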
The takeaway: MCP makes IDE workflows easy, which is why we started there. For developers comfortable managing their own hosting and authentication, that tradeoff is reasonable. Once you move beyond local usage, stdio brings challenges like auth, translation layers, and testing. These are less about MCP itself and more about where this server sits. Keep an eye out as our MCP offerings expand.
Lesson Four: Tool Design and the Context Budget
This lesson didn’t come from something breaking. It came from looking at the tooling server and asking, “What will happen when more people start contributing?”
While writing about using Vonage MCP tools and encouraging open-source contributions, it became clear that the repo structure wasn’t designed to grow. Everything lived in a single file. That’s manageable early on, but it doesn’t scale well, for humans or for agents.
Looking at how other MCP servers handle growth surfaced a more important constraint: every tool you expose consumes part of the model’s context window.
Tool schemas aren’t free. Names, parameters, and descriptions all get injected into the system prompt. As the number of tools increases, you gradually reduce the space the model has to reason about the user’s actual request.
There’s a natural temptation to build flexible, multi-purpose tools: a single send_message that handles every channel and behavior. From an API perspective, that’s tidy. From a model’s perspective, it’s ambiguous and expensive.
Smaller, single-purpose tools tend to work better. They’re easier for the model to select, cheaper to describe, and simpler to reason about. Internally, implementations can still be shared. What matters most is what the agent sees.
The current open-source tooling server doesn’t implement context-aware loading yet, and that’s intentional. But these constraints are already shaping how we’re thinking about a more productized MCP server: one that anyone can use, not just developers.
The takeaway: With MCP, performance isn’t only about execution speed. It’s also about how much context you consume.
Closing Thoughts
Working with MCP changed how I think about day-to-day work. Processes I used to accept as manual or repetitive now become potential tools. Once that shift happens, you start seeing opportunities everywhere: not to automate for the sake of it, but to remove friction in places that slow real work down.
MCP is still young, and many of the rough edges are understandable. But one thing became clear very quickly: the success of an MCP-based system depends not just on the model you choose, but also on how thoughtfully the tools are designed. Clear boundaries, narrow responsibilities, and predictable behavior matter more than clever abstractions. When those pieces are in place, the system becomes easier to trust and easier to extend.
What I find most interesting is how MCP pushes you to think about software as something that’s meant to be used by agents, not just by humans. That changes how you write APIs, how you structure documentation, and how you evaluate whether something is “done.” It’s a different set of tradeoffs, and one that’s still taking shape.
Over the coming weeks, I’ll be experimenting with other MCP tools like Laravel Boost and MCP UI to see how others are approaching these problems. If you’re working with MCP, or just thinking about it, I’d love to hear what you’re building and what you’ve learned along the way. Send me a message on LinkedIn or the Vonage Developer Slack.