MCP: If You Must, Then Do It Like This…

I’ve spent a lot of time speaking with folks about AI and — unfortunately — the Model Context Protocol (MCP). Many of them have come up to me and said, “I’ve absorbed so much information from this conference, and the one thing I know I need to look at is something called M…C…D? No, MCP!”

That right there, that’s dangerous.

In German, there is a word for this danger: Dünnbrettbohrer. It’s a marvelous compound word that means “driller of thin planks,” used colloquially for someone who takes the most superficial approach to a problem or task, implying a lack of depth or thoroughness in their work.

Far be it from me to accuse Anthropic of this. When they designed MCP, the idea was to quickly and easily extend chat interfaces with tool functionality (and a whole bunch of other stuff that folks ignore in the protocol!). For that context, it’s actually a good fit for the job (bar some caveats that can easily be fixed).
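To make that concrete, here is roughly what the protocol covers beyond tool calls. Below is a minimal sketch using FastMCP from the official Python SDK (the server name and example functions are made up, and the decorator details may shift between SDK versions); the point is that a server can expose resources and prompts as well as tools:

```python
# sketch_server.py - a minimal MCP server, showing the bits of the protocol beyond tools
# (server name and example functions are hypothetical; requires the `mcp` Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("board-pack-demo")

@mcp.tool()
def summarise_sales(quarter: str) -> str:
    """Tool: the part everyone fixates on, a function the model can call."""
    return f"Sales summary for {quarter} (stubbed for the sketch)."

@mcp.resource("sales://{quarter}/raw")
def raw_sales_data(quarter: str) -> str:
    """Resource: data the client can pull into context, widely ignored."""
    return f"quarter,region,amount\n{quarter},EMEA,100\n{quarter},APAC,200"

@mcp.prompt()
def board_pack(quarter: str) -> str:
    """Prompt: a reusable, server-supplied prompt template, also widely ignored."""
    return f"Collate the {quarter} sales data and draft a one-page board pack summary."

if __name__ == "__main__":
    mcp.run()
```

The protocol itself, in other words, is richer than the tools-only framing most people give it.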

No, the Dünnbrettbohrer of the MCP world are the implementers of the MCP servers themselves. Right now, we’re at the hype cycle’s peak of inflated expectations, meaning a lot of people are selling low-code, or no-code, dressed up as MCP — but it’s still the same old shenanigans under the hood.

What I would like to achieve today is to give you simple guidance on when, how, and where to use MCP without shooting yourself in the foot (such as with GitHub’s latest MCP server disaster, an exploit that left private repository data vulnerable to attackers).

Lesson One: Do You Really Need to Do This in a Chat Interface?

This is a big one. Imagine the MCP utopia: open ChatGPT, then ask it to “collate all your sales data for the quarter and generate a report for my board pack”. Or “look through our CRM and generate a list of all prospects who live in Canada and are also inclined to golf.”

That would be awesome, except that in the above cases, realistically, your chat interface would be a series of specific GPTs (custom chat rooms, each set up for this very purpose, with a bunch of data feeds and tools).

Remember, LLMs can’t handle more than about ten tools before their tool-calls turn to mush.

Consider the example of collating sales data and generating a report. This will require:

  • A chat interface for your Financials. The world runs on Excel, so just imagine it dragging through your CFO and his team’s OneDrive, trying to figure out whether Budget_Q1_2025_Latest.xlsx, Budget_Q1_2025_Latest_LATEST_TO_PRESENT.xlsx, or Budget_Q1_2025_Latest_LATEST_USE_THIS_ONE.xlsx is the most recent.
  • A chat interface that has access to a coding tool (because are you really going to trust your LLM to do math!?). Or — let’s get fancy — a chat with access to an Excel MCP (if they are sentient, this should be considered a war crime), to collate and manipulate the data.
  • A chat interface to a tool that can access and edit PowerPoint files.

The alternative? Well, you could have opened Excel, copied the table to a new sheet, messed around with it and trusted your own math, and then bunged it into PowerPoint with copy and paste.

With this chat-based AI flow, you’d need three chat rooms, where you can shout at the computer and it can pretend to care and apologize. Or dumb software, where you can do it yourself (and still shout at the computer, just in coarser language because it’s not anthropomorphised to have feelings), get the job done, and be sure of the result.

You see, trust is the biggest killer here: do you really trust what the LLM is outputting? Would you put your career on the line for it, given how often AI agents make mistakes? No, of course not.

Ok, back to MCP. The question I’m asking is: When MCP is used for its intended purpose — making a chat interface better at interacting with data — do you really need this process as a chat interface?

Lesson Two: It’s Not Low-Code, It’s Not No-Code, It’s Just Someone Else’s Code

MCP is sold to enterprise architects like Lego. The pitch is simple: Hey! MCP is a plug-and-play tool for your AI chat interface. We can make those tools for you with a few simple clicks!

Now, unless this is a vendor-supplied MCP tool, it’s a custom integration. That means it’s some low-code platform selling you connectors, flowcharts, and pretty graphs, which magically work for the first few months, then inevitably break. Why? Because they are integrations, and integrations mean logic, input connectors, and output connectors. Little boxes in a GUI that promise they’ll do what you ask. However, if at any point the software in that integration chain changes, your integration is stuffed.

So, who owns the integration? Who owns this MCP server, or the creation of it, and what power do they have to un-stuff it? You may think your AI center of excellence (or, in a few years, your platform team — trust me) owns this responsibility. The issue is, they’ll shrug and say, “We need to wait for the MCP vendor to update their connectors…”

Admittedly, that’s not a massive problem — folks are pretty used to waiting for upgrades to catch up with each other. Unfortunately, it’s also a problem that shouldn’t need to be solved in the first place!

This whole dependency management problem has already been well-solved in the software development community through rigorous processes, automation, and hard-won lessons, in the field known as DevOps. Well-established, reliable DevOps means everything is configured and stored as code, is versioned, can be rolled back, and has a clear route to being provisioned and decommissioned. There are whole job sectors for this stuff.

With MCP, all that goes out the window. Not only is it other people’s code, but it’s also other people’s code in a long chain of dependencies in a flow you cannot trust. Or observe. It’s outside of your operations chain because it’s not considered software, even though it is, and it’s not considered infrastructure, even though it is.

The conclusion of the second lesson? If you must dance the MCP-Shuffle, then make sure it’s you who’s building and owning those servers and integrations. Because then you can apply all that lovely best practice that software and DevOps engineers have been so insistent on over the past decade.
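What does that look like in practice? A sketch, assuming you keep the business logic as a plain Python function that the MCP layer merely wraps (all of the names below are hypothetical): the function lives in your repo, gets code review, version tags, tests in CI, and a rollback path, exactly like any other integration you own.

```python
# sales_tools.py - sketch: an in-house MCP tool kept as ordinary, ownable code
# (all names and fields are hypothetical; requires the `mcp` Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("in-house-sales-tools")

def collate_quarterly_sales(quarter: str) -> dict:
    """Plain function: reviewable, versionable, and roll-back-able like any other code you own."""
    # A real implementation would query your finance system of record here.
    return {"quarter": quarter, "total": 0.0, "source": "finance-db"}

# The MCP layer is a thin, replaceable wrapper around logic you control.
mcp.tool()(collate_quarterly_sales)


# test_sales_tools.py - and because it is your code, it gets tests in CI like everything else
def test_collate_quarterly_sales_shape():
    result = collate_quarterly_sales("Q1-2025")
    assert {"quarter", "total", "source"} <= result.keys()
```

None of this is clever, and that’s the point: owning the server means the boring machinery of DevOps applies to it again.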

Lesson Three: It’s the Processes, Not the APIs

Let’s take the elephant in the room by the horns (that’s called a malaphor, by the way, aren’t I smart?). This is the one I like to call “how-do-we-make-more-money-by-paying-fewer-humans,” which is the absolute quintessence of the AI hype.

MCP is seen as the next step in the inevitable evolution of having fewer pesky people by giving smart puppets pluggable tools in order to do “menial” work. If you must use MCP for this purpose, remember this: It’s not about the APIs — it’s about the process.

When you create an MCP server, or are gifted one by your favourite SaaS vendor, make sure to check whether those MCP servers encode processes as tools, rather than just exposing the vendor API as a tool. Why? Because LLMs don’t know how your organization works.

If you say, “Onboard this customer onto the platform”, which may involve multiple API calls, filling out a form, faxing it to yourself, burying it, digging it up, and burying it again, how is a chatbot meant to know that?

You could just embed those instructions in the prompt. But then, what if the process changes? What if you only need to bury the form once? Now you’re screwed because you need to alter and test the prompt again.

Or — and hear me out — you could encode the process into the tool. This is something that old-school API management vendors would call “API orchestration.” You have a machine-readable, versionable, verifiable, and trackable document that describes and encodes the unhinged processes of your legacy software APIs, making for easy distribution among your integrations.
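As a sketch of the difference (every endpoint, payload, and name below is hypothetical): instead of exposing each vendor endpoint as its own tool and hoping the model calls them in the right order, the tool itself encodes the sequence, so the process lives in versioned, testable code rather than in a prompt.

```python
# onboarding_server.py - sketch: expose the *process* as one tool, not the raw API endpoints
# (all endpoints, payloads, and names are hypothetical stand-ins for a vendor API;
#  requires the `mcp` Python SDK and `httpx`)
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-onboarding")
CRM_BASE = "https://crm.example.internal/api/v1"  # hypothetical

@mcp.tool()
def onboard_customer(name: str, email: str, plan: str) -> str:
    """One tool = the whole onboarding process, in the order your organisation requires."""
    with httpx.Client(base_url=CRM_BASE, timeout=30) as crm:
        account = crm.post("/accounts", json={"name": name, "email": email}).json()
        crm.post(f"/accounts/{account['id']}/subscriptions", json={"plan": plan})
        crm.post(f"/accounts/{account['id']}/welcome-email", json={})
    # When the process changes (the form only needs burying once now), you change and
    # re-test this one function, not every prompt in every chat room that mentions onboarding.
    return f"Customer {name} onboarded on the {plan} plan (account {account['id']})."

if __name__ == "__main__":
    mcp.run()
```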

That is what you want when you are evaluating your MCP tooling — processes, not APIs.

Final Thought: Don’t Let MCP Erode Progress

Writing about this topic always gets me a little angry. As an API management vendor, we’ve already solved many of these issues. Now the direction of travel of the MCP hype is eroding so much good work in the automation and integration space that it’s hard not to get a bit salty, because we’re taking a step backwards.

I’ll leave it here. I hope these three lessons help you better utilize MCP (if you must), and think twice about where and how you want to apply all that AI.