I recently had the chance to build an MCP server for an observability product, with the goal of giving AI agents dynamic code-analysis capabilities. Because of its potential to reshape applications, MCP is a technology I'm even more excited about than I initially was about genAI in general. I wrote more about that, along with an introduction to MCPs in general, in a previous post.
While initial POCs demonstrated that there was immense potential for this to be a force multiplier to our product's value, it took several iterations and several stumbles to deliver on that promise. In this post, I'll try to capture some of the lessons learned, as I think they will benefit other MCP server developers.
My Stack
- I was using Cursor and VS Code intermittently as the main MCP client
- To develop the MCP server itself, I used the .NET MCP SDK, since I decided to host the server on another service written in .NET
Lesson 1: Don’t dump all of your data on the agent
In my application, one tool returns aggregated information on errors and exceptions. The API is very detailed, since it serves a complex UI view, and spews out large amounts of deeply linked data:
- Error frames
- Affected endpoints
- Stack traces
- Priority and characteristics
- Histograms
My first hunch was to simply expose the API as is as an MCP tool. After all, the agent should be able to make more sense of it than any UI view, and catch on to interesting details or connections between events. I had several scenarios in mind for how this data could be useful: the agent could automatically offer fixes for recent exceptions recorded in production or in the testing environment, let me know about errors that stand out, or help me deal with systematic problems that are the underlying root cause of the issues.
The basic premise was therefore to let the agent work its ‘magic’, with more data potentially meaning more hooks for the agent to latch onto in its investigation efforts. I quickly coded a wrapper around our API at the MCP endpoint and decided to start with a basic prompt to see whether everything was working:

We can see the agent was smart enough to understand that it needed to call another tool to grab the environment ID for that ‘test’ environment I mentioned. With that in hand, after discovering that there were actually no recent exceptions in the last 24 hours, it took the liberty of scanning a more extended time period, and that’s when things got a little weird:

What a strange response. The agent queries for exceptions from the last seven days, gets back some tangible results this time, and yet proceeds to ramble on as if ignoring the data altogether. It keeps trying to use the tool in different ways and with different parameter combinations, clearly fumbling, until it flat out calls out the fact that the data is completely invisible to it. While errors are being sent back in the response, the agent actually claims there are no errors. What’s going on?

After some investigation, the problem turned out to be that we had simply hit a cap on the agent’s ability to process large amounts of data in the response.
I used an existing API that was extremely verbose, which I initially even considered to be an advantage. The end result, however, was that I somehow managed to overwhelm the model. Overall, there were around 360k characters and 16k words in the response JSON, including call stacks, error frames, and references. This should have been well within range judging by the context window limit of the model I was using (Claude 3.7 Sonnet supports up to 200k tokens), but nevertheless the big data dump left the agent completely stumped.
One strategy would be to switch to a model that supports an even bigger context window. I switched over to the Gemini 2.5 Pro model just to test that theory, since it boasts an outrageous limit of one million tokens. Sure enough, the same query now yielded a much more intelligent response:

This is great! The agent was able to parse the errors and find the systematic cause of many of them with some basic reasoning. However, we can’t rely on the user running a specific model, and to complicate matters, this was output from a relatively low-bandwidth testing environment. What if the dataset were even bigger?
To address this concern, I made some fundamental changes to how the API was structured:
- Nested data hierarchy: Keep the initial response focused on high-level details and aggregations. Create a separate API to retrieve the call stacks of specific frames as needed.
- Increase queryability: All of the queries the agent had made so far used a very small page size (10). If we want the agent to be able to access more relevant subsets of the data, to fit within the limitations of its context, we need to provide more APIs to query errors along different dimensions, for example: affected methods, error type, priority and impact, and so on.
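To make that restructuring concrete, here is a rough sketch of the idea in Python rather than .NET, purely for brevity. All names and data shapes are illustrative, not the actual product API: one lean summary tool with server-side filtering and paging, and a separate detail tool the agent can call only when it actually needs a heavy call stack.

```python
# Hypothetical sketch: split one verbose errors API into a lean summary
# tool plus an on-demand detail tool, with server-side filtering/paging.

def get_error_summaries(errors, error_type=None, page=1, page_size=10):
    """Return high-level aggregations only; no stack traces or frames."""
    filtered = [e for e in errors if error_type is None or e["type"] == error_type]
    start = (page - 1) * page_size
    return [
        {"id": e["id"], "type": e["type"], "score": e["score"],
         "affected_endpoints": len(e["endpoints"])}
        for e in filtered[start:start + page_size]
    ]

def get_error_stack_trace(errors, error_id):
    """Separate tool: fetch the heavy call stack only when the agent asks."""
    for e in errors:
        if e["id"] == error_id:
            return e["stack_trace"]
    return {"error": f"Unknown error id '{error_id}'"}
```

The summary response stays small enough to fit comfortably in context, and the agent drills down into individual errors only where its reasoning requires it.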
With the new changes in place, the tool now consistently analyzes important new exceptions and comes up with fix suggestions. However, I had glossed over another minor detail I needed to sort out before I could really use it reliably.
Lesson 2: What’s the time?

The keen-eyed reader may have noticed that in the previous example, to retrieve the errors in a specific time range, the agent uses the ISO 8601 time duration format instead of actual dates and times. So instead of including standard ‘From’ and ‘To’ parameters with datetime values, the AI sent a duration value, for example seven days or P7D, to indicate it wants to check for errors in the past week.
The reason for this is somewhat strange: the agent might not know the current date and time! You can verify that yourself by asking the agent that simple question. The response below would have made sense were it not for the fact that I typed that prompt in at around noon on May 4th…

Using time duration values turned out to be a great solution that the agent handled quite well. Don’t forget to document the expected value and example syntax in the tool parameter description, though!
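On the server side, the duration then has to be resolved into a concrete time range against the server's clock. A minimal sketch of that translation (again in Python for brevity; it only handles the simple day/hour/minute durations the agent actually sent, not the full ISO 8601 grammar):

```python
import re
from datetime import datetime, timedelta, timezone

# Minimal sketch: translate a simple ISO 8601 duration (e.g. 'P7D', 'PT12H')
# sent by the agent into a concrete From/To range on the server side.
_DURATION_RE = re.compile(
    r"^P(?:(?P<days>\d+)D)?(?:T(?:(?P<hours>\d+)H)?(?:(?P<minutes>\d+)M)?)?$"
)

def duration_to_range(duration, now=None):
    now = now or datetime.now(timezone.utc)
    m = _DURATION_RE.match(duration)
    if not m:
        raise ValueError(f"Unsupported duration: {duration!r}")
    delta = timedelta(
        days=int(m["days"] or 0),
        hours=int(m["hours"] or 0),
        minutes=int(m["minutes"] or 0),
    )
    return now - delta, now
```

Since the server, unlike the agent, does know the current time, the ambiguity disappears entirely.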
Lesson 3: When the agent makes a mistake, show it how to do better
In the first example, I was actually surprised by how well the agent deciphered the dependencies between the different tool calls in order to provide the right environment identifier. Studying the MCP contract, it figured out that it had to call another, dependent tool first to get the list of environment IDs.
However, responding to other requests, the agent would sometimes take the environment names mentioned in the prompt verbatim. For example, in response to this question: “compare slow traces for this method between the test and prod environments, are there any significant differences?”, depending on the context, the agent would sometimes use the environment names mentioned in the request and send the strings “test” and “prod” as the environment ID.
In my original implementation, my MCP server would silently fail in this scenario, returning an empty response. The agent, upon receiving no data or a generic error, would simply quit and try to solve the request using another strategy. To offset that behavior, I quickly changed my implementation so that if an incorrect value was provided, the JSON response would describe exactly what went wrong, and even provide a valid list of possible values to save the agent another tool call.
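The pattern is simple enough to sketch. The snippet below is an illustration (in Python, with made-up environment IDs and a hypothetical tool name), not the actual implementation: instead of an empty response, the tool returns a self-describing error plus the valid values, so the agent can self-correct without an extra discovery call.

```python
import json

# Illustrative: reject a bad environment id with an explanation and the
# list of valid values, instead of silently returning nothing.
VALID_ENVIRONMENTS = {"ENV-0021": "test", "ENV-0034": "prod"}

def get_slow_traces(environment_id):
    if environment_id not in VALID_ENVIRONMENTS:
        return json.dumps({
            "error": f"Unknown environment id '{environment_id}'. "
                     "This parameter expects an environment ID, not a name.",
            "valid_environments": [
                {"id": env_id, "name": name}
                for env_id, name in VALID_ENVIRONMENTS.items()
            ],
        })
    return json.dumps({"traces": []})  # real lookup elided
```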

This was enough for the agent; learning from its mistake, it repeated the call with the correct value, and somehow also avoided making that same error in the future.
Lesson 4: Focus on user intent and not functionality
While it’s tempting to simply describe what the API is doing, sometimes generic terms don’t quite allow the agent to grasp the kinds of requirements this functionality might best apply to.
Let’s take a simple example: my MCP server has a tool that, for each method, endpoint, or code location, can indicate how it’s being used at runtime. Specifically, it uses tracing data to indicate which application flows reach the specific function or method.
The original documentation simply described this functionality:
[McpServerTool,
Description(
@"For this method, see which runtime flows in the application
(including other microservices and code not in this project)
use this function or method.
This data is based on analyzing distributed tracing.")]
public static async Task<string> GetUsagesForMethod(IMcpService client,
[Description("The environment id to check for usages")]
string environmentId,
[Description("The name of the class. Provide only the class name without the namespace prefix.")]
string codeClass,
[Description("The name of the method to check, must specify a specific method to check")]
string codeMethod)
The above is a functionally accurate description of what this tool does, but it doesn’t necessarily make it clear what kinds of tasks it might be relevant for. After seeing that the agent wasn’t picking this tool up for various prompts I thought it would be quite useful for, I decided to rewrite the tool description, this time emphasizing the use cases:
[McpServerTool,
Description(
@"Find out how a specific code location is being used and by
which other services/code.
Useful in order to detect possible breaking changes, to check whether
the generated code will fit the current usages,
to generate tests based on the runtime usage of this method,
or to check for related issues on the endpoints triggering this code
after any change to ensure it didn't impact them")]
Updating the text helped the agent realize why the information was useful. For example, before making this change, the agent would not even trigger the tool in response to a prompt similar to the one below. Now, it has become completely seamless, without the user having to directly mention that this tool should be used:

Lesson 5: Document your JSON responses
The JSON standard, at least officially, does not support comments. That means that if the JSON is all the agent has to go on, it might be missing some clues about the context of the data you’re returning. For example, in my aggregated error response, I returned the following score object:
"Score": {"Score":21,
"ScoreParams":{ "Occurrences":1,
"Trend":0,
"Recent":20,
"Unhandled":0,
"Unexpected":0}}
Without proper documentation, any non-clairvoyant agent would be hard pressed to make sense of what these numbers mean. Thankfully, it is easy to add a comment element at the beginning of the JSON file with additional information about the data provided:
"_comment": "Each error contains a link to the error trace,
which can be retrieved using the GetTrace tool,
and information about the affected endpoints, the code,
and the relevant stack trace.
Each error in the list represents numerous instances
of the same error and is given a score after it has been
prioritized.
The score reflects the criticality of the error.
The number is between 0 and 100 and is comprised of several
parameters, each of which can contribute to the error criticality,
all normalized in relation to the system
and the other methods.
Each score parameter's value represents its contribution to the
overall score. They include:
1. 'Occurrences', representing the number of instances of this error
compared to others.
2. 'Trend', whether this error is escalating in its
frequency.
3. 'Unhandled', representing whether this error is caught
internally or propagates all the way
out of the endpoint scope.
4. 'Unexpected', errors that are with high probability
bugs, for example NullPointerException or
KeyNotFound",
"EnvironmentErrors":[]
This allows the agent to explain to the user what the score means if they ask, but also to feed this explanation into its own reasoning and recommendations.
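Wiring this in is trivial; the sketch below (Python, with an abbreviated version of the documentation string) shows one way to prepend the `_comment` field to every tool response so it always arrives first in the serialized JSON:

```python
import json

# Sketch: prepend a '_comment' field documenting the scoring semantics to
# every tool response, since JSON itself has no comment syntax.
SCORE_DOC = (
    "The score reflects the criticality of the error (0-100). "
    "ScoreParams lists each parameter's contribution to the overall score."
)

def with_documentation(payload):
    # Build a new dict so '_comment' is serialized first.
    return json.dumps({"_comment": SCORE_DOC, **payload})
```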
Choosing the right architecture: SSE vs. STDIO
There are two architectures you can use in creating an MCP server. The more common and widely supported implementation is making your server available as a command triggered by the MCP client. This could be any CLI-triggered command; npx, docker, and python are some common examples. In this configuration, all communication is done via the process STDIO, and the process itself runs on the client machine. The client is responsible for instantiating and maintaining the lifecycle of the MCP server.

This client-side architecture has one major drawback from my perspective: since the MCP server implementation is run by the client on the local machine, it’s much harder to roll out updates or new capabilities. Even if that problem were somehow solved, the tight coupling between the MCP server and the backend APIs it depends on in our applications would further complicate this model in terms of versioning and forward/backward compatibility.
For these reasons, I chose the second type of MCP server: an SSE server hosted as part of our application services. This removes any friction from running CLI commands on the client machine, and also allows me to update and version the MCP server code together with the application code that it consumes. In this scenario, the client is provided with a URL of the SSE endpoint with which it interacts. While not all clients currently support this option, there is a clever command MCP called supergateway that can be used as a proxy to the SSE server implementation. That means users can still add the more widely supported STDIO variant and still consume the functionality hosted on your SSE backend.
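As an illustration, a STDIO-only client could be pointed at the SSE server through a config entry along these lines (the server name and URL are placeholders, and you should check the supergateway documentation for the current flags):

```json
{
  "mcpServers": {
    "my-observability-server": {
      "command": "npx",
      "args": ["-y", "supergateway", "--sse", "https://example.com/mcp/sse"]
    }
  }
}
```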

MCPs are still new
There are many more lessons and nuances to using this deceptively simple technology. I’ve found that there is a big gap between implementing a workable MCP and one that can actually integrate with user needs and usage scenarios, even beyond those you may have anticipated. Hopefully, as the technology matures, we’ll see more posts on best practices.
Want to connect? You can reach me on Twitter at @doppleware or via LinkedIn.
Follow my MCP for dynamic code analysis using observability at https://github.com/digma-ai/digma-mcp-server