Is there anything like this, but for Java?
This looks very interesting but I’m not sure how to use it well. Would you mind sharing some prompts that use it and solve a real problem that you encountered?
Imagine you're building a support agent for DoorDash. A user asks, "Why is my order an hour late?" Most teams today would build a RAG system that surfaces a help center article saying something like, "Here are common reasons orders might be delayed."
That doesn't actually solve the problem. What you really need is access to internal systems. The agent should be able to look up the order, check the courier status, pull the restaurant's delay history, and decide whether to issue a refund. None of that lives in documentation. It lives in your APIs and databases.
LLMs aren't limited by reasoning. They're limited by access.
EnrichMCP gives agents structured access to your real systems. You define your internal data model using Python, similar to how you'd define models in an ORM. EnrichMCP turns those definitions into typed, discoverable tools the LLM can use directly. Everything is schema-aware, validated with Pydantic, and connected by a semantic layer that describes what each piece of data actually means.
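To make the shape of that concrete, here is a minimal hypothetical sketch (not EnrichMCP's actual API; the class and helper names are made up) of how field descriptions on an ORM-style model could become a typed, discoverable tool spec for a function-calling LLM:

```python
from dataclasses import dataclass, field, fields

# Hypothetical sketch only: the real EnrichMCP API differs. The point is
# that every field carries business meaning, and the model definition alone
# is enough to derive a typed tool the LLM can discover.

@dataclass
class Order:
    """A customer's food order, from checkout to delivery."""
    id: int = field(metadata={"description": "Unique order identifier"})
    status: str = field(metadata={"description": "Lifecycle: placed, cooking, en_route, delivered"})

def tool_spec(model) -> dict:
    """Derive a get_<entity> tool schema from the model definition."""
    return {
        "name": f"get_{model.__name__.lower()}",
        "description": model.__doc__,
        "parameters": {
            f.name: {"type": f.type.__name__, "description": f.metadata["description"]}
            for f in fields(model)
        },
    }

spec = tool_spec(Order)
```

The agent then sees a `get_order` tool with typed, documented parameters rather than a raw table.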
You can integrate with SQLAlchemy, REST APIs, or custom logic. Once defined, your agent can use tools like get_order, get_restaurant, or escalate_if_late with no additional prompt engineering.
It feels less like stitching prompts together and more like giving your agent a real interface to your business.
Do you have a less hypothetical example to share?
Just a basic prompt that makes use of this server and how it responds. Or a simple agent conversation that continues successfully beyond 5 roundtrips.
Why wouldn't we just give the agent read permission on a replica db? Wouldn't that be enough for the agent to know about:
- what tables are there
- table schemas and relationships
Based on that, the agent could easily query the tables to extract info. Not sure why we need a "framework" for this.
Disclaimer: I don't know the details of how this works.
Time-to-solution and quality would be my guess. In my experience, putting high-level details about how the information is organized at the beginning of the context, and then explaining the tools available for exploring the schema or accessing data, produces much more consistent results than having each inference query the system and build its own world view before it can even work out how to answer your query.
It's a bit like the difference between giving you a book and giving you that same book with the table of contents and index stripped out, leaving you only basic text search over the whole thing.
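A rough sketch of that "table of contents first" idea (the schema summary and tool names here are illustrative): front-load a compact map of the data plus the tool list, so every inference starts from the same world view.

```python
# Illustrative sketch: build the context preamble once, up front, instead of
# letting every inference rediscover the schema by trial and error.

SCHEMA_OVERVIEW = """\
Entities: Order, Courier, Restaurant.
An Order belongs to a Restaurant and is delivered by a Courier.
Delays are derivable from the Order's timestamps."""

TOOLS = {
    "explain_data_model": "Describe entities, fields, and relationships in detail",
    "get_order": "Fetch one order by id, with its courier and restaurant",
}

def build_context_preamble() -> str:
    """Compose the system-prompt preamble: data map first, then the tools."""
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
    return f"How the data is organized:\n{SCHEMA_OVERVIEW}\n\nTools you can call:\n{tool_lines}"
```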
Because you also need proper access controls. In many cases raw database access is too low-level; you need to bring it up a layer or two to express who can access what. Even more so when you want to do more than read data.
This is the motivating example I was looking for in the README: a client making a request and an agent handling it using the MCP, along with a log of the agent reasoning its way to the answer.
Yes, but the agent reasoning is going to use an LLM. I sometimes run our openai_chat_agent example just to test things out. Give it a shot: ask it to do something, then ask it to explain its tool use.
Obviously, it can (and sometimes will) hallucinate and make up why it's using a tool. The thing is, we don't really have true LLM explainability, so this is the best we can do.
Are you saying that a current-gen LLM can answer such queries with EnrichMCP directly? Or does it need guidance via prompts (for example, telling it which tables to look at)? I exposed a DB schema to an LLM before, and it was OK-ish; however, the devil was often in the details (one join wrong, etc.), causing the whole thing to deliver junk answers.
What is your experience with non-trivial DB schemas?
So one big difference is that we aren't doing text2sql here, and the framework requires clear descriptions on all fields, entities, and relationships (it literally won't run otherwise).
We also generate a few tools for the LLM specifically to explain the data model to it. It works quite well, even on complex schemas.
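A toy sketch of those two behaviors (names are illustrative, not EnrichMCP's real internals): registration fails if any field lacks a description, and a generated tool narrates the data model back to the LLM.

```python
# Illustrative sketch: (1) refuse to register an entity whose fields lack
# descriptions, and (2) generate an "explain the data model" tool for the LLM.

class ModelRegistry:
    def __init__(self):
        self.entities = {}

    def register(self, name: str, field_docs: dict[str, str]) -> None:
        missing = [f for f, desc in field_docs.items() if not desc.strip()]
        if missing:
            # Mirrors the "it literally won't run otherwise" guarantee.
            raise ValueError(f"{name}: every field needs a description; missing: {missing}")
        self.entities[name] = field_docs

    def explain_data_model(self) -> str:
        """The auto-generated tool an agent calls before querying anything."""
        out = []
        for name, field_docs in self.entities.items():
            out.append(name)
            out.extend(f"  {f}: {d}" for f, d in field_docs.items())
        return "\n".join(out)

registry = ModelRegistry()
registry.register("Transaction", {
    "id": "Unique transaction identifier",
    "amount_usd": "Charge amount in US dollars",
    "is_flagged": "True if the fraud model flagged this transaction",
})
```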
The use case is more transactional than analytical, though we've seen it used for both.
I recommend running the openai_chat_agent in examples/ (it also supports Ollama for local runs), connecting it to the shop_api server, and asking it a question like: "Find and explain fraud transactions"
So an explicit model description (kind of repeating the schema as an explicit model definition) provides better results with an LLM because it’s closer to the business domain (or maybe the extra step from DDL to business model is what confuses the LLM)? I think I’m failing to grasp why this approach works better than feeding the schema straight to the LLM.
Yeah, think of it like onboarding a data analyst. If I give you a Postgres account with all of our tables in it, you wouldn't even know where to start and would spend tons of time just running queries to figure out what you were looking at.
If I explain the semantic graph, entities, relationships, etc. with proper documentation and descriptions, you'd be able to reason about it much faster and more accurately.
A Postgres schema might give you a data type, a column name, and a table name, versus all the rich metadata that EnrichMCP requires.
Cool. Can you give the agent a db user with restricted read permissions?
Also, a generic DB question, but can you protect against resource overconsumption? Like, if the junior/agent makes a query with 100 joins, can a marshal kill the process and time it out?
Yes to restricted reads. There's still a lot of API work to do here, and we're a bit blocked by MCP itself changing its auth spec (it was just republished yesterday).
If you use the lower-level EnrichMCP API (without SQLAlchemy), you can fully control all retrieval logic and add things like rate limiting, not dissimilar to how you'd solve this problem with a traditional API.
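A sketch of that, assuming a hand-written resolver (illustrative names, not EnrichMCP's API): a sliding-window rate limiter wrapped around the retrieval function, exactly as you would guard a traditional endpoint.

```python
import time
from collections import deque

# Illustrative sketch: when you own the retrieval logic, ordinary API
# safeguards apply. A simple sliding-window rate limiter around a resolver.

class RateLimiter:
    def __init__(self, max_calls: int, per_seconds: float):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.calls = deque()  # timestamps of recent calls

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.per_seconds:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

limiter = RateLimiter(max_calls=3, per_seconds=60.0)

def get_order(order_id: int) -> dict:
    """Resolver the agent calls; rejects calls beyond the rate limit."""
    if not limiter.allow():
        raise RuntimeError("rate limit exceeded; agent must back off")
    return {"id": order_id, "status": "en_route"}
```

Timeouts and query-cost caps would slot in the same way, around the resolver rather than inside the database.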
This is opening a new can of worms of information disclosure. At least one job the AI won't kill is working in security.
MCP is the new IoT, where S stands for security /s
What is the difference between a junior and an agent? Can't you give them smart permissions on a need-to-know basis?
I guess you also need per-user contexts, such that access to user data depends on the user's auth, and the agent can only access that data.
But this same concern exists for employees at big corps. If I work at Google, I probably can't access arbitrary data, so I can't leak it.
Woah, it generates the SQLAlchemy automatically? How does this handle auth/security?
Yep, we can essentially convert from SQLAlchemy into an MCP server.
Auth/security is interesting in MCP. As of yesterday, a new spec was released with MCP servers converted to OAuth resource servers. There's still a lot more work to do on the MCP upstream side, but we're keeping up with it and are going to ship a deeper AuthZ integration once upstream enables it.
Super interesting idea. How feasible would it be to integrate this with Django?
Very! We had quite a few people do this at a hackathon we hosted this past weekend.
That's fantastic to hear. Did they configure Django to use SQLAlchemy as the ORM, or were they able to make it work with Django's?
Currently it would have to be done on the SQLAlchemy side, but someone asked to contribute Django support directly. Let me see if they are still planning to do that, and I'll create/link an issue if you want to keep up with it.
You could also build an EnrichMCP server that calls your Django server manually
Interesting…
> agents query production systems
How do you handle PII or other sensitive data that the LLM shouldn’t know or care about?
That's an odd question. If you have a regular ORM how do you handle sensitive data that your user shouldn't know about? You add some logic or filters so that the user can only query their own data, or other data they have permission to access.
It's also addressed directly in the README. https://github.com/featureform/enrichmcp?tab=readme-ov-file#...
I know LLMs can be scary, but this is the same problem that any ORM or program that handles user data would deal with.
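The filter approach described above reads roughly like this sketch (names illustrative): the resolver injects the acting user's identity, so the agent can only ever see that user's rows.

```python
# Illustrative sketch: row-level filtering in the resolver, so an agent
# acting on behalf of a user can only read that user's own rows.

ORDERS = [
    {"id": 1, "user_id": "alice", "status": "en_route"},
    {"id": 2, "user_id": "bob", "status": "delivered"},
]

def get_order_for_user(order_id: int, *, acting_user: str) -> dict:
    for order in ORDERS:
        if order["id"] == order_id:
            if order["user_id"] != acting_user:
                # The row exists, but this user has no right to it.
                raise PermissionError("users may only query their own orders")
            return order
    raise KeyError(order_id)
```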
> You add some logic or filters so that the user can only query their own data, or other data they have permission to access.
What you are talking about is essentially just row-level security (which is important for tenant separation), while in the case of integrating external service providers, column-level security is a more important factor.
> I know LLMs can be scary, but this is the same problem that any ORM or program that handles user data would deal with.
In most other programs you don't directly plug your database full of PII into an external service provider.
In most other programs you don't have that same problem because the data takes a straight path from DB -> server -> user.
The README repeats an example that makes the user's email available for an agent to query (enabling PII leakage), setting a bad precedent in a space that's already chock-full of vibe coders with no concern for data privacy.
You could implement field-level access controls with attribute decorators that mask PII during serialization, similar to how SQLAlchemy's hybrid_property can transform data before it reaches the agent context.
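For instance (illustrative field names and masking rule, not a built-in EnrichMCP feature), masking at serialization time keeps the raw value out of the agent's context entirely:

```python
# Illustrative sketch: mask PII fields during serialization so the raw value
# never reaches the agent's context. Field names and rule are made up.

PII_FIELDS = {"email", "phone"}

def mask(value: str) -> str:
    """Keep just enough of the value to be recognizable, hide the rest."""
    if "@" in value:
        user, domain = value.split("@", 1)
        return user[0] + "***@" + domain
    return value[:2] + "***"

def serialize_for_agent(record: dict) -> dict:
    return {k: mask(v) if k in PII_FIELDS else v for k, v in record.items()}

row = {"id": 7, "email": "alice@example.com", "status": "delivered"}
safe = serialize_for_agent(row)
```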
Do you provide a Prisma alternative?
Not sure exactly what you mean here. Prisma is an ORM for developers working with databases in TypeScript. EnrichMCP is more like an ORM for AI agents. It’s not focused on replacing Prisma in your backend stack, but it serves a similar role for agents that need to understand and use your data model.
It's also Python.