Over the last couple of months, I’ve been playing around with something that caught my attention in the AI world: MCP, the Model Context Protocol.
If you’ve worked with AI tools like ChatGPT, Claude, or anything that tries to connect to real systems, you’ve probably seen how painful integrations can get. Every tool has its own API, its own authentication flow, and its own data quirks. You want your AI to query a database? That’s one connector. To send an email? Another. To hit an internal service? Completely different game.
We’ve come a long way with AI, but the plumbing behind it still feels manual, inconsistent, and fragile. We shouldn’t still be writing custom glue code every time an AI needs to talk to something.
That’s where MCP comes in.
Model Context Protocol (MCP) is an open standard introduced by Anthropic in late 2024. Think of it as the “USB-C of AI integrations.” Instead of every AI tool having its own adapter, MCP defines one universal way for AI models to talk to other systems. Technically, it’s based on JSON-RPC (a simple, well-understood protocol) and defines how an AI client (like ChatGPT or Claude) communicates with an MCP server, which is just any program that exposes tools, data, or APIs in a standardized way. An MCP server can run locally, in the cloud, or inside your enterprise network. Its job is to expose actions or data the AI can call, such as:

- running a query against a database
- sending an email
- calling an internal service

Each of these is called a tool in MCP terminology.
At its core, MCP uses a client–server model:
The AI host (like ChatGPT, Claude, or an IDE plugin) runs one or more MCP clients. Each client connects to an MCP server, a program that exposes data or tools (like a database, internal API, or cloud service). The two communicate over JSON-RPC, so all requests and responses are structured and predictable.
When the client connects, the two perform a handshake to negotiate protocol versions and capabilities. Once connected, the AI can discover the tools, resources, and prompts that the server exposes. Essentially, the client says, “Hey, here’s who I am and what I support,” and the server replies, “Cool, here’s what I can do.”
Once that’s done, the AI knows exactly which tools or resources it can access and what format they expect.
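As a sketch, the handshake and discovery steps are just small JSON-RPC envelopes. The method names (`initialize`, `tools/list`) follow the MCP spec; the transport (stdio or HTTP) is stubbed out here, and the version string and client name are illustrative:

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Step 1: the client announces its protocol version and capabilities.
init = make_request(1, "initialize", {
    "protocolVersion": "2025-03-26",   # illustrative version string
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
})

# Step 2: after the server replies, the client asks what tools exist.
discover = make_request(2, "tools/list")

print(json.dumps(init))
print(json.dumps(discover))
```

In a real session these messages would be written to the server over stdio or HTTP, and the reply to `tools/list` carries each tool's name, description, and input schema.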
Each MCP server can provide three types of primitives:

- Tools: actions the AI can invoke, like running a query or sending a message
- Resources: data the AI can read, like files, tables, or documents
- Prompts: reusable prompt templates the server offers to the client

Each tool or resource includes a self-describing schema (in JSON Schema format) defining input parameters, data types, and output structures. This keeps communication structured, predictable, and far less error-prone.
Here’s the short version of what happens when an AI connects to an MCP server:
Everything is schema-based, meaning the AI knows exactly what inputs a tool expects and what kind of response it’ll get.
Say a server exposes a tool called database_query. It might advertise an input schema like this:
{
  "type": "object",
  "properties": {
    "query": { "type": "string" }
  },
  "required": ["query"]
}
and an output schema something like:
{
  "type": "object",
  "properties": {
    "rows": {
      "type": "array",
      "items": { "type": "object" }
    }
  }
}
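Because the schema travels with the tool, a careful client can pre-validate its arguments before sending anything. Real hosts use a full JSON Schema validator; the hand-rolled sketch below covers only the checks this particular schema needs (required keys and the string type):

```python
# The input schema as advertised by the hypothetical database_query tool.
input_schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
}

def validate(args, schema):
    """Return a list of problems (an empty list means the args are valid)."""
    problems = []
    for key in schema.get("required", []):
        if key not in args:
            problems.append(f"missing required field: {key}")
    for key, rule in schema.get("properties", {}).items():
        if key in args and rule.get("type") == "string" and not isinstance(args[key], str):
            problems.append(f"{key} must be a string")
    return problems

print(validate({"query": "SELECT 1 FROM dual"}, input_schema))  # []
print(validate({}, input_schema))  # ['missing required field: query']
```

The point isn’t this tiny validator; it’s that the contract is machine-checkable on both ends before any call is made.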
When the agent (via its MCP client) wants to use a tool, it sends a structured call:
{
  "jsonrpc": "2.0",
  "id": 17,
  "method": "tools/call",
  "params": {
    "name": "database_query",
    "arguments": { "query": "SELECT * FROM customers WHERE id = 123" }
  }
}
The server executes that tool (in this case, the SQL query) and responds with:
{
  "jsonrpc": "2.0",
  "id": 17,
  "result": { "rows": [ … ] }
}
Because of strict schemas and version negotiation, communication is type-safe, unambiguous, and predictable.
MCP supports bidirectional messaging. That means the server can also send notifications to the client (e.g., “task complete”, “new data available”) rather than forcing the client to constantly poll. This helps with long-running jobs or event-driven workflows.
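In JSON-RPC terms, a notification is simply a message without an "id", which is what tells the receiver that no reply is expected. A sketch using the spec’s notifications/progress method (the payload values here are illustrative):

```python
import json

def make_notification(method, params):
    """Build a JSON-RPC notification: no "id" means fire-and-forget."""
    return {"jsonrpc": "2.0", "method": method, "params": params}

note = make_notification("notifications/progress", {
    "progressToken": "job-42",  # token the client attached to its request
    "progress": 100,
})

print(json.dumps(note))
```

Because there is no id, the client never blocks waiting on it; it just reacts when the message arrives, which is what makes long-running jobs practical.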
Multiple MCP connections can run in parallel. An AI session (or agent) can connect to several MCP servers at once (say, one for your CRM, one for your ERP, and one for your database) and orchestrate calls across them.
Note: MCP itself does not enforce authentication, authorization, or encryption. Those are layered on externally (e.g., via OAuth, tokens, TLS). The protocol focuses purely on structured, safe interchange.
This is where things start to get exciting.
Oracle implemented an MCP server in a pretty clever way. Instead of building a whole new product, Oracle added an MCP server directly into SQLcl, their command-line tool for Oracle Database.
That means if you’re already using SQLcl (and let’s be honest, most DBAs and developers are), you can have an MCP server sitting right there.
When SQLcl runs in MCP mode, it advertises a set of tools that any AI client can discover and use. In recent SQLcl releases, the key ones include:

- list-connections: lists the saved connections SQLcl knows about
- connect / disconnect: opens or closes a session against one of those connections
- run-sql: executes a SQL statement or script and returns the results
- run-sqlcl: runs SQLcl-specific commands
When an AI agent calls run-sql, SQLcl connects to the database using the credentials stored locally, runs the query (or script), and returns the results or errors in the MCP response. Critically, the agent never touches the database directly; the MCP server acts as a controlled intermediary.
From the AI’s perspective, these tools are no different than any other MCP function. They come with schemas, input parameters, and defined outputs.
So an AI like Claude or ChatGPT (running in an MCP-enabled host) can connect to SQLcl, browse what tools are available, and start using them all in a controlled, auditable way.
Getting started is simple: you launch SQLcl with the -mcp flag (sql -mcp) to start it as an MCP server. There are also examples of configuring SQLcl with Oracle Database Free (in Docker) and connecting an AI client like Claude.
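If the host is Claude Desktop, pointing it at SQLcl is typically a small entry in its MCP config file, along these lines (the server key "sqlcl" is an arbitrary label you choose, and the config file’s location depends on your OS):

```json
{
  "mcpServers": {
    "sqlcl": {
      "command": "sql",
      "args": ["-mcp"]
    }
  }
}
```

On restart, the host spawns sql -mcp as a child process and speaks JSON-RPC to it over stdio.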
One of the best features of Oracle’s MCP server is that it logs every call. Oracle keeps this info in the DBTOOLS$MCP_LOG table, so you can see details such as which tool was invoked, when, and what SQL was actually executed.
That’s gold for compliance and debugging. Make it a habit to review these logs periodically, just like you would for any automated system.
Let’s say a user asks their AI assistant:
“Show me the top five customers by revenue this quarter.”
1. The AI client connects to the Oracle MCP server (SQLcl).
2. It lists available tools and finds run-sql.
3. The AI generates the SQL query:
SELECT customer_id, SUM(amount) AS total
FROM sales
WHERE order_date BETWEEN '2025-07-01' AND '2025-09-30'
GROUP BY customer_id
ORDER BY total DESC FETCH FIRST 5 ROWS ONLY;
4. The client sends this structured request via MCP.
5. SQLcl executes it and returns results in JSON format.
6. The AI uses that live data in its response, no manual copy-paste required.
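Step 4, spelled out, is just another tools/call envelope. The tool name run-sql comes from SQLcl, but the parameter name "sql" below is an assumption for illustration; a real client would use whatever schema the server actually advertises via tools/list:

```python
import json

query = """SELECT customer_id, SUM(amount) AS total
FROM sales
WHERE order_date BETWEEN '2025-07-01' AND '2025-09-30'
GROUP BY customer_id
ORDER BY total DESC FETCH FIRST 5 ROWS ONLY"""

# The same JSON-RPC shape as the earlier database_query example,
# now aimed at SQLcl's run-sql tool. "sql" is an assumed parameter name.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "run-sql", "arguments": {"sql": query}},
}

print(json.dumps(call, indent=2))
```

Nothing about the database workflow required a new protocol feature; it’s the same generic envelope the AI uses for every other tool.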
This is the power of context injection: live, real-time data flowing directly into AI reasoning.
The more I explore MCP, the more it feels like one of those quiet shifts that will change how we build systems, not overnight, but steadily. MCP is to AI what ODBC was to databases: a common language for connectivity. With Oracle baking MCP right into SQLcl, the database world just took a big step toward becoming AI-native. Developers and DBAs gain a new level of flexibility: AI agents that can securely interact with live data, automate analysis, and even generate insights in real time.
As AI adoption accelerates, protocols like MCP will become the backbone of enterprise automation, bridging intelligence and infrastructure. MCP is still young, but it’s the kind of foundation that sticks.