
A 101 into how MCP Is Changing the Way AI Talks to Oracle

Gustavo Rene Antunez Oct 14, 2025 9:34:58 AM

Over the last couple of months, I’ve been playing around with something that caught my attention in the AI world: MCP, the Model Context Protocol.

If you’ve worked with AI tools like ChatGPT, Claude, or anything that tries to connect to real systems, you’ve probably seen how painful integrations can get. Every tool has its own API, its own authentication flow, and its own data quirks. You want your AI to query a database? That’s one connector. To send an email? Another. To hit an internal service? Completely different game.

We’ve come a long way with AI, but the plumbing behind it still feels manual, inconsistent, and fragile. We shouldn’t still be writing custom glue code every time an AI needs to talk to something.

That’s where MCP comes in.

So, What Exactly Is MCP?

Model Context Protocol (MCP) is an open-source standard introduced by Anthropic in late 2024. Think of it as the “USB-C of AI integrations.” Instead of every AI tool having its own adapter, MCP defines one universal way for AI models to talk to other systems.

Technically, it’s based on JSON-RPC (a simple, well-understood protocol) and defines how an AI client (like ChatGPT or Claude) communicates with an MCP server, which is just any program that exposes tools, data, or APIs in a standardized way. An MCP server can run locally, in the cloud, or inside your enterprise network. Its job is to expose actions or data that the AI can call, like:

  • “Run a SQL query.”
  • “Fetch today’s sales numbers.”
  • “Create a calendar event.”

Each of these is called a tool in MCP terminology.

How MCP Works

At its core, MCP uses a client–server model:

  • MCP Host – The environment where the AI agent runs (e.g., a chat interface, IDE, or application).
  • MCP Client – A connector process that communicates between the AI and external systems.
  • MCP Server – A program (running locally or in the cloud) that exposes data, tools, or APIs to the AI client using the MCP standard.

 

The AI host (like ChatGPT, Claude, or an IDE plugin) runs one or more MCP clients. Each client connects to an MCP server, a program that exposes data or tools (like a database, internal API, or cloud service).  The two communicate over JSON-RPC, so all requests and responses are structured and predictable.

When the client connects, the two sides do a handshake, negotiating protocol versions and capabilities. Once connected, the AI can discover the tools, resources, and prompts the server exposes. Basically, the client says, “Hey, here’s who I am and what I support,” and the server replies, “Cool, here’s what I can do.”

Once that’s done, the AI knows exactly which tools or resources it can access and what format they expect.
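
To make that concrete, here’s a simplified sketch of that initialize exchange over JSON-RPC. The field names follow the MCP spec, but the version string, capabilities, and the client/server names here are illustrative:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "my-ai-host", "version": "1.0.0" }
  }
}

The server answers with something like:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-03-26",
    "capabilities": { "tools": {}, "resources": {}, "prompts": {} },
    "serverInfo": { "name": "example-mcp-server", "version": "0.1.0" }
  }
}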

Each MCP server can provide three types of primitives:

  1. Tools – Actions the AI can perform.
    • Example: run a SQL query, send an email, or trigger an API.
  2. Resources – Data sources that the AI can read for additional context.
    • Example: a set of files, database tables, or a knowledge base.
  3. Prompts – Predefined instruction templates that the AI can reuse for specific tasks.
    • Example: a pre-built few-shot prompt for analyzing logs or formatting responses.

Each tool or resource includes a self-describing schema (in JSON Schema format) defining its input parameters, data types, and output structure. This keeps communication structured and predictable.


Here’s the short version of what happens when an AI connects to an MCP server:

  1. Handshake – The AI and the server introduce themselves and agree on what they both support.
  2. Discovery – The AI lists what “tools” or “resources” the server offers.
  3. Execution – The AI picks one and calls it, passing structured data (JSON).
  4. Response – The server executes it and sends back structured results.

Everything is schema-based, meaning the AI knows exactly what inputs a tool expects and what kind of response it’ll get.
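
Discovery itself is just another JSON-RPC call. Here’s a rough sketch of a tools/list exchange, using the same hypothetical database_query tool discussed below:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/list"
}

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "database_query",
        "description": "Run a SQL query against the connected database",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}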

Say a server exposes a tool called database_query. It might advertise an input schema like this:

{
  "type": "object",
  "properties": {
    "query": { "type": "string" }
  },
  "required": ["query"]
}

and an output schema something like:

{
  "type": "object",
  "properties": {
    "rows": {
      "type": "array",
      "items": { "type": "object" }
    }
  }
}

When the agent (via its MCP client) wants to use a tool, it sends a structured call:

{
  "jsonrpc": "2.0",
  "id": 17,
  "method": "tools/call",
  "params": {
    "name": "database_query",
    "arguments": { "query": "SELECT * FROM customers WHERE id = 123" }
  }
}

The server executes that tool (in this case, the SQL query) and responds with:

{
  "jsonrpc": "2.0",
  "id": 17,
  "result": { "rows": [ … ] }
}

Because of strict schemas and version negotiation, communication is type-safe, unambiguous, and predictable.

MCP supports bidirectional messaging. That means the server can also send notifications to the client (e.g., “task complete”, “new data available”) rather than forcing the client to constantly poll. This helps with long-running jobs or event-driven workflows.
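
Notifications are ordinary JSON-RPC messages without an id, so no reply is expected. As a sketch (the method names come from the MCP spec; the progress payload here is illustrative):

{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}

{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": { "progressToken": "job-42", "progress": 50, "total": 100 }
}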

Multiple MCP connections can run in parallel. An AI session (or agent) can connect to several MCP servers at once, for example, one for your CRM, one for your ERP, and one for your database, and orchestrate calls across them.
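
In practice that’s mostly host-side configuration. As a sketch, assuming a Claude Desktop-style config file and made-up server commands, wiring up three servers at once is just three entries:

{
  "mcpServers": {
    "crm": { "command": "crm-mcp-server" },
    "erp": { "command": "erp-mcp-server" },
    "database": { "command": "db-mcp-server" }
  }
}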

Note: MCP itself does not enforce authentication, authorization, or encryption. Those are layered on externally (e.g., via OAuth, tokens, TLS). The protocol focuses purely on structured, safe interchange.

Oracle’s Implementation: SQLcl as an MCP Server

This is where things start to get exciting.

Oracle implemented an MCP server in a pretty clever way. Instead of building a whole new product, Oracle added an MCP server directly into SQLcl, their command-line tool for Oracle Database.

That means if you’re already using SQLcl (and let’s be honest, most DBAs and developers are), you can have an MCP server sitting right there.

What SQLcl Exposes Through MCP

When SQLcl runs in MCP mode, it advertises a set of tools that any AI client can discover and use. Some of the key ones include:

  • list-connections — discover and list saved Oracle DB connections on the host
  • connect/disconnect — open or close a connection
  • run-sql — execute arbitrary SQL or PL/SQL blocks
  • run-sqlcl — execute SQLcl-specific commands or extensions (e.g. DESCRIBE, INFO, LOAD)
  • Others like DDL, DML, schema introspection, etc.

When an AI agent calls run-sql, SQLcl connects to the database using the credentials stored locally, runs the query (or script), and returns results or errors back in the MCP response. Critically, the agent never directly touches the database; the MCP server acts as a controlled intermediary.

From the AI’s perspective, these tools are no different than any other MCP function. They come with schemas, input parameters, and defined outputs.

So an AI like Claude or ChatGPT (running in an MCP-enabled host) can connect to SQLcl, browse what tools are available, and start using them, all in a controlled, auditable way.

Deployment & Setup

  • To use it, you need SQLcl version 25.2 or newer.
  • You configure Oracle DB connections (username, host, service, etc.) saved in SQLcl’s connection store.
  • You launch SQLcl in MCP mode (e.g. sql -mcp) to start it as an MCP server. 
  • Then, AI agents (clients) configured to use that MCP server can connect and issue requests. 

There are also examples out there of configuring SQLcl with Oracle Database Free (running in Docker) and connecting an AI client like Claude to it.
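
Putting it together, a minimal client-side configuration for the SQLcl MCP server (using the same Claude Desktop-style JSON shape as above, and assuming the sql binary is on your PATH; otherwise point command at the full SQLcl path) can look like this:

{
  "mcpServers": {
    "sqlcl": {
      "command": "sql",
      "args": ["-mcp"]
    }
  }
}

Once the host picks this up, the SQLcl tools (list-connections, connect, run-sql, and so on) should show up in the agent’s tool list.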

Benefits & Caveats

Benefits

  • Seamless AI-DB integration: You no longer have to manually move between AI-generated SQL and database execution.
  • Controlled access: Use database credentials, least-privilege roles, audit logs. Oracle logs all MCP-mediated actions (e.g. in DBTOOLS$MCP_LOG). 
  • Developer ergonomics: Use your existing tools (SQLcl, VS Code extension, etc.) with built-in AI reach.
  • Faster iteration: Query, analyze, refine — all within the AI environment without context switching.

Caveats & Risks

  • Because the AI agent can run SQL, you must be extremely careful with privileges.
  • Avoid giving write access to production data. Oracle recommends using sanitized replicas or limited-scope schemas. 
  • Agents still need human supervision; errors or unexpected SQL generation may require manual review. 
  • Security is complex. You must integrate MCP usage with your existing authentication, identity, and auditing frameworks.
  • It’s a relatively new approach, so operational best practices are still emerging.

Audit Everything

One of the best features of Oracle’s MCP server is that it logs every call. Oracle even keeps this info in DBTOOLS$MCP_LOG, so you can see:

  • What tools were called
  • What SQL was run
  • When it happened
  • Which user triggered it

That’s gold for compliance and debugging. Make it a habit to review these logs periodically, just like you would for any automated system.

Example in Action

Let’s say a user asks their AI assistant:

“Show me the top five customers by revenue this quarter.”

1. The AI client connects to the Oracle MCP server (SQLcl).

2. It lists available tools and finds run-sql.

3. The AI generates the SQL query:

SELECT customer_id, SUM(amount) AS total
FROM sales
WHERE order_date BETWEEN '2025-07-01' AND '2025-09-30'
GROUP BY customer_id
ORDER BY total DESC FETCH FIRST 5 ROWS ONLY;

4. The client sends this structured request via MCP.

5. SQLcl executes it and returns results in JSON format.

6. The AI uses that live data in its response, no manual copy-paste required.

This is the power of context injection: live, real-time data flowing directly into AI reasoning.
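
Under the hood, step 4 is the same tools/call pattern shown earlier, just pointed at SQLcl’s run-sql tool. A rough sketch, with the caveat that the exact argument name (sql here) is an assumption, so check the schema the server actually advertises:

{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "run-sql",
    "arguments": {
      "sql": "SELECT customer_id, SUM(amount) AS total FROM sales WHERE order_date BETWEEN '2025-07-01' AND '2025-09-30' GROUP BY customer_id ORDER BY total DESC FETCH FIRST 5 ROWS ONLY"
    }
  }
}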

Conclusion

The more I explore MCP, the more it feels like one of those quiet shifts that will change how we build systems, not overnight, but steadily. MCP is to AI what ODBC was to databases, a common language for connectivity. With Oracle baking MCP right into SQLcl, the database world just took a big step toward becoming AI-native. Developers and DBAs gain a new level of flexibility: AI agents that can securely interact with live data, automate analysis, and even generate insights in real time.

As AI adoption accelerates, protocols like MCP will become the backbone of enterprise automation, bridging intelligence and infrastructure. MCP is still young, but it’s the kind of foundation that sticks.
