Introduction to MCP: Connecting LLMs with Your Applications and Data

A presentation at Codemotion Madrid in May 2025 in Madrid, Spain by Horacio Gonzalez

Slide 1

Introduction to MCP: Connecting LLMs with Your Applications and Data. Horacio Gonzalez, 2025-04-21

Slide 2

Horacio Gonzalez @LostInBrittany Spaniard Lost in Brittany

Slide 3

Clever Cloud Our mission: give more speed to your teams and better quality to your projects

Slide 4

Summary
1. Introduction
2. LLM evolution
3. Model Context Protocol (MCP)
4. Architecture of MCP
5. MCPs are APIs
6. Q&A and discussion

Slide 5

Introduction LLMs are changing software development, they say… how about you?

Slide 6

Why are we talking about this? LLMs are changing development, but individual devs don’t always leverage them

Slide 7

How do you use LLMs for your dev job?
1. Who here has already used an LLM?
2. Who here has already used an LLM professionally?
3. Who here has already used an LLM to assist with code?
4. Who here has already used LLMs programmatically, from code?

Slide 8

How LLMs are changing dev jobs A point of view I find balanced: Addy Osmani https://addyosmani.com/

Slide 9

LLMs come in different flavors. Not all LLMs are created equal: they have different trade-offs in capabilities, accessibility, and control. Choosing the right one depends on your use case, security needs, and infrastructure.

Slide 10

Closed-source LLMs (Cloud-based APIs)
📌 Examples
● OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini), Microsoft (Copilot)
✅ Advantages:
● Powerful and well-trained (best models available)
● Easy to use via APIs
● Regularly updated & improved
❌ Challenges:
● Black box (you don't control how they work)
● Expensive (API calls can add up quickly)
● Data privacy concerns (sending requests to external servers)
💡 When to use?
● If you need the most advanced models and don't mind API costs or external dependencies.
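To make that concrete, here is a minimal sketch of calling a cloud-hosted model from Java with LangChain4j (the same library used later in this deck). The model name is a hypothetical choice, and the builder and chat method names follow the 1.0.0-beta1 API; check them against your version:

//DEPS dev.langchain4j:langchain4j-open-ai:1.0.0-beta1
import dev.langchain4j.model.openai.OpenAiChatModel;

public class CloudLlmExample {
    public static void main(String[] args) {
        // Every request leaves your infrastructure: API cost and data privacy trade-offs apply
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini") // hypothetical model choice
                .build();
        System.out.println(model.chat("In one sentence, what is an LLM?"));
    }
}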

Slide 11

Open-source LLMs (Self- or cloud-hosted)
📌 Examples
● Meta's Llama 3, Mistral, Google's Gemma, Alibaba's Qwen
✅ Advantages:
● Greater control (you know exactly how the model works)
● Can be fine-tuned for specific needs
● No external API costs
❌ Challenges:
● Requires more setup (you have to run the model yourself)
● May not be as powerful as the latest closed models
● Needs infrastructure (e.g., GPUs for hosting)
💡 When to use?
● If you need control over the model & lower costs but are okay with slightly weaker performance

Slide 12

Local models (on your machine or server)
📌 Examples
● Ollama, GGUF-based models (e.g., Llama, Mistral, Mixtral)
✅ Advantages:
● Works offline (great for security-sensitive applications)
● No API costs (completely free to use once set up)
● Low latency (responses are instant if hardware is good)
❌ Challenges:
● Limited by your hardware (needs a strong CPU/GPU)
● Not always as capable as cloud-hosted models
● Setup complexity (installing and optimizing models)
💡 When to use?
● If you need privacy and control, and you have the hardware to run an LLM efficiently
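As an illustration, a minimal LangChain4j sketch talking to a local Ollama instance. It assumes Ollama is running on its default port and that a model named llama3 has been pulled locally:

//DEPS dev.langchain4j:langchain4j-ollama:1.0.0-beta1
import dev.langchain4j.model.ollama.OllamaChatModel;

public class LocalLlmExample {
    public static void main(String[] args) {
        // Everything stays on your machine: no API cost, latency bound by your hardware
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434") // default Ollama endpoint (assumption)
                .modelName("llama3")               // any model you have pulled locally
                .build();
        System.out.println(model.chat("Hello from an offline model!"));
    }
}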

Slide 13

Choosing the Right Model for your Apps
● Cloud APIs
○ Great for rapid development, but costly and not always secure
● Self-hosted open models
○ Best balance for long-term control and scalability
● Local models
○ Best for privacy-sensitive applications

Slide 14

LLM evolution: from simple chat to tool-enhanced agent!
[Diagram: the user asks "What's the weather like in Madrid today?"; the LLM calls a Weather API with getWeather("Madrid (ES)"), gets back {"weather":"sunny","temperature":"1.8ºC"}, and answers "Today it is sunny in Madrid, but very cold, take a coat."]

Slide 15

LLMs are only language models. Asked "What's the weather like in Madrid today?", a plain LLM can only reply "I'm unable to provide real-time information or current weather updates." They have no built-in way to use external tools or real-time data.

Slide 16

Tools and plugins were added. The LLM recognizes it needs an external function and calls it, integrating the result into a natural-language response.
[Diagram: "What's the weather like in Madrid today?" → Weather API getWeather("Madrid (ES)") → {"weather":"sunny","temperature":"1.8ºC"} → "Today it is sunny in Madrid, but very cold, take a coat."]

Slide 17

LLMs don't call those tools directly. The host application orchestrates the exchange:
1. User: "What's the weather like in Madrid today?"
2. App → LLM: the question, plus "If needed, you have an available weather tool: getWeather(city)"
3. LLM → App: Call getWeather("Madrid")
4. App → Weather API: getWeather("Madrid"), which returns {"weather":"sunny","temperature":"1.8ºC"}
5. App → LLM: Result of the tool calling: {"weather":"sunny","temperature":"1.8ºC"}
6. LLM → User: "Today it is sunny in Madrid, but very cold, take a coat."

Slide 18

How are those LLM Tools defined? Here in Java, using LangChain4j:

LyingWeatherTool.java

//DEPS dev.langchain4j:langchain4j:1.0.0-beta1
import dev.langchain4j.agent.tool.Tool;

public class LyingWeatherTool {

    @Tool("A tool to get the current weather in a city")
    public static String getWeather(String city) {
        return "The weather in " + city + " is sunny and hot.";
    }
}
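To see the tool in action, here is a hedged sketch of wiring LyingWeatherTool into a LangChain4j AiService. The Assistant interface and the model choice are illustrative, and the method names follow the 1.0.0-beta1 API:

//DEPS dev.langchain4j:langchain4j:1.0.0-beta1
//DEPS dev.langchain4j:langchain4j-open-ai:1.0.0-beta1
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

public class WeatherAssistant {

    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini") // hypothetical model choice
                .build();

        // LangChain4j sends the @Tool description to the LLM; when the model asks
        // for getWeather, the framework invokes it and feeds the result back
        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .tools(new LyingWeatherTool())
                .build();

        System.out.println(assistant.chat("What's the weather like in Madrid today?"));
    }
}

Asked about Madrid, this assistant will confidently report sunny and hot weather, which is exactly why the tool is called lying.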

Slide 19

Why does this matter?
● Moves LLMs from static text generation to dynamic system components
● Increases accuracy & real-world usability
● Allows developers to control what the LLM can access

Slide 20

From LLM chats to LLM-powered agents. "Can you summarize this YouTube video?" "Of course, the video is a talk by Horacio about MCP…" *This is a "fake" view: remember, LLMs don't call tools directly, but it's the view from the point of view of the user. LLMs act like an agent that can plan actions: search the web, run some code, then answer.

Slide 21

Model Context Protocol (MCP): The missing link. MCP bridges LLMs with your applications, enabling controlled, real-world interactions.

Slide 22

Why Do We Need MCP? LLM function calling is useful, but it lacks structure

Slide 23

Why Do We Need MCP? Problem:
● LLMs don't automatically know what functions exist.
● No standard way to expose an application's capabilities.
● Hard to control security and execution flow.
● Expensive and fragile integration spaghetti.

Slide 24

Model Context Protocol. Anthropic, November 2024: LLM intelligence isn't the bottleneck, connectivity is.

Slide 25

Model Context Protocol De facto standard for exposing system capabilities to LLMs https://modelcontextprotocol.io/

Slide 26

How MCP works
● Applications define an MCP manifest (structured JSON).
● The manifest describes available functions, input/output formats, and security policies.
● LLMs can discover and request function execution safely.
[Diagram: a Weather MCP Server exposing its capabilities]
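As a sketch of what that discovery looks like on the wire, here is the kind of tool description a hypothetical Weather MCP Server could return to a tools/list request. The field names (name, description, inputSchema) follow the MCP specification; the tool itself is invented for this example:

{
  "tools": [
    {
      "name": "getWeather",
      "description": "Get the current weather in a city",
      "inputSchema": {
        "type": "object",
        "properties": {
          "city": { "type": "string", "description": "City name" }
        },
        "required": ["city"]
      }
    }
  ]
}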

Slide 27

MCP is provider-agnostic. Works with any LLM provider. Ensures standardized function exposure across platforms.

Slide 28

MCP solves integration spaghetti

Slide 29

The architecture of MCP: clients, servers, protocol and transports; tools, resources and prompts.

Slide 30

MCP Servers: APIs in natural language. A new kind of API.

Slide 31

MCP Clients: on the AI assistant or app side. One MCP client per MCP server.

Slide 32

MCP Protocol & Transports MCP Protocol Follow the JSON-RPC 2.0 specification MCP Transports ● STDIO (standard I/O) ○ Client and server in the same instance ● HTTP with SSE transport (deprecated) ● Streamable HTTP ○ Servers SHOULD implement proper authentication for all connections

Slide 33

Full MCP architecture

Slide 34

Services: tools, resources & prompts
● Tools
○ Standardized way to expose functions that can be invoked by clients
● Resources
○ Standardized way to expose resources to clients
○ Each resource is uniquely identified by a URI
● Prompts
○ Standardized way to expose prompt templates to clients
○ Structured messages and instructions for interacting with LLMs
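For instance, reading a resource is also a plain JSON-RPC exchange. The URI below is invented; the method name and result shape follow the MCP specification:

Request:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "weather://madrid/current" }
}

Response:
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "contents": [
      {
        "uri": "weather://madrid/current",
        "mimeType": "application/json",
        "text": "{\"weather\":\"sunny\",\"temperature\":\"1.8ºC\"}"
      }
    ]
  }
}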

Slide 35

MCPs are APIs. And they should be architected in a similar way.

Slide 36

Let’s use an example: RAGmonsters https://github.com/LostInBrittany/RAGmonsters

Slide 37

RAGmonsters PostgreSQL Database

Slide 38

We want to allow LLMs to query it. Two options:
● A generic PostgreSQL MCP server
● A custom-made MCP server tailored for RAGmonsters
Which one to choose?

Slide 39

Generic PostgreSQL MCP server
Using the PostgreSQL MCP Server:
● A Resource that gives the table schema for each table: /schema
● A Tool that allows running SQL queries: query
The LLM can discover which tables exist and what their structure is, and then query them.
Implementation: https://github.com/CleverCloud/mcp-pg-example
PostgreSQL MCP Server: https://github.com/modelcontextprotocol/servers/tree/main/src/postgres
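With the generic server, the LLM has to write the SQL itself. A call could look like this sketch, where the argument name sql is an assumption about this server's input schema and the table name is invented:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": { "sql": "SELECT name, type FROM monsters LIMIT 10" }
  }
}

Flexible, but the model can also produce wrong or unsafe SQL, which is exactly the trade-off the next slide addresses.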

Slide 40

Custom-made RAGmonsters MCP server
Coding an MCP server tailored for it, offering targeted tools:
● getMonsterByName: fetches detailed information about a monster.
● listMonstersByType: lists monsters of a given type.
Benefits:
● Easy, intuitive interactions for LLMs.
● Optimized for specific use cases.
● Secure (no raw SQL).
Implementation: https://github.com/LostInBrittany/RAGmonsters-mcp-pg
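With the custom server, the same intent becomes a single, safe, named call. The monster name below is a made-up placeholder:

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "getMonsterByName",
    "arguments": { "name": "Flamehorn" }
  }
}

No SQL crosses the boundary: the server validates the argument and runs its own queries internally, which is what makes this design safer.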

Slide 41

How to choose?

Slide 42

Conclusion
● Generic MCP servers: quick to set up, flexible, but less efficient and more error-prone.
● Domain-specific MCP servers: safer and faster for targeted tasks, but need more upfront design.
● Choose wisely: use generic for exploration, domain-specific for production.
A bit like for REST APIs, isn't it?

Slide 43

That’s all, folks! Thank you all!