The Model Context Protocol (MCP) has gained substantial attention in the AI development community, not just for its promise of interoperability but also for its potential in agentic architectures.
Many practitioners, though, reduce MCP to a simple tool-execution layer, an oversimplification that masks its true depth.
The Surface: Tools Are Just the Beginning
Most developers currently focus on the “tools” component of the MCP protocol, which enables interaction with APIs, such as querying databases, sending messages via Slack, or retrieving user information. That is not to say these tools are not useful.
They mimic RESTful API capabilities, but through a language-model-aware protocol. However, judging MCP only by this layer is like evaluating a smartphone solely by its calculator app.
The Full Scope of the Model Context Protocol (MCP): Five Core Capabilities
The MCP protocol defines five core capabilities. To illuminate each one, we will examine real-world examples and their practical significance:
Tools: API wrappers to perform actions.
Resources: Shared data elements (e.g., files, URIs) between clients and servers.
Prompts: Parameterized templates for guiding LLM generation.
Sampling: Delegates LLM generation to clients, allowing contextual synthesis and human-in-the-loop workflows.
Roots: Define operational boundaries and constraints for data and actions.
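Under the hood, MCP messages travel as JSON-RPC 2.0 requests, and each capability corresponds to a family of methods. The grouping below is an illustrative orientation (method names come from the MCP specification; the dictionary itself is just a sketch, not an API):

```python
# Illustrative mapping of MCP capabilities to representative
# JSON-RPC method names from the MCP specification.
CAPABILITY_METHODS = {
    "tools": ["tools/list", "tools/call"],
    "resources": ["resources/list", "resources/read"],
    "prompts": ["prompts/list", "prompts/get"],
    "sampling": ["sampling/createMessage"],  # server -> client request
    "roots": ["roots/list"],                 # server -> client request
}

for capability, methods in CAPABILITY_METHODS.items():
    print(f"{capability}: {', '.join(methods)}")
```

Note that sampling and roots flow in the opposite direction from tools, resources, and prompts: the server asks, the client answers.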
Tools: Operational Action Executors
Example: A Slack MCP server allows an AI agent to list channels, query messages, or send DMs.
Use Case: Ideal for workflow automation bots that interact with teams across communication platforms, allowing easy integration with productivity tools.
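Concretely, a tool invocation is just a JSON-RPC request. The sketch below builds a hypothetical `tools/call` payload for a Slack-style server; the tool name `slack_post_message` and its arguments are invented for illustration:

```python
import json

def build_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request for MCP's tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and arguments, for illustration only.
req = build_tool_call(1, "slack_post_message",
                      {"channel": "#devops", "text": "Daily report is ready."})
print(json.dumps(req, indent=2))
```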
Resources: Contextual Anchors for Data Exchange
Resources in MCP are abstract definitions of data elements that clients and servers can exchange. For instance, a server can expose local files or database object references via URIs such as file:// or postgres://.
These references allow both sides to work with specific, predefined data sets, ensuring consistency and efficiency without manual configuration.
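One way to picture this is a registry keyed by URI. The sketch below is a toy in-memory version of the list/read pattern; the URIs and their contents are invented for illustration:

```python
# Toy in-memory resource registry keyed by URI (illustrative only).
RESOURCES = {
    "file:///logs/build.log": "build ok in 42s",
    "postgres://reports/daily_changes": "17 rows changed today",
}

def list_resources():
    """Mimic resources/list: advertise the available URIs."""
    return sorted(RESOURCES)

def read_resource(uri):
    """Mimic resources/read: return the content behind a known URI."""
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    return RESOURCES[uri]

print(list_resources())
print(read_resource("file:///logs/build.log"))
```

Because every participant refers to the same URI, there is never ambiguity about which file or data set is meant.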
Prompts: Server-Side Intelligence, Client-Side Power
Example: A legal department deploys an MCP server that provides templated legal document drafts. A prompt might be “Generate an NDA agreement for [Company A] and [Company B] for a term of [X] months.”
Use Case: Every client in the company (web, mobile, CLI) can use this standard template, eliminating manual drafting and reducing errors.
Prompts are server-side features that deliver parameterized templates to clients. When a server hosts these templates, they can be reused across multiple clients, ensuring consistent, high-quality LLM outputs without writing custom prompts every time.
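A minimal sketch of that server-side templating, using the NDA example above (the field names `company_a`, `company_b`, and `months` stand in for the bracketed blanks and are an illustrative naming choice):

```python
import string

# Server-side prompt template; field names mirror the bracketed
# blanks in the NDA example above.
NDA_TEMPLATE = string.Template(
    "Generate an NDA agreement for $company_a and $company_b "
    "for a term of $months months."
)

def render_prompt(template, **params):
    """Fill a parameterized template; fails loudly on missing fields."""
    return template.substitute(**params)

prompt = render_prompt(NDA_TEMPLATE, company_a="Acme",
                       company_b="Globex", months=12)
print(prompt)
```

Because the template lives on the server, every client (web, mobile, CLI) renders the same wording from the same source of truth.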
The Client Side: More Than Just a Caller
MCP also empowers the client to do more than just relay calls.
Sampling: The Magic of Local LLM Generation
Example: A research tool aggregates data from multiple databases via MCP servers and synthesizes a summary using a local LLM.
Use Case: Maintains privacy, allows custom context, and supports human review loops.
Sampling reverses the usual call direction: the server asks the client to run LLM generation over the collected data, so synthesis happens with the client's own model and context rather than relying on external processing.
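The sketch below fakes that handshake in-process: a `sampling/createMessage`-shaped request arrives from the server, and the client answers with a completion. The `local_model` function is a stand-in, not a real LLM, and the message shape is simplified for illustration:

```python
def local_model(prompt):
    """Stand-in for the client's own LLM; returns a canned summary."""
    return f"Summary of {len(prompt.split())} words of gathered context."

def handle_sampling_request(request):
    """Client-side handler for a server's sampling/createMessage request."""
    # A human-in-the-loop step could review `request` here before generating.
    text = request["params"]["messages"][0]["content"]["text"]
    return {"role": "assistant",
            "content": {"type": "text", "text": local_model(text)}}

server_request = {
    "method": "sampling/createMessage",
    "params": {"messages": [
        {"role": "user", "content": {
            "type": "text",
            "text": "Summarize today's database and Slack activity."}}
    ]},
}
print(handle_sampling_request(server_request)["content"]["text"])
```

The key design point: the server never needs model credentials or direct model access, and the client can insert review or redaction before any generation happens.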
Roots: Clear-Cut Operational Boundaries
Example: A code assistant sets the MCP root to a specific folder.
Use Case: Ensures the server doesn’t access files outside the designated scope.
Roots define where and what the server is allowed to touch, safeguarding data boundaries and protecting user trust.
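A client can enforce roots with a simple normalized-prefix check before anything touches the filesystem. A minimal sketch, assuming POSIX-style paths and an invented root value:

```python
import posixpath

def is_within_roots(path, roots):
    """Return True only if `path` stays inside one of the declared roots."""
    norm = posixpath.normpath(path)  # collapses ".." traversal attempts
    for root in roots:
        root = posixpath.normpath(root)
        if norm == root or norm.startswith(root + "/"):
            return True
    return False

ROOTS = ["/repo/sprint"]  # illustrative root directory
print(is_within_roots("/repo/sprint/app.py", ROOTS))
print(is_within_roots("/repo/sprint/../hr/pay.csv", ROOTS))
```

Note that the check normalizes first, so a path that tries to escape via `..` is rejected even though it textually starts with the root.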
A Real-Life Workflow That Combines All Five MCP Capabilities
Let’s put it all together with a practical scenario:
Imagine a software company wants to automate the process of generating daily engineering reports using an AI-powered assistant.
Here’s how each part of MCP makes the workflow both powerful and safe:
Tools – Getting the Right Information
What happens:
The assistant uses built-in connections (tools) to fetch data from services like Slack (to see what was discussed) and PostgreSQL (to get information from databases).
Simple Example:
The AI bot pulls messages from the team’s Slack channel to see what’s been worked on, and checks the database for new code changes.
Why this matters:
Without tools, the assistant couldn’t gather all the information needed for a useful report.
Resources – Pointing to the Right Data
What happens:
Instead of uploading or copying files everywhere, the AI can refer directly to important files, like code logs or change reports, using file paths or database links.
Simple Example:
It might use a direct link to the code changes made today, or point to a log file from a database, rather than copying everything around.
Why this matters:
This avoids confusion about “which file” or “which data” to use. Everyone is on the same page, and the assistant always knows what’s current.
Prompts – Using Ready-Made Templates
What happens:
The server gives the assistant a template for the report, like a fill-in-the-blanks form with fields for things like [file_group] or [channel].
Simple Example:
Instead of writing the report from scratch every day, the assistant fills out a report template:
“Today, changes were made in [file_group] and discussed in [channel].”
Why this matters:
This ensures every report is structured the same way, is easy to read, and nothing is accidentally missed.
Sampling – Local Report Generation for Privacy and Customization
What happens:
The assistant writes the report locally using its own AI, combining the data it collected. This can happen on the company's own machines rather than being sent to a third party.
Simple Example:
The report draft is created on your machine, not in the cloud, so sensitive info stays inside the company.
Why this matters:
Keeps private data safe, lets the team review or edit before sending, and can even tailor the report using company-specific language.
Roots – Setting Boundaries for Security
What happens:
The system is told exactly where it's allowed to look and what it can touch, such as which folders or Slack channels. It can't wander off and access things it shouldn't.
Simple Example:
The assistant only looks at code in the current “Sprint” folder and messages in the #DevOps Slack channel, not in private messages or HR files.
Why this matters:
Protects sensitive info and respects privacy, with no surprises or accidental leaks.
Putting it all together
Without tools, you can't gather information.
Without resources, you don't know which data to use.
Without prompts, you get messy, inconsistent reports.
Without sampling, you lose privacy and flexibility.
Without roots, you risk leaking or corrupting the wrong data.
All five features combine to give you automation that’s smart, safe, and genuinely useful, not just a fancy API call!
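Stitched together, the daily-report flow reads as a short pipeline. Everything below is a self-contained sketch: the data, the tool stub, the template, and the model stand-in are all invented to show how the five pieces interlock, not a real MCP client:

```python
import posixpath
import string

ROOTS = ["/repo/sprint"]                       # roots: allowed boundary

def within_roots(path):
    """Enforce roots with a normalized-prefix check."""
    norm = posixpath.normpath(path)
    return any(norm == r or norm.startswith(r + "/") for r in ROOTS)

def fetch_slack_messages(channel):             # tools: stubbed Slack fetch
    return ["Merged the auth refactor", "Fixed flaky CI job"]

# resources: a URI-keyed reference instead of copied files
RESOURCES = {"file:///repo/sprint/changes.log": "3 files changed"}

REPORT_TEMPLATE = string.Template(             # prompts: shared template
    "Today, changes were made in $file_group and discussed in $channel."
)

def local_model(prompt, context):              # sampling: local stand-in model
    return prompt + " Highlights: " + "; ".join(context)

log_uri = "file:///repo/sprint/changes.log"
assert within_roots(log_uri.removeprefix("file://"))  # check roots first

messages = fetch_slack_messages("#devops")
prompt = REPORT_TEMPLATE.substitute(file_group="sprint/", channel="#devops")
report = local_model(prompt, messages + [RESOURCES[log_uri]])
print(report)
```

Each capability does one job: roots gate access, tools and resources supply the facts, the prompt fixes the structure, and sampling keeps generation local.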
The community gap
Despite this richness, most client implementations today unfortunately support only tools, which has created a gap in understanding and utility.
Features like prompt retrieval or sampling are often buried in poor UI/UX, making them hard to discover or test. Consequently, the full promise of MCP remains untapped. The optimistic part: as awareness grows and implementations improve, we can expect these capabilities to become more accessible and meaningful for the AI community.
