I cover the basics of MCP (setup with Cursor, Windsurf, OpenAI Agents, Claude Desktop), but also a few more advanced tricks:
Best practices for browser navigation (using Playwright)
A custom GitHub MCP service so you can add or remove collaborators
A custom LiteLLM MCP agent that is capable of handling image responses (like screenshots); Cursor, Windsurf, and OpenAI Agents cannot do this yet.
Cheers, Ronan
P.S. The scripts are in the ADVANCED-inference repo:
Video Links:
Trelis Links:
🤝 Are you a talented developer? Work for Trelis
💡 Need Technical or Market Assistance? Book a Consult Here
💸 Starting a New Project/Venture? Apply for a Trelis Grant
TIMESTAMPS:
0:00 What is MCP (Model Context Protocol)?
0:46 Table of Contents
1:46 How does MCP work?
5:18 What problem does MCP solve?
9:22 Finding the scripts for this video (https://trelis [dot] com/ADVANCED-inference)
10:03 How to inspect an MCP tool service?
16:56 How to use MCP with Cursor?
20:33 Browser navigation with MCP using Playwright or Puppeteer
23:09 How to use MCP with GitHub to grant repo access
25:01 How to use MCP with Stripe
27:50 Running a remote MCP server with SSE
30:45 How to use MCP with Windsurf?
32:33 How to use MCP with Claude Desktop App?
35:58 How to use MCP with OpenAI agents?
42:41 How to use MCP with a Custom LLM? (via LiteLLM, and supporting image responses)
55:22 Conclusion
Model Context Protocol (MCP): A Technical Guide to Tool Integration for LLMs
Model Context Protocol (MCP) enables language models to interact with external tools and services through a standardized interface. This article explains how MCP works, its key components, and implementation approaches across different platforms.
Core Concepts
MCP provides a protocol for:
Defining tool specifications and capabilities
Handling tool calls from language models
Processing tool responses and feeding results back to LLMs
Supporting additional features like resources and prompts (though tools are the primary use case)
The protocol can operate in two modes:
Local execution via standard input/output
Remote execution through server-sent events
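Under the hood, both modes carry the same JSON-RPC 2.0 messages; only the transport differs. The sketch below builds the two core requests (listing tools, then calling one) as plain Python dicts. The tool name and arguments are hypothetical; the method names follow the MCP specification.

```python
import json

# Sketch of the JSON-RPC 2.0 envelopes exchanged between an MCP client
# and server (over stdio locally, or server-sent events remotely).

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope as used by MCP."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# A client first asks the server which tools it offers...
list_tools = make_request(1, "tools/list")

# ...then invokes one, passing arguments that match the tool's schema.
# "take_screenshot" is a hypothetical tool name for illustration.
call_tool = make_request(2, "tools/call", {
    "name": "take_screenshot",
    "arguments": {"url": "https://example.com"},
})

print(json.dumps(list_tools))
print(json.dumps(call_tool))
```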
Technical Implementation
The basic flow works as follows:
Tool Service Setup:
MCP tool service stores API endpoints and available actions
Tool information is injected into LLM context
LLM uses this information to make appropriate tool calls
Tool Call Process:
LLM generates structured tool call based on user input
Call is sent to MCP service for execution
Response is returned to LLM's context window
LLM processes response and continues conversation
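The four steps above can be sketched as a simple loop. Everything here is a stand-in: `fake_llm` and `fake_mcp_service` are stubs for a real model API and a real MCP tool service, used only to show the control flow.

```python
# Minimal sketch of the tool-call loop: the LLM emits a tool call, the
# MCP service executes it, the result goes back into the context, and
# the LLM continues. Both functions below are hypothetical stubs.

def fake_llm(messages, tools):
    """Stub LLM: calls the first tool once, then answers in plain text."""
    if any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "The repo has 3 collaborators."}
    return {"role": "assistant",
            "tool_call": {"name": tools[0]["name"], "arguments": {"repo": "demo"}}}

def fake_mcp_service(name, arguments):
    """Stub MCP service: executes the named tool and returns its result."""
    return {"collaborators": 3}

def run_turn(user_input, tools):
    messages = [{"role": "user", "content": user_input}]
    reply = fake_llm(messages, tools)
    while "tool_call" in reply:
        call = reply["tool_call"]
        result = fake_mcp_service(call["name"], call["arguments"])  # execute the call
        messages.append({"role": "tool", "content": str(result)})   # feed result back
        reply = fake_llm(messages, tools)                           # continue the turn
    return reply["content"]

tools = [{"name": "list_collaborators"}]
print(run_turn("How many collaborators?", tools))  # -> The repo has 3 collaborators.
```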
Supported Platforms
MCP integration is available on several platforms:
Cursor:
Configured via an MCP.json file
Supports multiple concurrent tool services
Requires API keys for services like GitHub/Stripe
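For reference, a Cursor MCP.json typically looks like the fragment below, with one entry per tool service under "mcpServers". The token placeholder is yours to fill in; the GitHub server package name is the commonly used community one and may differ in your setup.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```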
Windsurf:
Similar configuration to Cursor
Tools displayed directly in chat interface
Cannot yet handle image responses (e.g. screenshots)
Claude Desktop:
Configuration through Desktop Config file
Free tier has context length limitations
Roughly 30-tool limit on the free plan (a consequence of context length limits, which vary with how busy the service is)
Current Limitations
Some technical constraints exist:
Image handling: Screenshots and visual feedback not currently supported in Cursor/Windsurf
Context limits: Free tiers may restrict number of concurrent tools
API requirements: Services like GitHub/Stripe require valid API keys/tokens
Custom integration: Requires additional code for direct LLM integration
Custom Integration
For direct LLM integration, developers need to:
Convert MCP tool specifications to LLM-compatible format
Transform LLM tool calls to MCP protocol format
Handle tool responses appropriately
Manage image responses if supported by the LLM
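The first conversion step can be illustrated as follows: an MCP tool spec (as returned by tools/list, with name, description, and inputSchema fields) is reshaped into the OpenAI-style function-calling format that most LLM APIs, including LiteLLM, accept. The example tool below is hypothetical.

```python
# Sketch: convert an MCP tool specification into OpenAI-style
# function-calling format for a custom LLM integration.

def mcp_tool_to_openai(tool):
    """Map MCP tool fields onto the OpenAI tools-parameter shape."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP calls the JSON Schema for arguments "inputSchema";
            # OpenAI calls it "parameters".
            "parameters": tool.get("inputSchema",
                                   {"type": "object", "properties": {}}),
        },
    }

mcp_tool = {
    "name": "add_collaborator",
    "description": "Grant a user access to a GitHub repository.",
    "inputSchema": {
        "type": "object",
        "properties": {"repo": {"type": "string"},
                       "username": {"type": "string"}},
        "required": ["repo", "username"],
    },
}
print(mcp_tool_to_openai(mcp_tool)["function"]["name"])  # -> add_collaborator
```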
Available Tools
Common MCP tools include:
Browser navigation (Playwright/Puppeteer)
GitHub repository management
Stripe payment processing
Custom tool services
Each tool requires proper configuration and appropriate access credentials for successful integration.
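A small pre-flight check helps here: verify that the credentials a service needs are actually set before launching it, rather than debugging a failed tool call later. The environment variable names below are illustrative (the GitHub one matches the common community server; adjust for your services).

```python
import os

# Hypothetical pre-flight check: confirm each MCP service's required
# credentials are present in the environment before launching it.

REQUIRED_ENV = {
    "github": ["GITHUB_PERSONAL_ACCESS_TOKEN"],
    "stripe": ["STRIPE_SECRET_KEY"],
}

def missing_credentials(service, env=os.environ):
    """Return the required variables that are not set for a service."""
    return [var for var in REQUIRED_ENV.get(service, []) if var not in env]

# With an empty environment, the GitHub token is reported as missing.
print(missing_credentials("github", env={}))  # -> ['GITHUB_PERSONAL_ACCESS_TOKEN']
```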
Technical Requirements
To implement MCP:
Node.js/NPM for running services
API keys for relevant services
Environment variables for credentials
Proper configuration files for chosen platform
Additional dependencies based on specific tools
MCP provides a structured way to extend LLM capabilities through external tools, though implementation details vary by platform and specific use case.