MCP use and customization
As AI systems improve, they remain limited by their training data and cannot access real-time information or specialized tools on their own. The Model Context Protocol (MCP) addresses this by letting AI models connect to outside data sources, tools, and environments, enabling smooth sharing of information and capabilities between AI systems and the wider digital world. This standard, created by Anthropic to unify prompts, context, and tool use, is key to building truly useful AI experiences. Custom tools can currently be configured using the MCP standard.
MCP servers can be added to hub Assistants using `mcpServers` blocks. You can explore available MCP server blocks on the Continue Hub.
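As a minimal sketch, assuming the hub block reference syntax (`uses:`), a hub MCP server block can be pulled into an assistant's `config.yaml` like this. The block slug below is a placeholder, not a real published block:

```yaml
# Sketch: reference an MCP server block published on the Continue Hub.
# "example-org/example-mcp" is a placeholder slug; browse the hub for real blocks.
mcpServers:
  - uses: example-org/example-mcp
```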
Below is a quick example of setting up a new MCP server for use in your assistant:
1. Create a folder called `.continue/mcpServers` at the top level of your workspace.
2. Add a file called `playwright-mcp.yaml` to this folder.
3. Write the following contents to `playwright-mcp.yaml` and save.
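A minimal sketch of those contents, assuming the published `@playwright/mcp` npm package as the server implementation (the block metadata fields and names are illustrative):

```yaml
# .continue/mcpServers/playwright-mcp.yaml
name: Playwright MCP server
version: 0.0.1
schema: v1
mcpServers:
  - name: Browser automation        # display name shown in Continue
    command: npx                    # launch the server via npx
    args:
      - "@playwright/mcp@latest"    # Playwright MCP server package
```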
Now test your MCP server by prompting it with a browsing task.
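The exact wording is flexible; any prompt that asks the agent to open a page and write what it finds to `hn.txt` will exercise the Playwright tools. For example:

```text
Open the browser, go to Hacker News, grab the titles of the top posts,
and save them to a file called hn.txt.
```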
The result will be a generated file called `hn.txt` in the current working directory.
You can set up an MCP server to search the Continue documentation directly from your agent. This is particularly useful for getting help with Continue configuration and features.
For complete setup instructions, troubleshooting, and usage examples, see the Continue MCP Reference.
To set up your own MCP server, read the MCP quickstart and then create an `mcpServers` block or add a local MCP server block to your config file:
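A minimal sketch of a local `mcpServers` block in `config.yaml`; the server name, command, and arguments below are placeholders, so substitute whichever MCP server you want to run:

```yaml
mcpServers:
  - name: SQLite MCP              # display name shown in Continue
    command: uvx                  # command used to launch the server
    args:
      - mcp-server-sqlite         # MCP server package to run (example)
      - --db-path
      - /path/to/your/database.db # placeholder path
```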
MCP blocks follow the established syntax for blocks, with a few additional properties specific to MCP servers.
- `name` (required): A display name for the MCP server.
- `command` (required): The command to run to start the MCP server.
- `type` (optional): The type of the MCP server: `sse`, `stdio`, or `streamable-http`.
- `args` (optional): Arguments to pass to the command.
- `env` (optional): Secrets to be injected into the command as environment variables.

MCP now supports remote server connections through HTTP-based transports, expanding beyond the traditional local stdio transport. This enables integration with cloud-hosted MCP servers and distributed architectures.
For real-time streaming communication, use the SSE transport:
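A sketch of an SSE server entry, assuming a `url` property for remote transports; the endpoint is a placeholder:

```yaml
mcpServers:
  - name: Remote SSE Server            # display name
    type: sse                          # Server-Sent Events transport
    url: https://example.com/mcp/sse   # placeholder endpoint
```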
For local MCP servers that communicate via standard input and output:
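A sketch of a stdio server entry, using the `@modelcontextprotocol/server-filesystem` npm package purely as an illustration:

```yaml
mcpServers:
  - name: Filesystem MCP               # display name
    type: stdio                        # communicate over stdin/stdout
    command: npx                       # launch via npx
    args:
      - -y
      - "@modelcontextprotocol/server-filesystem"
      - /path/to/allowed/directory     # placeholder directory the server may access
```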
For standard HTTP-based communication with streaming capabilities:
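A sketch of a streamable HTTP server entry, again assuming a `url` property and a placeholder endpoint:

```yaml
mcpServers:
  - name: Remote HTTP Server           # display name
    type: streamable-http              # streamable HTTP transport
    url: https://example.com/mcp       # placeholder endpoint
```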
These remote transport options allow you to connect to MCP servers hosted on remote infrastructure, enabling more flexible deployment architectures and shared server resources across multiple clients.
For detailed information about transport mechanisms and their use cases, refer to the official MCP documentation on transports.
With some MCP servers you will need to use API keys or other secrets. You can leverage locally stored environment secrets as well as hosted secrets in the Continue Hub. To use Hub secrets, reference the `inputs` property in your MCP `env` block instead of `secrets`.
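A sketch of how this looks in practice, assuming Continue's `${{ secrets.NAME }}` / `${{ inputs.NAME }}` templating and using the GitHub MCP server package as an illustration; the secret names are placeholders:

```yaml
mcpServers:
  - name: GitHub MCP
    command: npx
    args:
      - -y
      - "@modelcontextprotocol/server-github"
    env:
      # Locally stored environment secret:
      GITHUB_PERSONAL_ACCESS_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      # Or, for a secret hosted on the Continue Hub, use inputs instead:
      # GITHUB_PERSONAL_ACCESS_TOKEN: ${{ inputs.GITHUB_TOKEN }}
```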