Model Context Protocol (MCP) remote servers let you expose any self-hosted toolchain to Lumo agents without rebuilding or redeploying core logic. A remote server speaks the MCP spec, advertises available tools, and streams observations back to the agent over a secure channel.
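
For illustration, a remote server can be quite small. The sketch below assumes the official MCP Python SDK (the mcp package) and an invented lookup_order tool; any tool registered this way is advertised to the agent when the MCP session starts:

# A minimal remote MCP server sketch using the official Python SDK
# (pip install mcp). The server name and tool are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-tools")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Fetch an order record from an internal system."""
    # Replace with a call to your own database or API; the agent only
    # sees the tool's name, schema, and return value.
    return f"Order {order_id}: status=shipped"

if __name__ == "__main__":
    # Serve over SSE so remote agents can connect; "stdio" also works
    # for locally launched servers.
    mcp.run(transport="sse")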

When to use remote servers

  • Keep sensitive infrastructure (databases, internal APIs, CLIs) behind your own network perimeter
  • Connect to external services that expose prompts, tools, or databases

Use a remote server in a task

Reference the registered server when you call the POST /run endpoint. The agent automatically negotiates the MCP session before executing any tool calls.
{
    "task": "What are my Linear issues?",
    "model": "gpt-4.1-mini",
    "base_url": "https://api.openai.com/v1/chat/completions",
    "max_steps": 3,
    "agent_type": "mcp",
    "mcp_servers": [
        {
            "command": "npx",
            "args": ["-y", "mcp-remote", "https://mcp.linear.app/sse", "--header", "Authorization: Bearer ${AUTH_TOKEN}"],
            "env": {
                "AUTH_TOKEN": "YOUR_AUTH_TOKEN"
            }
        }
    ]
}
When the task runs, Lumo launches each entry in the mcp_servers array, establishes the MCP session, and routes tool calls through the provided command. All credentials stay within your controlled environment.
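
If you are scripting the call, a minimal Python sketch might look like the following. The https://your-lumo-host/run URL is a placeholder for your own deployment, and the payload mirrors the example above:

# Send the run request with the requests library; the token is read
# from the local environment rather than hard-coded.
import os
import requests

payload = {
    "task": "What are my Linear issues?",
    "model": "gpt-4.1-mini",
    "base_url": "https://api.openai.com/v1/chat/completions",
    "max_steps": 3,
    "agent_type": "mcp",
    "mcp_servers": [
        {
            "command": "npx",
            "args": [
                "-y", "mcp-remote", "https://mcp.linear.app/sse",
                "--header", "Authorization: Bearer ${AUTH_TOKEN}",
            ],
            "env": {"AUTH_TOKEN": os.environ["AUTH_TOKEN"]},
        }
    ],
}

# Placeholder host; substitute your own Lumo endpoint.
response = requests.post("https://your-lumo-host/run", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["final_answer"])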

Response format

The API returns a JSON object containing the final answer, execution steps, and token usage:
{
  "final_answer": "Here are some of your recent Linear issues:\n\n1. [STA-95] Add contact us email proxy - Implement an email proxy service for the contact us form. Status: Backlog.\n2. [STA-97] Integrate with LiteLLM - Status: In Review, Priority: High.\n3. [STA-96] Add Responses API - Status: Backlog, Priority: Medium.\n4. [STA-75] Start with 5 dollar credit - Status: Done.\n5. [STA-93] Add Contact us email - Status: Backlog.\n\nIf you want, I can provide more details or more issues from your list. Would you like that?",
  "steps": [
    {
      "tool_calls": [
        {
          "id": "call_",
          "type": "function",
          "function": {
            "name": "list_issues",
            "arguments": "{\"assignee\":\"me\",\"limit\":20}"
          }
        }
      ],
      "llm_output": "To provide you with a list of your Linear issues, I will retrieve the issues assigned to you. Please hold on a moment while I get this information.",
      "token_usage": {
        "prompt_tokens": 0,
        "completion_tokens": 0,
        "total_tokens": 0
      }
    },
    {
      "tool_calls": [],
      "llm_output": "Here are some of your recent Linear issues:\n\n1. [STA-95] Add contact us email proxy - Implement an email proxy service for the contact us form. Status: Backlog.\n2. [STA-97] Integrate with LiteLLM - Status: In Review, Priority: High.\n3. [STA-96] Add Responses API - Status: Backlog, Priority: Medium.\n4. [STA-75] Start with 5 dollar credit - Status: Done.\n5. [STA-93] Add Contact us email - Status: Backlog.\n\nIf you want, I can provide more details or more issues from your list. Would you like that?",
      "token_usage": {
        "prompt_tokens": 0,
        "completion_tokens": 0,
        "total_tokens": 0
      }
    }
  ],
  "token_usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
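
To consume the result programmatically, here is a small sketch, assuming response is the HTTP response from the earlier request example:

# Inspect a run result: the final answer plus the execution trace.
result = response.json()

print(result["final_answer"])

# Each step records the model's output and any tool calls it routed
# through the MCP session.
for i, step in enumerate(result["steps"], start=1):
    calls = [c["function"]["name"] for c in step["tool_calls"]]
    print(f"step {i}: tools={calls or 'none'} tokens={step['token_usage']['total_tokens']}")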