The Model Context Protocol (MCP) is an open specification that lets large-language-model (LLM) clients discover and call tools (functions) that run on your computer. Conductor publishes an open-source MCP server that exposes every Conductor API endpoint as an MCP tool.
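Under the hood, MCP tool calls travel as JSON-RPC 2.0 messages (the `tools/call` method, per the MCP specification). The sketch below builds one such request in Python; the tool name and arguments are illustrative, not taken from the Conductor server:

```python
import json

# Roughly what an MCP tool invocation looks like on the wire: a JSON-RPC 2.0
# request using the `tools/call` method defined by the MCP spec. The tool
# name and arguments here are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "invoke_api_endpoint",
        "arguments": {"endpoint": "list_invoices"},
    },
}

print(json.dumps(request, indent=2))
```

The client sends this over stdio (or another transport) and receives a matching JSON-RPC response containing the tool's result.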

This means you can ask an AI assistant such as Claude Desktop, Cursor, or OpenAI Code Interpreter to read or write QuickBooks Desktop data on your behalf. For example:

  • “How many invoices did we issue last week?”
  • “Create a new customer called ACME Inc. with email billing@acme.com.”

Quickstart with Claude Desktop

  1. Ensure you have a Conductor secret key (create one in the dashboard if you haven’t already).

  2. Run the MCP server from a terminal to confirm it starts (Claude Desktop will launch it for you once configured in step 3):

    # Your secret key
    export CONDUCTOR_SECRET_KEY="sk_..."
    
    # Launch the server optimised for Claude with dynamic tools
    npx -y conductor-node-mcp --client=claude --tools=dynamic
    
  3. In Claude Desktop → Settings → Developer Tools → Edit Config, paste the JSON below into the claude_desktop_config.json file it opens. Replace sk_conductor_... with your Conductor secret key.

    claude_desktop_config.json
    {
      "mcpServers": {
        "conductor-api": {
          "command": "npx",
          "args": [
            "-y",
            "conductor-node-mcp",
            "--client=claude",
            "--tools=dynamic"
          ],
          "env": {
            "CONDUCTOR_SECRET_KEY": "sk_conductor_..."
          }
        }
      }
    }
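If you prefer to script this edit rather than paste JSON by hand, the helper below builds the same "conductor-api" entry and merges it into an existing config without clobbering other servers. The helper itself is not part of Conductor, just a minimal sketch (the config file's location varies by OS):

```python
import json

# Illustrative helper (not part of Conductor): builds the "conductor-api"
# server entry shown above and merges it into a claude_desktop_config.json
# structure, preserving any other configured MCP servers.
def conductor_server_entry(secret_key: str) -> dict:
    return {
        "command": "npx",
        "args": ["-y", "conductor-node-mcp", "--client=claude", "--tools=dynamic"],
        "env": {"CONDUCTOR_SECRET_KEY": secret_key},
    }

def add_conductor_server(config: dict, secret_key: str) -> dict:
    """Return a copy of `config` with the conductor-api server added."""
    merged = dict(config)
    servers = dict(merged.get("mcpServers", {}))
    servers["conductor-api"] = conductor_server_entry(secret_key)
    merged["mcpServers"] = servers
    return merged

if __name__ == "__main__":
    # Merge into an existing config that already has another server.
    existing = {"mcpServers": {"other-server": {"command": "foo"}}}
    print(json.dumps(add_conductor_server(existing, "sk_conductor_..."), indent=2))
```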
    

That’s it! The assistant will automatically chain the MCP tools (list_api_endpoints → get_api_endpoint_schema → invoke_api_endpoint) to fulfil your requests and show you the JSON response.
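That search → inspect → invoke chain can be pictured as a small loop. The Python sketch below simulates it against an invented two-endpoint registry: the three tool names are the real ones, but the endpoints, schemas, and stubbed results are purely illustrative.

```python
# Toy simulation of the dynamic-tools loop. The registry and schemas are
# invented for illustration; the real server derives them from the
# Conductor API.
REGISTRY = {
    "list_invoices": {
        "description": "List QuickBooks Desktop invoices",
        "params": {"transactionDateFrom": "string", "transactionDateTo": "string"},
    },
    "create_customer": {
        "description": "Create a QuickBooks Desktop customer",
        "params": {"name": "string", "email": "string"},
    },
}

def list_api_endpoints(query: str) -> list:
    """Step 1: search for endpoints whose description matches the query."""
    return [name for name, meta in REGISTRY.items()
            if query.lower() in meta["description"].lower()]

def get_api_endpoint_schema(name: str) -> dict:
    """Step 2: inspect the parameter schema for one endpoint."""
    return REGISTRY[name]["params"]

def invoke_api_endpoint(name: str, args: dict) -> dict:
    """Step 3: invoke the endpoint (stubbed here: echo the call back)."""
    unknown = set(args) - set(REGISTRY[name]["params"])
    if unknown:
        raise ValueError(f"unknown parameters: {unknown}")
    return {"endpoint": name, "arguments": args}

# The chain an assistant would follow for "create a customer":
(match,) = list_api_endpoints("customer")
schema = get_api_endpoint_schema(match)
result = invoke_api_endpoint(match, {"name": "ACME Inc.", "email": "billing@acme.com"})
```

In practice the LLM drives each step itself, using the schema from step 2 to fill in valid arguments for step 3.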

Choosing the tool style

You have two options when starting the server:

  1. Explicit tools – one MCP tool per Conductor endpoint. Useful when you know exactly which operations you need and want the most accurate parameter suggestions.
  2. Dynamic tools (--tools=dynamic) – three generic tools that let the LLM search, inspect, and invoke any endpoint on demand. Helpful when you want the entire API surface in a compact form.

You can even combine both approaches or filter the explicit tools with flags like --resource or --operation – see the README for details.
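To make the filtering concrete, here is a sketch of how flags like --resource and --operation could narrow the explicit tool list. The flag semantics and endpoint metadata below are assumptions based on the flags' names; the README is authoritative:

```python
# Hypothetical endpoint metadata; the real list comes from the Conductor API.
ENDPOINTS = [
    {"name": "list_invoices",   "resource": "invoices",  "operation": "read"},
    {"name": "create_invoice",  "resource": "invoices",  "operation": "write"},
    {"name": "list_customers",  "resource": "customers", "operation": "read"},
    {"name": "create_customer", "resource": "customers", "operation": "write"},
]

def filter_endpoints(endpoints, resource=None, operation=None):
    """Keep endpoints matching every provided filter (None = no filtering)."""
    return [e for e in endpoints
            if (resource is None or e["resource"] == resource)
            and (operation is None or e["operation"] == operation)]

# e.g. `--resource=invoices --operation=read` would keep only list_invoices.
matches = filter_endpoints(ENDPOINTS, resource="invoices", operation="read")
```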

Capabilities & clients

Different LLM clients have different schema limitations. Pass --client=<name> (e.g. claude, cursor, openai-agents) so the MCP server tailors its output accordingly. You can also fine-tune individual capabilities with --capability flags.
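The mechanism can be sketched as follows: start from a per-client default capability set, then apply individual --capability overrides on top. The capability names and per-client defaults below are invented for illustration (the real lists live in the conductor-node-mcp README); only the resolution logic is the point:

```python
# Invented per-client capability profiles; real names/defaults differ.
CLIENT_DEFAULTS = {
    "default": {"unions", "refs", "long-names"},
    "claude":  {"unions", "refs", "long-names"},
    "cursor":  {"unions", "long-names"},  # assume a stricter schema subset
}

def resolve_capabilities(client, overrides=()):
    """Start from the client's default set, then apply --capability
    overrides of the (assumed) form "+name" to enable or "-name" to disable."""
    caps = set(CLIENT_DEFAULTS.get(client, CLIENT_DEFAULTS["default"]))
    for flag in overrides:
        if flag.startswith("-"):
            caps.discard(flag[1:])
        else:
            caps.add(flag.lstrip("+"))
    return caps
```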

Further reading