Access your QuickBooks Desktop data from an LLM with MCP

The Model Context Protocol (MCP) is an open specification that lets large-language-model (LLM) clients discover and call tools (functions) that run on your computer. Conductor publishes an open-source MCP server that exposes every Conductor API endpoint as an MCP tool.

This means you can ask an AI assistant such as Claude Desktop, Cursor, or the OpenAI Agents SDK to read or write QuickBooks Desktop data on your behalf – no manual REST calls required. For example:

  • “How many invoices did we issue last week?”
  • “Create a new customer called ACME Inc. with email billing@acme.com.”

Quick start (Claude Desktop)

  1. Ensure you have a Conductor secret key (create one in the dashboard if you haven’t already).

  2. Start the MCP server from a terminal:

    export CONDUCTOR_SECRET_KEY="sk_..."      # your key
    
    # Launch the server optimised for Claude with dynamic tools enabled
    npx -y conductor-node-mcp --client=claude --tools=dynamic

  3. In Claude Desktop → Settings → Tools → Add local tool, paste the same command (plus the environment variable) or point Claude to the JSON configuration shown in the README. Claude will detect the running MCP server and automatically load the Conductor tools.

That’s it! The assistant will automatically chain the MCP tools (list_api_endpoints, get_api_endpoint_schema, and invoke_api_endpoint) to fulfil your requests and show you the JSON response.
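Under the hood, each of those tool calls is an MCP tools/call JSON-RPC request. As a rough illustration only – the "endpoint" and "body" argument names below are assumptions modelled on the examples above, not Conductor's documented schema – a dynamic invocation might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "invoke_api_endpoint",
    "arguments": {
      "endpoint": "create_customer",
      "body": { "name": "ACME Inc.", "email": "billing@acme.com" }
    }
  }
}
```

You never write these messages yourself – the LLM client generates them – but seeing the shape helps when debugging a client's MCP logs.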

Using other MCP-compatible clients

Not every tool lets you paste a raw shell command like Claude Desktop does. Some, such as Cursor or the official MCP Playground, expect a JSON manifest that describes how the local server should be started.

If that is the case, create (or extend) the client’s configuration file and add an entry similar to the one below:

example-mcp.json
{
  "mcpServers": {
    "conductor-api": {
      // The executable to launch
      "command": "npx",

      // Extra command-line arguments passed to the executable
      "args": [
        "-y",
        "conductor-node-mcp",
        "--client=claude", // adapt to the client you use
        "--tools=dynamic" // or omit/replace as you like
      ],

      // Environment variables forwarded to the process
      "env": {
        "CONDUCTOR_SECRET_KEY": "sk_live_…"
      }
    }
  }
}

Save the file wherever your client looks for MCP manifests (see its docs) and restart the AI tool. It should automatically spin up the Conductor server when needed and expose the endpoints as tools. If your client requires strict JSON, strip the // comments from the example first.

Tip: you can define multiple entries (sandbox, production, etc.) each pointing at a different Conductor environment or with different sets of flags.
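As a sketch of that tip, a manifest with two entries might look like the following (the entry names and the sk_test_ key prefix are illustrative assumptions – use whatever keys your Conductor environments actually issue):

```json
{
  "mcpServers": {
    "conductor-sandbox": {
      "command": "npx",
      "args": ["-y", "conductor-node-mcp", "--client=claude", "--tools=dynamic"],
      "env": { "CONDUCTOR_SECRET_KEY": "sk_test_…" }
    },
    "conductor-production": {
      "command": "npx",
      "args": ["-y", "conductor-node-mcp", "--client=claude"],
      "env": { "CONDUCTOR_SECRET_KEY": "sk_live_…" }
    }
  }
}
```

Your client will then show both servers as separate tool sources, so you can point the assistant at sandbox data while testing.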

Choosing the tool style

You have two options when starting the server:

  1. Explicit tools – one MCP tool per Conductor endpoint. Useful when you know exactly which operations you need and want the most accurate parameter suggestions.
  2. Dynamic tools (--tools=dynamic) – three generic tools that let the LLM search, inspect, and invoke any endpoint on demand. Helpful when you want the entire API surface in a compact form.

You can even combine both approaches or filter the explicit tools with flags like --resource or --operation – see the README for details.
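As a sketch, the two styles might be started like this. The flag values passed to --resource and --operation below are illustrative guesses – the README documents the values the server actually accepts:

```shell
# Dynamic tools: three generic search/inspect/invoke tools
npx -y conductor-node-mcp --client=claude --tools=dynamic

# Explicit tools, filtered to a subset of the API
# (flag values are illustrative – see the README for the real options)
npx -y conductor-node-mcp --client=claude --resource=invoices --operation=read
```

Filtering matters in practice because some LLM clients limit how many tools they will load at once.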

Capabilities & clients

Different LLM clients have different schema limitations. Pass --client=<name> (e.g. claude, cursor, openai-agents) so the MCP server tailors its output accordingly. You can also fine-tune individual capabilities with --capability flags.

Further reading