MCP - Model Context Protocol
Interact with the Conductor API from AI tools like Claude Desktop using the open-source MCP server.
Access your QuickBooks Desktop data from an LLM with MCP
The Model Context Protocol (MCP) is an open specification that lets large-language-model (LLM) clients discover and call tools (functions) that run on your computer. Conductor publishes an open-source MCP server that exposes every Conductor API endpoint as an MCP tool.
This means you can ask an AI assistant such as Claude Desktop, Cursor, or OpenAI Code Interpreter to read or write QuickBooks Desktop data on your behalf – no manual REST calls required. For example:
- “How many invoices did we issue last week?”
- “Create a new customer called ACME Inc. with email billing@acme.com.”
Quick start (Claude Desktop)
1. Ensure you have a Conductor secret key (create one in the dashboard if you haven’t already).
2. Start the MCP server from a terminal:
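A typical invocation is sketched below. The package name `conductor-node-mcp` matches the GitHub repository referenced under Further reading; the environment-variable name `CONDUCTOR_SECRET_KEY` is an assumption, so confirm both against the README for your version:

```shell
# Launch the Conductor MCP server via npx (requires Node.js).
# CONDUCTOR_SECRET_KEY is an assumed variable name; check the README.
CONDUCTOR_SECRET_KEY=sk_... npx -y conductor-node-mcp
```

The server stays in the foreground and communicates with the client over stdio, so leave the terminal open while you use it.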
3. In Claude Desktop → Settings → Tools → Add local tool, paste the same command (plus the environment variable) or point Claude to the JSON shown in the README. Claude will detect the running MCP server and automatically load the Conductor tools.
That’s it! The assistant will automatically chain the MCP tools (list_api_endpoints → get_api_endpoint_schema → invoke_api_endpoint) to fulfil your requests and show you the JSON response.
Using other MCP-compatible clients
Not every tool lets you paste a raw shell command like Claude Desktop does. Some, such as Cursor or the official MCP Playground, expect a JSON manifest that describes how the local server should be started.
If that is the case, create (or extend) the client’s configuration file and add an entry similar to the one below:
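For example, a minimal entry might look like the following. The `mcpServers` key follows the configuration convention used by Claude Desktop and Cursor; the package and environment-variable names are assumptions to verify against the README:

```json
{
  "mcpServers": {
    "conductor": {
      "command": "npx",
      "args": ["-y", "conductor-node-mcp"],
      "env": {
        "CONDUCTOR_SECRET_KEY": "sk_..."
      }
    }
  }
}
```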
Save the file wherever your client looks for MCP manifests (see its docs) and restart the AI tool. It should automatically spin up the Conductor server when needed and expose the endpoints as tools.
Tip: you can define multiple entries (sandbox, production, etc.), each pointing at a different Conductor environment or with different sets of flags.
Choosing the tool style
You have two options when starting the server:
- Explicit tools – one MCP tool per Conductor endpoint. Useful when you know exactly which operations you need and want the most accurate parameter suggestions.
- Dynamic tools (--tools=dynamic) – three generic tools that let the LLM search, inspect, and invoke any endpoint on demand. Helpful when you want the entire API surface in a compact form.
You can even combine both approaches or filter the explicit tools with flags like --resource or --operation – see the README for details.
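As a sketch, the two styles might be selected like this. The --tools, --resource, and --operation flag names come from the text above; the example filter values are illustrative, so check the README for the values your version accepts:

```shell
# Dynamic tools: expose the whole API via three generic tools.
npx -y conductor-node-mcp --tools=dynamic

# Explicit tools, filtered to a subset of resources and operations
# (filter values shown here are hypothetical examples).
npx -y conductor-node-mcp --resource=invoices --operation=read
```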
Capabilities & clients
Different LLM clients have different schema limitations. Pass --client=<name> (e.g. claude, cursor, openai-agents) so the MCP server tailors its output accordingly. You can also fine-tune individual capabilities with --capability flags.
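For instance, a sketch of tailoring output for a specific client (the available capability names vary by version and are listed in the README):

```shell
# Adapt generated tool schemas to Cursor's limitations.
npx -y conductor-node-mcp --client=cursor
```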
Further reading
- Full README & CLI options: conductor-node-mcp on GitHub
- Model Context Protocol specification & client list: https://modelcontextprotocol.io