Working with AI and APIs¶
This guide explains how to make your LLM aware of product documentation using llms.txt and the MCP server. At the end, you'll find an example demonstrating how to use agentic AI with product APIs to build your integrations.
Info
If you only want to ask questions (without generating code), you can use the Ask AI feature available in the bottom-right corner of the Developer Portal.
What is llms.txt?¶
llms.txt is an open standard for exposing machine-readable documentation to LLMs (Large Language Models). It provides a simple endpoint (typically /llms.txt) that lists documentation URLs, API specifications, and other resources in a format that LLMs and agentic tools can ingest to improve their contextual awareness and capabilities.
Each product's documentation is available at its respective llms.txt endpoint. For example:
https://developer.siemens.com/<product-name>/llms.txt
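An llms.txt file is plain Markdown: a title, a short summary, and a list of links to documentation pages and API specifications. To inspect what an LLM would ingest, you can fetch the index directly. A minimal sketch using only the Python standard library (replace the <product-name> placeholder as above):

```python
import urllib.request

# Fetch the machine-readable documentation index.
# Replace <product-name> with the actual product identifier first.
url = "https://developer.siemens.com/<product-name>/llms.txt"
with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8"))
```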
What is the MCP Server?¶
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP as a "USB-C port" for AI applications: just as USB-C offers a standardized way to connect devices to various peripherals, MCP provides a standardized way to connect AI models to different data sources and tools.
In our case, we use langchain-ai/mcpdoc as the MCP server; it fetches the docs and makes your LLM/agent context-aware.
Run the MCP server locally:
uvx --from mcpdoc mcpdoc \
--urls "ProductName:https://developer.siemens.com/<product-name>/llms.txt" \
--transport sse \
--port 8082 \
--host localhost
Replace <product-name> with the actual product identifier.
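To verify the server came up, you can probe the SSE endpoint before wiring up an agent. A minimal sketch, assuming the standard MCP SSE path /sse and the host/port flags used above (adjust if yours differ):

```python
import urllib.request

# Probe the local mcpdoc server; urlopen returns once the response
# headers arrive, so the open event stream does not block this check.
req = urllib.request.Request(
    "http://localhost:8082/sse",
    headers={"Accept": "text/event-stream"},
)
with urllib.request.urlopen(req, timeout=5) as response:
    print("MCP server reachable, HTTP status:", response.status)
```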
Run the MCP Inspector locally to test the MCP server:
npx @modelcontextprotocol/inspector
Tutorial: Configuring Local LLM Awareness of Product APIs¶
The following example demonstrates the setup process for VS Code.
1. Prerequisites¶
- Install uvx (official installation instructions): curl -LsSf https://astral.sh/uv/install.sh | sh
- Access to your preferred LLM (local or cloud)
- An IDE (e.g., VS Code, PyCharm)
2. Configure Your IDE to Run the MCP Server Locally¶
Add the following configuration to your MCP server setup (the exact steps may vary depending on your IDE):
{
  "mcp": {
    "servers": {
      "product-docs-mcp": {
        "command": "uvx",
        "args": [
          "--from",
          "mcpdoc",
          "mcpdoc",
          "--urls",
          "ProductName:https://developer.siemens.com/<product-name>/llms.txt",
          "--transport",
          "stdio"
        ]
      }
    }
  }
}
Replace <product-name> with the actual product identifier. This configuration instructs MCP to ingest documentation from the specified llms.txt endpoint.
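Before restarting your IDE, you can dry-run the same command it will launch to confirm that uvx can resolve mcpdoc. A quick sanity check, assuming mcpdoc exposes the usual --help flag (it exits immediately instead of starting the stdio transport):

```python
import subprocess

# Launch the command from the config above with --help to verify
# that uvx is on PATH and the mcpdoc package resolves.
subprocess.run(["uvx", "--from", "mcpdoc", "mcpdoc", "--help"], check=True)
```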
3. Add a Prompt¶
Prompt configuration may differ depending on your IDE (see the VS Code example):
For ANY question about <product-name>, use the product-docs-mcp server to help answer:
+ Call the list_doc_sources tool to get the available llms.txt file
+ Call the fetch_docs tool to read it
+ Use this information to answer the question
Replace <product-name> with the actual product name you're working with. The sketch below shows the same two tool calls made programmatically.
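For reference, here is what the list_doc_sources and fetch_docs calls look like outside the IDE. This is a minimal sketch, assuming the official MCP Python SDK (the mcp package) and the SSE server from earlier running on localhost:8082:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Connect to the locally running mcpdoc server over SSE.
    async with sse_client("http://localhost:8082/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Step 1: discover the available llms.txt sources.
            sources = await session.call_tool("list_doc_sources", {})
            print(sources.content)

            # Step 2: fetch a documentation page listed in the sources.
            docs = await session.call_tool(
                "fetch_docs",
                {"url": "https://developer.siemens.com/<product-name>/llms.txt"},
            )
            print(docs.content)


asyncio.run(main())
```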
4. Example Queries¶
Understanding the APIs
List the available APIs and describe what they do.
Understanding API workflows
Describe the sequence of API calls required to [achieve specific goal]. Please provide example requests for each API call.
Generating a Script
Can you analyze the documentation and generate a Python script that demonstrates how to [achieve specific goal]? The script should include:
1. Authentication with the API
2. [Step-by-step workflow relevant to your use case]
3. Error handling and logging
4. Output formatting
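To set expectations, a query like the one above might yield a script along these lines. This is a hypothetical sketch: the base URL, token endpoint, and resource path are illustrative placeholders, not a real product API; adapt it to the APIs your documentation describes:

```python
import logging
import sys

import requests

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

# Hypothetical placeholders; take the real values from the product documentation.
BASE_URL = "https://api.example.com/<product-name>/v1"
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"


def get_token() -> str:
    """1. Authentication: exchange client credentials for a bearer token."""
    response = requests.post(
        f"{BASE_URL}/oauth/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["access_token"]


def list_resources(token: str) -> list[dict]:
    """2. Workflow step: call a documented endpoint with the token."""
    response = requests.get(
        f"{BASE_URL}/resources",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["items"]


def main() -> None:
    try:
        token = get_token()
        resources = list_resources(token)
    except requests.RequestException as exc:
        # 3. Error handling and logging.
        log.error("API call failed: %s", exc)
        sys.exit(1)

    # 4. Output formatting.
    for resource in resources:
        print(f"{resource.get('id', '?')}: {resource.get('name', '')}")


if __name__ == "__main__":
    main()
```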
Happy coding! <3