Building X

Explore our APIs. Develop innovative applications and integrations or extend the functionality of existing applications.

Working with AI and APIs

This guide explains how to configure your LLM to recognize Building X documentation using llms.txt and the MCP server. At the end, you'll find an example demonstrating how to use Agentic AI with Building X Openness APIs to build your integrations.

Info

If you only want to ask questions (without generating code), you can use the Ask AI feature available in the bottom-right corner of the Developer Portal.

What is llms.txt?

llms.txt is an open standard for exposing machine-readable documentation to LLMs (Large Language Models). It provides a simple endpoint (typically /llms.txt) that lists documentation URLs, API specifications, and other resources in a format that LLMs and agentic tools can ingest to improve their contextual awareness and capabilities.

The Building X Openness APIs documentation is available at:

https://developer.siemens.com/building-x-openness/llms.txt
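As a concrete illustration, the links listed in an llms.txt file can be extracted with a few lines of Python. The sample content below is a made-up fragment following the llms.txt conventions (markdown link lists with optional descriptions), not the actual contents of the Building X file:

```python
import re

# Made-up fragment in the llms.txt format; the real Building X file
# lives at https://developer.siemens.com/building-x-openness/llms.txt
SAMPLE = """\
# Building X Openness

## APIs

- [Device API](https://example.com/device-api.md): Manage devices
- [History API](https://example.com/history-api.md): Read historical values
"""

def extract_links(text):
    """Return (title, url, description) tuples for markdown link list items."""
    pattern = r"-\s*\[([^\]]+)\]\(([^)]+)\)(?::\s*(.*))?"
    return [(m.group(1), m.group(2), m.group(3) or "")
            for m in re.finditer(pattern, text)]

for title, url, desc in extract_links(SAMPLE):
    print(f"{title} -> {url}  ({desc})")
```

This is essentially what an MCP documentation server does for you: it fetches the file and lets the LLM follow the links it contains.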

What is the MCP Server?

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to LLMs. Think of MCP as a "USB-C port" for AI applications: just as USB-C offers a standardized way to connect devices to various peripherals, MCP provides a standardized way to connect AI models to different data sources and tools.

In this guide we use langchain-ai/mcpdoc as the MCP server; it fetches the documentation listed in llms.txt and makes your LLM/agent context aware.

Running the MCP locally:

uvx --from mcpdoc mcpdoc \
    --urls "BuildingX:https://developer.siemens.com/building-x-openness/llms.txt" \
    --transport sse \
    --port 8082 \
    --host localhost

Running the MCP Inspector locally, where you can test the MCP server:

npx @modelcontextprotocol/inspector

Tutorial: Configuring Local LLM Awareness of Building X APIs

The following example demonstrates the setup process for VS Code.

1. Prerequisites

  • Install uv, which provides the uvx command (official installation instructions):

    curl -LsSf https://astral.sh/uv/install.sh | sh
    
  • Access to your preferred LLM (local or cloud)

  • An IDE (e.g., VS Code, PyCharm)

2. Configure Your IDE to Run the MCP Server Locally

Add the following configuration to your MCP server setup (the exact steps may vary depending on your IDE):

{
    "mcp": {
        "servers": {
            "buildingx-docs": {
                "command": "uvx",
                "args": [
                    "--from",
                    "mcpdoc",
                    "mcpdoc",
                    "--urls",
                    "BuildingX:http://developer.siemens.com/building-x-openness/llms.txt",
                    "--transport",
                    "stdio"
                ]
            }
        }
    }
}

This configuration instructs MCP to ingest documentation from the specified llms.txt endpoint.
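Under the hood, MCP over stdio exchanges JSON-RPC 2.0 messages. The sketch below builds the kind of requests an IDE client sends to the server. The tool names list_doc_sources and fetch_docs come from mcpdoc; the "url" argument name is an assumption based on mcpdoc's documentation, not verified against a running server:

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request, the wire format MCP uses over stdio."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Ask the server which tools it exposes (mcpdoc exposes
# list_doc_sources and fetch_docs).
list_tools = jsonrpc_request(1, "tools/list", {})

# Call fetch_docs to pull the llms.txt file into the model's context.
fetch = jsonrpc_request(2, "tools/call", {
    "name": "fetch_docs",
    "arguments": {"url": "https://developer.siemens.com/building-x-openness/llms.txt"},
})

# Over stdio, each message is sent as a single line of JSON.
print(json.dumps(list_tools))
print(json.dumps(fetch))
```

Your IDE performs this exchange automatically; the sketch only shows what travels over the wire.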

3. Add a Prompt

Prompt configuration may differ depending on your IDE (see VS Code example):

For ANY question about Building X, use the buildingx-docs MCP server to help answer:
+ Call the list_doc_sources tool to get the available llms.txt file
+ Call the fetch_docs tool to read it
+ Use this information to answer the question

4. Example Queries

Understanding the APIs

List Building X APIs and describe what they do.

Understanding the process of retrieving device values

Describe the sequence of API calls required to retrieve values from a device. Please provide example requests for each API call.

Generating a Python Script

Can you analyze the Building X documentation and generate a Python console application to retrieve values from a device? The script should follow this flow:

1. Obtain an authentication token using provided credentials.
2. Check which partitions you have access to.
3. List available buildings.
4. Allow selection of a building.
5. List devices in the selected building.
6. Allow selection of a device.
7. List points for the selected device.
8. Allow selection of a point.
9. Display the last 10 historical values for that point.
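A skeleton of that flow might look like the sketch below. All endpoint paths, parameter names, and response shapes here are placeholders, not the real Building X API surface; the point is the select-then-fetch loop the generated script should follow:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.invalid/building-x"  # placeholder, not the real API host

def select_item(items, index):
    """Validate a 1-based user choice against a list and return the item."""
    if not 1 <= index <= len(items):
        raise ValueError(f"choose a number between 1 and {len(items)}")
    return items[index - 1]

def get_json(token, path, **params):
    """Placeholder GET helper; real endpoints and auth come from the docs."""
    url = f"{BASE_URL}{path}?{urllib.parse.urlencode(params)}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def main():
    token = "..."  # step 1: obtain a token (details in the Building X docs)
    partitions = get_json(token, "/partitions")                       # step 2
    buildings = get_json(token, "/buildings")                         # step 3
    building = select_item(buildings, int(input("Building #: ")))     # step 4
    devices = get_json(token, "/devices", buildingId=building["id"])  # step 5
    device = select_item(devices, int(input("Device #: ")))           # step 6
    points = get_json(token, "/points", deviceId=device["id"])        # step 7
    point = select_item(points, int(input("Point #: ")))              # step 8
    for value in get_json(token, "/values", pointId=point["id"], limit=10):
        print(value)                                                  # step 9

# main()  # uncomment to run against a real deployment
```

An LLM configured with the MCP server above can fill in the real hosts, paths, and payloads from the Building X documentation.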

Happy coding! <3