Data modeling

The Open-FDD data model is built around sites, equipment, and points: the same entities you create, read, update, and delete (CRUD) via the API. The REST API (Swagger UI at /docs) and the data-model endpoints (/data-model/export, /data-model/import, /data-model/ttl) are the primary way to manage this data. The database is the single source of truth; the Brick TTL file is regenerated on every change so FDD rules and Grafana stay in sync.
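As a rough sketch of how the export/import round trip might look from Python, assuming a local deployment at http://localhost:8000 (the host and the payload field names are assumptions, not the canonical schema):

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed local deployment; adjust to your host

def export_request() -> urllib.request.Request:
    """GET the full data model (sites, equipment, points) as JSON."""
    return urllib.request.Request(f"{BASE}/data-model/export")

def import_request(payload: dict) -> urllib.request.Request:
    """PUT a bulk data model; the backend regenerates the Brick TTL on import."""
    return urllib.request.Request(
        f"{BASE}/data-model/import",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# Sending is a one-liner once a request is built:
# with urllib.request.urlopen(import_request(model)) as resp:
#     print(resp.status)
```

Building the `Request` objects separately from sending them keeps the sketch testable without a running server.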

Concepts (entities):

  • Sites — Buildings or facilities. All equipment and points are scoped to a site.
  • Equipment — Physical devices (AHUs, VAVs, heat pumps). Belong to a site; have points.
  • Points — Time-series references (sensors, setpoints). Link to equipment and site; store external_id, optional brick_type and rule_input for FDD.
  • External representations (Brick v1.3) — ref:hasExternalReference mappings from points to BACnet and time-series systems.
  • SPARQL cookbook — Copy-paste queries, run via POST /data-model/sparql, covering config, the data model, BACnet, FDD rule mapping, and time-series references; useful for validation and UIs.
  • AI-assisted data modeling — Export → LLM or human tagging → import (Brick types, rule_input, polling, equipment feeds). External agents (e.g. Open-Claw) can use GET /model-context/docs as platform documentation context and GET /mcp/manifest for HTTP discovery; the import schema is the same canonical one.
  • LLM workflow (export + rules + validate → import) — A one-shot upload: combine the canonical prompt, the export JSON, and optional rules (cookbook or YAML); validate with the JSON schema or Pydantic so the import parses on the backend; then PUT /data-model/import and run the FDD rules and tests.
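A cookbook-style query might look like the sketch below; the query itself is illustrative, and the `{"query": ...}` request-body shape and localhost URL are assumptions, not the documented contract:

```python
import json
import urllib.request

# Illustrative cookbook query: list every point and its Brick class.
POINTS_BY_TYPE = """
PREFIX brick: <https://brickschema.org/schema/Brick#>
PREFIX rdfs:  <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?point ?type WHERE {
  ?point a ?type .
  ?type rdfs:subClassOf* brick:Point .
}
"""

def sparql_request(query: str, base: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST /data-model/sparql request; the JSON body key is assumed."""
    return urllib.request.Request(
        f"{base}/data-model/sparql",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```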
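The pre-import validation step can be sketched with a stdlib-only structural check; the field names (sites/equipment/points, name, external_id) follow the concepts above but are illustrative, and real validation should use the canonical schema or Pydantic models:

```python
def validate_import(doc: dict) -> list[str]:
    """Lightweight structural check before PUT /data-model/import.
    Field names are illustrative; validate against the real schema in CI."""
    errors = []
    if not isinstance(doc.get("sites"), list):
        errors.append("top level: 'sites' must be a list")
        return errors
    for i, site in enumerate(doc["sites"]):
        if not site.get("name"):
            errors.append(f"sites[{i}]: missing 'name'")
        for j, eq in enumerate(site.get("equipment", [])):
            if not eq.get("name"):
                errors.append(f"sites[{i}].equipment[{j}]: missing 'name'")
            for k, pt in enumerate(eq.get("points", [])):
                if not pt.get("external_id"):
                    errors.append(
                        f"sites[{i}].equipment[{j}].points[{k}]: missing 'external_id'"
                    )
    return errors
```

Running this before upload turns a backend parse failure into an actionable local error list.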

Framework: CRUD is provided by the FastAPI REST API. The data-model API adds bulk export/import and Brick TTL generation; see Overview for the flow (DB → TTL → FDD column_map) and Appendix: API Reference for endpoints.
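The column_map end of the DB → TTL → FDD flow can be illustrated with a small helper that maps each point's rule_input to its time-series column; the dict field names are assumed from the point concept above, not taken from the actual FDD code:

```python
def build_column_map(points: list[dict]) -> dict[str, str]:
    """Collect {rule_input: external_id} for points tagged for FDD.
    Points without a rule_input feed no rule and are skipped."""
    return {
        p["rule_input"]: p["external_id"]
        for p in points
        if p.get("rule_input")
    }
```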


Table of contents