Cloning and porting

The open-fdd repo should be portable to another lab, another workstation, or another Open-FDD deployment with minimal conceptual changes. Optional bench assets are documented in openclaw/README.md.

Core portability idea

Same tools, any building: only the knowledge graph changes.

That means the repo carries the reusable process, while the live Open-FDD model carries the site-specific truth.

What should transfer cleanly

  • the verification logic and checklists in this repo
  • the generic docs and the overall deployment process
  • the habit of treating the live Open-FDD model, not the docs, as the source of site-specific truth

What usually changes per environment

  • frontend URL
  • API URL
  • API auth setup
  • site IDs / names
  • BACnet gateway hostnames or IPs
  • active Open-FDD rules directory (under stack/, mounted into containers)
  • Docker/container naming
  • LAN / OT network topology
  • the actual HVAC system, naming conventions, and semantic model shape
  • SPARQL queries or filters needed for that environment
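One way to keep those per-environment values out of the verification code is to load them from the environment at startup. This is a minimal sketch; the variable names and defaults below are hypothetical and should be matched to your deployment's conventions:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class SiteConfig:
    """The values that change per environment; everything else stays generic."""
    frontend_url: str
    api_url: str
    api_token: str
    bacnet_gateway: str
    site_id: str

def load_config(env=os.environ) -> SiteConfig:
    # Hypothetical variable names -- align with your deployment.
    return SiteConfig(
        frontend_url=env.get("OPENFDD_FRONTEND_URL", "http://localhost:3000"),
        api_url=env.get("OPENFDD_API_URL", "http://localhost:8000"),
        api_token=env.get("OPENFDD_API_TOKEN", ""),
        bacnet_gateway=env.get("OPENFDD_BACNET_GATEWAY", "192.168.1.10"),
        site_id=env.get("OPENFDD_SITE_ID", "site-1"),
    )
```

Passing a plain dict as `env` makes the loader easy to test without touching the real process environment.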

What to do when deploying to another site

  1. Resolve the target frontend/backend/BACnet endpoints from the real launch context.
  2. Confirm auth works from the shell or runtime that will actually run the checks.
  3. Query the Open-FDD model first:
    • sites
    • equipment
    • BACnet devices
    • representative outdoor / plant / air / zone points
  4. Let the model decide what should be checked at that site.
  5. Keep repo docs generic; put site-specific truth into the Open-FDD model instead of hard-coding it into Markdown.
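The model-first queries in step 3 can start from generic, class-based SPARQL rather than name matching, so they survive different modeling shapes. The Brick-style classes below are an assumption; adjust them to whatever ontology the target graph actually uses:

```python
# Assumes a Brick-style model; swap in the target graph's classes.
BRICK = "https://brickschema.org/schema/Brick#"
RDFS_SUB = "<http://www.w3.org/2000/01/rdf-schema#subClassOf>"

def class_query(brick_class: str, var: str = "x") -> str:
    """SPARQL that matches instances of a class or any subclass,
    avoiding hard-coded site names or point-naming conventions."""
    return (
        f"SELECT ?{var} WHERE {{ "
        f"?{var} a/{RDFS_SUB}* <{BRICK}{brick_class}> }}"
    )

# Representative discovery queries for a fresh site. How BACnet devices
# are referenced varies per model, so discover that shape from the graph.
MODEL_QUERIES = {
    "sites": class_query("Site"),
    "equipment": class_query("Equipment"),
    "points": class_query("Point"),
}
```

Running these before any point-level checks tells you what the model says should exist at the site.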

Use this order on a fresh site:

  • verify backend auth and reachability
  • run SPARQL/model sanity checks
  • discover representative operator-relevant points from the model
  • run the daytime smoke suite first (openclaw/bench/e2e/README.md)
  • fix auth/model/BACnet issues found there before trusting the overnight 12-hour run
  • only then move into recurring integrity sweeps and overnight review
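One way to enforce that order is a small runner that stops at the first failing stage, so a broken auth step cannot masquerade as a model or BACnet problem. The stage names come from the list above; the check bodies are site-specific stubs here:

```python
def run_stages(stages):
    """Run (name, check) pairs in order; each check returns True/False.
    Stop at the first failure so later stages don't produce noise."""
    results = []
    for name, check in stages:
        ok = bool(check())
        results.append((name, ok))
        if not ok:
            break
    return results

# Example wiring -- replace the lambdas with real probes for your site:
# run_stages([
#     ("backend auth", lambda: True),
#     ("model sanity", lambda: True),
#     ("point discovery", lambda: True),
# ])
```

The returned list doubles as a record of how far the fresh site got before something broke.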

Same-bench OpenClaw clone checklist

If OpenClaw is cloned onto another machine for the same test bench, the new clone should read these first:

  1. Root README.md
  2. openclaw/README.md
  3. OpenClaw context bootstrap
  4. Open-FDD integrity sweep
  5. BACnet-to-fault verification
  6. openclaw/bench/fake_bacnet_devices/README.md
  7. Monitor the fake fault schedule

And it should know these durable facts immediately:

  • the fake devices intentionally inject faults on a UTC schedule
  • the 180°F spike is expected only during the shared out-of-bounds window
  • the correct way to judge that spike is to compare live BACnet RPC reads against openclaw/bench/fake_bacnet_devices/fault_schedule.py
  • the integrity sweep should classify graph drift, auth drift, BACnet drift, and product behavior separately
  • durable reasoning belongs in this repo, not only in local OpenClaw chat memory
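Judging the 180°F spike means comparing each reading's timestamp against the schedule, not eyeballing the value. This is a sketch of that window test, assuming the schedule exposes a UTC start and end time; the real window is defined in openclaw/bench/fake_bacnet_devices/fault_schedule.py:

```python
from datetime import datetime, time, timezone

def spike_expected(read_at: datetime, start: time, end: time) -> bool:
    """True if a BACnet reading taken at `read_at` (UTC) falls inside the
    scheduled out-of-bounds window; handles windows that wrap midnight."""
    t = read_at.astimezone(timezone.utc).time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end
```

A spike outside the expected window is real drift; a spike inside it is the bench doing its job.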

Portability goal

A clone of this repo should make it easy for another engineer to answer:

  • Is Open-FDD healthy?
  • Is BACnet scraping working?
  • Is the building model usable?
  • Are faults being computed here?
  • Are regressions visible here before they affect a real deployment?

Engineering principle

Keep environment-specific values configurable and keep the verification logic reusable.

In practice, deployment to another site usually looks like this:

  • Open-FDD runs on some other server (often a Linux box on the OT LAN)
  • tooling is cloned onto another machine
  • the tooling is pointed at the target Open-FDD URL, auth, BACnet gateway, and rule/model context for that environment

The tooling should therefore be robust to:

  • different LAN IP schemes
  • different Open-FDD hosts
  • different HVAC systems and point naming
  • different site/equipment modeling shapes
  • different SPARQL needs per deployment
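The per-deployment SPARQL point can be handled with an override layer: the tooling ships one set of default queries, and each site supplies only the queries whose shape differs. The query names and classes here are illustrative:

```python
# Default, generic queries shipped with the tooling (illustrative).
DEFAULT_QUERIES = {
    "sites": "SELECT ?s WHERE { ?s a <https://brickschema.org/schema/Brick#Site> }",
    "points": "SELECT ?p WHERE { ?p a <https://brickschema.org/schema/Brick#Point> }",
}

def queries_for_site(overrides=None):
    """Merge site-specific SPARQL over the defaults; the verification
    logic downstream never needs to know which site it runs against."""
    merged = dict(DEFAULT_QUERIES)
    merged.update(overrides or {})
    return merged
```

This keeps the reusable sweep logic in the repo while the site-specific query shapes live in that site's configuration.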

The goal is portability with context, not a one-off lab setup.