Cloning and porting
The open-fdd repo should be portable to another lab, another workstation, or another Open-FDD deployment with minimal conceptual changes. Optional bench assets live under `openclaw/` and are described in `openclaw/README.md`.
Core portability idea
Same tools, any building — only the knowledge graph changes.
That means the repo carries the reusable process, while the live Open-FDD model carries the site-specific truth.
What should transfer cleanly
- the test phases (`./scripts/bootstrap.sh --test` in `open-fdd-afdd-stack` and, optionally, `openclaw/bench/e2e/README.md`)
- the BACnet fake-device approach (`openclaw/bench/fake_bacnet_devices/README.md`)
- the overnight review discipline
- the SPARQL validation set (`openclaw/bench/sparql/README.md`)
- the operator framework (`config/ai/operator_framework.yaml`)
- the continuous context-backup loop
- the idea of proving telemetry-to-fault correctness rather than only checking page loads
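The last point is the heart of the approach: judge the product on whether a raw reading actually produced the fault it should have, not on whether a page rendered. A minimal sketch of that comparison, with all names (`Rule`, `check_fault_correctness`) being illustrative rather than actual open-fdd APIs:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    point: str          # point name the rule watches
    high_limit: float   # fault fires when the reading exceeds this

def expected_fault(rule: Rule, reading: float) -> bool:
    """What the rule *should* conclude from this raw telemetry reading."""
    return reading > rule.high_limit

def check_fault_correctness(rule: Rule, reading: float, product_reported: bool) -> str:
    """Classify the outcome instead of just passing/failing a page load."""
    expected = expected_fault(rule, reading)
    if expected == product_reported:
        return "ok"
    return "missed-fault" if expected else "false-positive"
```

The useful property is that a failure is immediately classified (`missed-fault` vs `false-positive`) rather than collapsed into a generic test failure.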
What usually changes per environment
- frontend URL
- API URL
- API auth setup
- site IDs / names
- BACnet gateway hostnames or IPs
- active Open-FDD rules directory (under `stack/`, mounted into containers)
- Docker/container naming
- LAN / OT network topology
- the actual HVAC system, naming conventions, and semantic model shape
- SPARQL queries or filters needed for that environment
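One way to keep these per-environment values out of the verification code is to resolve them from environment variables with bench defaults. A minimal sketch; the variable names (`OPENFDD_API_URL`, etc.) are illustrative conventions, not ones the repo defines:

```python
import os
from dataclasses import dataclass

@dataclass
class SiteEnv:
    frontend_url: str
    api_url: str
    bacnet_gateway: str
    site_id: str
    rules_dir: str

def load_site_env(env=None) -> SiteEnv:
    """Resolve per-site values from the environment, falling back to bench defaults."""
    e = os.environ if env is None else env
    return SiteEnv(
        frontend_url=e.get("OPENFDD_FRONTEND_URL", "http://localhost:3000"),
        api_url=e.get("OPENFDD_API_URL", "http://localhost:8000"),
        bacnet_gateway=e.get("BACNET_GATEWAY", "127.0.0.1"),
        site_id=e.get("OPENFDD_SITE_ID", "bench"),
        rules_dir=e.get("OPENFDD_RULES_DIR", "stack/rules"),
    )
```

With this shape, pointing the tooling at a new site is a matter of exporting different variables, not editing code or docs.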
What to do when deploying to another site
- Resolve the target frontend/backend/BACnet endpoints from the real launch context.
- Confirm auth works from the shell or runtime that will actually run the checks.
- Query the Open-FDD model first:
- sites
- equipment
- BACnet devices
- representative outdoor / plant / air / zone points
- Let the model decide what should be checked at that site.
- Keep repo docs generic; put site-specific truth into the Open-FDD model instead of hard-coding it into Markdown.
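"Query the model first" can be made concrete by keeping the discovery queries themselves reusable. A sketch of composable SPARQL discovery queries; the Brick-style class names (`brick:Site`, `ref:BACnetDevice`) are assumptions and will depend on the site's actual semantic model shape:

```python
# Discovery queries keyed by the model aspect they answer. The prefixes
# and class names below are placeholders for whatever the target model uses.
DISCOVERY_QUERIES = {
    "sites": """
        SELECT ?site WHERE { ?site a brick:Site . }
    """,
    "equipment": """
        SELECT ?equip ?type WHERE {
            ?equip a ?type .
            ?type rdfs:subClassOf* brick:Equipment .
        }
    """,
    "bacnet_devices": """
        SELECT ?dev ?addr WHERE {
            ?dev a ref:BACnetDevice .
            OPTIONAL { ?dev ref:address ?addr }
        }
    """,
}

def discovery_query(kind: str) -> str:
    """Return the discovery query for one model aspect, or raise if unknown."""
    try:
        return DISCOVERY_QUERIES[kind].strip()
    except KeyError:
        raise ValueError(f"no discovery query for {kind!r}") from None
```

The point is that the repo carries the query shapes while each site's model supplies the answers, matching the "site-specific truth lives in the model" rule above.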
Recommended first-pass deployment flow for a new building
Use this order on a fresh site:
- verify backend auth and reachability
- run SPARQL/model sanity checks
- discover representative operator-relevant points from the model
- run the daytime smoke suite first (`openclaw/bench/e2e/README.md`)
- fix auth/model/BACnet issues found there before trusting the overnight 12-hour run
- only then move into recurring integrity sweeps and overnight review
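The order above can be sketched as a fail-fast pipeline: each stage runs only if every earlier one passed, so the overnight run is never trusted before auth, model, and BACnet issues are fixed. Stage names mirror the list; the check callables are placeholders for the real checks:

```python
from typing import Callable

Stage = tuple[str, Callable[[], bool]]

def run_first_pass(stages: list[Stage]) -> list[str]:
    """Run stages in order; stop at the first failure and report progress."""
    results = []
    for name, check in stages:
        if check():
            results.append(f"PASS {name}")
        else:
            results.append(f"FAIL {name} (fix before continuing)")
            break
    return results
```

A usage sketch for a fresh site would pass stages like `("backend auth", ...)`, `("model sanity", ...)`, `("daytime smoke", ...)`, `("overnight", ...)` in that order.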
Same-bench OpenClaw clone checklist
If OpenClaw is cloned onto another machine that points at the same test bench, the new clone should read these docs first:
- Root `README.md`
- `openclaw/README.md` (OpenClaw context bootstrap)
- Open-FDD integrity sweep
- BACnet-to-fault verification
- `openclaw/bench/fake_bacnet_devices/README.md` (monitor the fake fault schedule)
And it should know these durable facts immediately:
- the fake devices intentionally inject faults on a UTC schedule
- the 180°F spike is expected only during the shared out-of-bounds window
- the correct way to judge that spike is to compare live BACnet RPC reads against `openclaw/bench/fake_bacnet_devices/fault_schedule.py`
- the integrity sweep should classify graph drift, auth drift, BACnet drift, and product behavior separately
- durable reasoning belongs in this repo, not only in local OpenClaw chat memory
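The spike-judging rule above can be sketched as a schedule-aware classifier: a 180°F reading is legitimate only inside the shared out-of-bounds UTC window, and everything else is drift, not a product bug. The window bounds here are placeholders; the real ones live in `openclaw/bench/fake_bacnet_devices/fault_schedule.py`:

```python
from datetime import datetime, time

# Placeholder window; read the real bounds from fault_schedule.py.
OUT_OF_BOUNDS_START = time(2, 0)   # 02:00 UTC
OUT_OF_BOUNDS_END = time(3, 0)    # 03:00 UTC

def spike_is_expected(now_utc: datetime) -> bool:
    """True if an out-of-bounds value (e.g. 180F) is scheduled right now."""
    return OUT_OF_BOUNDS_START <= now_utc.time() < OUT_OF_BOUNDS_END

def classify_reading(value_f: float, now_utc: datetime) -> str:
    """Classify a live BACnet read separately from product behavior, per the sweep rules."""
    if value_f >= 180.0:
        return "expected-injected-fault" if spike_is_expected(now_utc) else "bacnet-drift"
    return "normal"
```

Classifying the read this way keeps bench behavior (the scheduled injection) from being logged as a product regression, and vice versa.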
Portability goal
A clone of this repo should make it easy for another engineer to answer:
- Is Open-FDD healthy?
- Is BACnet scraping working?
- Is the building model usable?
- Are faults being computed here?
- Are regressions visible here before they affect a real deployment?
Engineering principle
Keep environment-specific values configurable and keep the verification logic reusable.
In practice, deployment to another site usually looks like this:
- Open-FDD runs on some other server (often a Linux box on the OT LAN)
- tooling is cloned onto another machine
- the tooling is pointed at the target Open-FDD URL, auth, BACnet gateway, and rule/model context for that environment
The tooling should therefore be robust to:
- different LAN IP schemes
- different Open-FDD hosts
- different HVAC systems and point naming
- different site/equipment modeling shapes
- different SPARQL needs per deployment
The goal is portability with context, not a one-off lab setup.