Leading global cybersecurity agencies, including CISA, the FBI, NSA, and the Australian Cyber Security Centre, have issued the first unified guidance on safely integrating artificial intelligence (AI) into operational technology (OT) and critical infrastructure. The document, titled “Principles for the Secure Integration of Artificial Intelligence in Operational Technology,” provides operators with practical guardrails to balance innovation with safety and security.
The guidance represents a major shift from theoretical debate to actionable best practices, acknowledging AI’s potential to enhance efficiency while highlighting the risks it poses to physical safety, reliability, and operational integrity.
Key Principles for AI Integration in OT
The guidance emphasizes a human-in-the-loop approach, ensuring AI serves as an adviser rather than a controller. Operators are encouraged to maintain oversight, validate AI recommendations against physical measurements, and preserve essential manual skills to prevent errors during AI failures or outages.
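The human-in-the-loop pattern can be pictured as a simple gate: an AI recommendation is surfaced to the operator only after being cross-checked against an independent physical measurement, and the final decision always stays with a human. A minimal sketch (all names, setpoints, and thresholds here are hypothetical illustrations, not from the guidance):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI-generated suggestion -- advisory only, never an automatic action."""
    setpoint: float  # e.g., a suggested pump speed (%)
    rationale: str

def validate_against_sensor(rec: Recommendation, measured: float,
                            max_deviation: float = 10.0) -> str:
    """Cross-check an AI recommendation against a field measurement.

    Returns an advisory status string for the operator; nothing is
    actuated automatically.
    """
    if abs(rec.setpoint - measured) > max_deviation:
        return "FLAG: recommendation diverges from field reading; operator review required"
    return "OK: recommendation consistent with field reading; operator may apply"

# The model suggests 82%, but the flow sensor implies the pump runs at 45%.
rec = Recommendation(setpoint=82.0, rationale="forecasted demand spike")
print(validate_against_sensor(rec, measured=45.0))
# prints the FLAG message, prompting human review
```

The point of the sketch is the separation of duties: the model proposes, the physical measurement arbitrates, and the operator disposes.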
Notably, the guidance differentiates safety from security, cautioning that large language models (LLMs) should not make direct safety decisions in OT environments. For example, a generative AI misinterpreting sensor anomalies could inadvertently adjust chemical dosing at a water treatment facility, creating immediate safety risks even if cybersecurity controls are intact.
Architecture Recommendations
To minimize new attack vectors, the agencies recommend push-based or brokered architectures that move summarized insights out of OT systems without granting AI systems inbound access to the OT network. Predictive machine learning is encouraged at levels 0–3 of OT operations (e.g., forecasting pump failures or detecting turbine anomalies), while LLMs are better suited to levels 4–5, including documentation, work order generation, and regulatory reporting.
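One way to picture the push-based pattern: the OT side periodically reduces raw telemetry to an aggregate and publishes only that summary outbound, so the AI platform never needs to open a connection into the OT zone. A toy sketch, where an in-process queue stands in for whatever broker or data diode an operator would actually deploy (all names are illustrative):

```python
import statistics
from queue import Queue

# The broker boundary: OT pushes summaries out; nothing flows back in.
outbound_broker: Queue = Queue()

def summarize_and_push(raw_readings: list[float], tag: str) -> dict:
    """Reduce raw OT telemetry to an aggregate and push it outbound.

    Only the summary crosses the boundary; raw process data stays
    inside the OT zone, and the AI side receives data without being
    granted inbound access.
    """
    summary = {
        "tag": tag,
        "count": len(raw_readings),
        "mean": round(statistics.mean(raw_readings), 2),
        "max": max(raw_readings),
    }
    outbound_broker.put(summary)
    return summary

summarize_and_push([71.2, 70.8, 74.9, 73.1], tag="turbine_bearing_temp_C")
print(outbound_broker.get())  # the AI platform consumes only this summary
```

The design choice mirrors the article's point: by making the OT side the initiator, the data flow is one-way by construction rather than by firewall rule alone.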
The guidance also stresses the importance of vendor transparency. Organizations should require SBOMs or AIBOMs detailing AI model sources, training data, and hosting locations, ensuring third-party software does not introduce hidden risks or process sensitive data without disclosure.
Human Responsibility Remains Central
Across all recommendations, the guidance reinforces that humans retain ultimate accountability for functional safety. Regular validation of AI outputs, monitoring for model drift, and maintaining operator engagement are critical to ensure AI enhances rather than undermines operational resilience.
Diana Kelley, CISO at Noma Security, notes:
“This guidance provides a clear roadmap for safely integrating AI in OT. Resilience grows when humans and machines work in partnership, and operators must remain in control of critical decisions.”
Next Steps for Critical Infrastructure Operators
Organizations are advised to:
- Review current AI deployments across OT systems.
- Establish or refresh validation procedures for AI outputs.
- Engage vendors on transparency and security expectations before new AI capabilities are deployed.
As AI adoption accelerates, this joint global guidance provides a practical framework for safely harnessing AI while ensuring operational reliability, human oversight, and the integrity of critical infrastructure.