The Cybersecurity and Infrastructure Security Agency (CISA), along with the Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC), the National Security Agency (NSA), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK), released this Joint Guidance, which discusses key cybersecurity challenges and risks associated with introducing agentic artificial intelligence (AI) into information technology environments, as well as best practices for securing agentic AI systems. The Joint Guidance also provides actionable recommendations to help organizations anticipate, assess, and mitigate agentic AI-specific risks.
Agentic AI systems increasingly operate across critical infrastructure and defense sectors and support mission-critical capabilities. As agentic AI systems play a growing operational role, it is crucial for defenders to implement security controls that protect national security and critical infrastructure from agentic AI-specific risks. Agentic AI can automate repetitive, well-defined, and low-risk tasks. However, these additional opportunities come with additional risks. Like other AI services, agentic AI can be misused or misappropriated, leading to productivity losses, service disruption, privacy breaches, or cybersecurity incidents. Organizations must therefore anticipate what could go wrong, assess how agentic AI risk scenarios might affect operations, and establish ongoing visibility and assurance to maintain confidence in their agentic AI investments. Where possible, organizations should also consider a full spectrum of solutions for repetitive tasks, including reducing or eliminating low-value processes, which may carry lower risk than agentic AI solutions.
The authoring agencies strongly recommend aligning agentic AI risks and mitigation strategies with your organization's existing security model and risk posture. The authoring agencies further recommend adopting agentic AI with security in mind, assessing its use, and never granting it broad or unrestricted access, especially to sensitive data or critical systems. Additionally, organizations should only use agentic AI for low-risk and non-sensitive tasks.