Industry leaders converge to provide authoritative research, tools, education and certification for AI safety and security.

In an era where artificial intelligence (AI) rapidly reshapes technology and security, CSA launches the AI Safety Initiative, a pioneering effort dedicated to establishing and disseminating trusted security best practices for AI. With an initial focus on Generative AI, our mission is to empower organizations of all sizes with the guidelines, templates, and knowledge they need to deploy AI solutions that are safe, responsible, and compliant.
We are pleased to share some highlights from our most recent event.
Agentic AI is the next major development in artificial intelligence: software agents that can set goals, make decisions, and learn on their own, without constant human direction. Traditional systems simply follow predefined rules, but AI agents can adapt and improve as their environment changes. According to Gartner, by 2026, companies using agentic AI for security will find and fix supply chain threats 60% faster than those relying on older security tools.
Artificial intelligence (AI) is rapidly transforming various aspects of our lives, driving increased efficiency and automation. However, this technological advancement also presents significant challenges to cybersecurity. Cybercriminals, unconstrained by ethical considerations, are increasingly leveraging AI for malicious purposes, with social engineering attacks being a prime target. The growing accessibility of AI tools further exacerbates this issue, making it easier for even less sophisticated actors to deploy these tactics.