Google, a subsidiary of Alphabet Inc., has launched the AI Cyber Defense Initiative to harness the security potential of artificial intelligence. The new strategy aims to protect and advance the digital future through the practical application of AI technology and policy.
The AI Cyber Defense Initiative is designed to revolutionize cybersecurity by using AI to tackle the so-called "Defender's Dilemma": the imbalance whereby defenders must secure everything, all the time, while attackers need to find only one weakness to succeed. Cybersecurity threats have become a significant concern for security experts, governments, businesses, and society at large. AI can give cybersecurity professionals a crucial advantage over attackers, but the same technology can also be exploited by them.
AI allows security experts to scale threat detection, malware analysis, vulnerability detection and repair, and incident response. Under the AI Cyber Defense Initiative, Google will launch a new "AI for Cybersecurity" startup cohort of 17 startups from the UK, US, and EU. Operating under the umbrella of the Google for Startups Growth Academy's AI for Cybersecurity Program, the cohort is intended to bolster the transatlantic cybersecurity ecosystem, complementing Google's $15 million investment in cybersecurity training across Europe.
Alongside this, Google is investing $2 million in cybersecurity research initiatives and open sourcing Magika, its AI-powered file type identification system. Magika already helps secure products such as Gmail, Drive, and Safe Browsing, and is used by the VirusTotal team; open sourcing it is intended to foster a safer digital environment.
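To make the Magika announcement concrete, the sketch below shows how a defender might use the open-sourced library to identify file types during triage, before routing suspicious files to deeper analysis. It is a minimal illustration only: the `magika` package name, the `identify_path` method, and result attributes such as `output.ct_label` and `output.score` follow the API as initially published and may differ between versions, and the watch list of labels is purely hypothetical.

```python
# Minimal sketch: file type identification with Magika as a triage step.
# Assumes the `magika` Python package (pip install magika); method and
# attribute names follow the initially published API and may vary by version.
from pathlib import Path

from magika import Magika


def triage(detector: Magika, path: Path) -> None:
    result = detector.identify_path(path)
    label = result.output.ct_label   # detected content type, e.g. "pdf"
    score = result.output.score      # model confidence in [0, 1]
    print(f"{path.name}: {label} (confidence {score:.2f})")
    # Hypothetical watch list: route executables and scripts to deeper scanning.
    if label in {"elf", "pebin", "javascript", "vba"}:
        print("  -> flag for sandboxed malware analysis")


if __name__ == "__main__":
    detector = Magika()  # instantiate once and reuse across files
    triage(detector, Path("suspicious_attachment.bin"))
```

In practice a pipeline like Gmail's or VirusTotal's would run such identification at scale and feed the results to downstream scanners; the point here is simply that an AI-based detector can classify content by what it actually is rather than by its file extension.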
By the close of 2024, Google's investment in data centers in Europe will exceed $5 billion. This will support secure and reliable access to an assortment of digital services, including advanced AI capabilities such as its Vertex AI platform.
Beyond ensuring its infrastructure is ready to support AI, Google will also release new tools for defenders, launch new research, and provide AI security training. The Secure AI Framework is Google's initiative to collaborate on securing AI systems and to build a more secure AI ecosystem.
To stimulate groundbreaking developments in AI security, the $2 million in research funding will go toward grants and strategic partnerships. These will bolster existing cybersecurity research into AI applications, focusing on areas such as enhancing code verification, understanding how AI can assist with cyber offense and defense techniques, and developing language models that are more resilient to threats.
The research will be undertaken by scholars at institutions including the University of Chicago, Carnegie Mellon, and Stanford. This complements Google's ongoing efforts to strengthen the cybersecurity ecosystem, which included a $12 million commitment to New York's research ecosystem last year.
In conclusion, Google underscores its excitement about AI's potential to resolve generational security challenges while ensuring individuals and businesses enjoy a safe, reliable, and trustworthy digital world.