Google DeepMind's Big Sleep Finds 20 Software Vulnerabilities; Microsoft & Google Unveil AI Cybersecurity Tools
Google DeepMind's Big Sleep project has made a significant breakthrough, uncovering 20 vulnerabilities in widely used applications. Meanwhile, Microsoft and Google have separately unveiled early-stage AI-powered cybersecurity tools, including Microsoft's Project Ire and a comparable Google offering.
Big Sleep, a collaboration between Google DeepMind and Google's Project Zero security team, identified the 20 flaws in popular software, underscoring the project's goal of using AI to find security gaps before attackers can exploit them.
Microsoft's Project Ire is another notable development in AI-driven cybersecurity. It currently operates off-device, analysing data sent to Microsoft's servers, but Microsoft envisions a future version that runs on-device and detects malware directly in a computer's memory. The tool reportedly identifies malware with 98% accuracy and a 2% false-positive rate.
Daniel Stenberg, maintainer of the curl project, has raised concerns about AI-generated bug reports and has moved to ban such submissions because of the time and resources wasted triaging false positives. His stance reflects growing unease among software developers and package maintainers about the rising volume of AI-generated bug reports.
While the AI-powered cybersecurity tools from Microsoft and Google show promise, widespread real-world deployment remains years away. AI advocates and investors are pushing to have such tools in full production by the end of 2025, but experts caution that a decade of development may still be needed. In the meantime, software maintainers continue to grapple with AI-generated bug reports, underscoring the need for responsible AI integration in software development.