If you don’t know what Project Zero is and have not been in awe of what it has achieved in the security space, you simply have not been paying attention these last few years. Its elite hackers and security researchers work relentlessly to uncover zero-day vulnerabilities in Google’s products and beyond. The same goes if you are unaware of DeepMind, Google’s AI research lab. Now, these two technological behemoths have joined forces to create Big Sleep, a large-language-model-assisted vulnerability research agent. With what Google says is the first public example of an AI agent discovering a previously unknown, exploitable memory-safety vulnerability in widely used real-world software, Big Sleep has come of age.

Google Uses Large Language Model To Catch Zero-Day Vulnerability In Real-World Code

In a Nov. 1 announcement on Google’s Project Zero blog, the company confirmed that Project Naptime, its large-language-model-assisted framework for security vulnerability research, has evolved into Big Sleep. This collaboration between some of the very best ethical hackers, from Project Zero, and the very best AI researchers, from Google DeepMind, has produced an LLM-powered agent that can go out and uncover real security vulnerabilities in widely used code. In the case of this world first, the Big Sleep team says it found “an exploitable stack buffer underflow in SQLite, a widely used open source database engine.”

The zero-day vulnerability was reported to the SQLite development team in October and was fixed the same day. “We found this issue before it appeared in an official release,” the Big Sleep team said, “so SQLite users were not impacted.”
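For readers wondering what a stack buffer underflow actually looks like, here is a minimal sketch of the bug class only. The names and logic below are invented for illustration and are not the actual SQLite code: the pattern is a write landing below the start of a stack-allocated buffer, typically because a negative index such as a -1 sentinel slips past a check that only guards the upper bound.

```c
#include <stdio.h>

/* Illustrative only -- NOT the SQLite code. A stack buffer underflow
 * writes below the start of a stack-allocated array, typically via an
 * index that can go negative. */
void process(int field_index) {
    int flags[8] = {0};
    /* A sentinel value such as -1 (say, "no column") slips through a
     * check that only guards the upper bound. */
    if (field_index < 8) {
        flags[field_index] = 1;  /* field_index == -1 writes below flags[0] */
    }
}

int main(void) {
    process(-1);  /* corrupts adjacent stack memory: undefined behavior */
    printf("may appear to work -- until it doesn't\n");
    return 0;
}
```

Code like this can appear to run fine until the corrupted stack memory is actually used, which is exactly why such bugs are both hard to spot and attractive to attackers.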

AI Could Be The Future Of Fuzzing, The Google Big Sleep Team Says

Although you may not have heard the term fuzzing before, it has been a staple of security research for decades. Fuzzing is the practice of feeding random or malformed data to a program in order to trigger errors in its code. While fuzzing is widely accepted as an essential tool for anyone hunting vulnerabilities, hackers will readily admit it cannot find everything. “We need an approach that can help defenders to find the bugs that are difficult (or impossible) to find by fuzzing,” the Big Sleep team said, adding that it hopes AI can fill that gap and find “vulnerabilities in software before it’s even released,” leaving attackers little scope to strike.
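To make that concrete, here is a minimal sketch of what a modern fuzz harness looks like, using LLVM’s libFuzzer entry point. The parse_record function is a hypothetical stand-in for whatever code is under test, with a deliberately planted bug:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical function under test -- a stand-in for real parsing code.
 * It crashes only on inputs beginning with "SQL\0", the kind of rare
 * branch that random mutation eventually stumbles into. */
static void parse_record(const uint8_t *data, size_t size) {
    if (size >= 4 && data[0] == 'S' && data[1] == 'Q' &&
        data[2] == 'L' && data[3] == 0) {
        volatile int *p = NULL;
        *p = 1;  /* deliberate null-pointer write: the planted "bug" */
    }
}

/* libFuzzer entry point: the fuzzing engine calls this repeatedly with
 * mutated inputs. Build with: clang -fsanitize=fuzzer,address fuzz.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}
```

Random mutation is good at stumbling into branches like the one above, but far weaker against bugs that require reasoning about a program’s logic. That is precisely the gap Big Sleep is designed to fill.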

“Finding a vulnerability in a widely-used and well-fuzzed open-source project is an exciting result,” the Google Big Sleep team said, while admitting the results are currently “highly experimental.” At present, the Big Sleep agent is seen as no more effective than a target-specific fuzzer. It is the near future, however, that looks bright. “This effort will lead to a significant advantage to defenders,” Google’s Big Sleep team said, “with the potential not only to find crashing test cases, but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future.”
