Google’s AI agent, “Big Sleep,” has identified and prevented the exploitation of a critical memory corruption vulnerability (CVE-2025-6965) in the SQLite open-source database engine. This marks a significant milestone: it is believed to be the first time an AI agent has directly thwarted an active, in-the-wild exploitation attempt before malicious actors could capitalize on the flaw. Developed through a collaboration between DeepMind and Google Project Zero, Big Sleep had previously detected another SQLite vulnerability in late 2024, demonstrating a repeatable capability for proactive vulnerability discovery.
Alongside this breakthrough, Google has published a white paper detailing its methodology for building secure AI agents, advocating a hybrid defense-in-depth strategy. The approach layers traditional, deterministic security controls with dynamic, reasoning-based defenses: hard perimeters around the agent’s operational environment mitigate risks such as malicious actions triggered by prompt injection, while the agent’s operations remain transparent and observable. This multi-layered framework acknowledges that neither rule-based systems nor AI-based judgment alone provides comprehensive protection.
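The layered idea can be sketched in miniature: a deterministic rule layer that model output cannot override, followed by a reasoning-based layer that reviews what the rules permit. Everything below (the `AgentAction` type, the `guard` function, the keyword heuristic standing in for a model-based judge) is a hypothetical illustration assumed for this sketch, not Google’s actual design or API.

```python
# Hypothetical sketch of a hybrid defense-in-depth guard for agent actions.
# Layer 1 is deterministic (an allowlist no prompt can change); layer 2 stands
# in for a reasoning-based check, stubbed here with a keyword heuristic.
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str       # e.g. "read_file", "send_email"
    argument: str   # tool input, possibly influenced by untrusted text

# Layer 1: deterministic control -- a hard allowlist enforced outside the model.
ALLOWED_TOOLS = {"read_file", "search_docs"}

def deterministic_check(action: AgentAction) -> bool:
    return action.tool in ALLOWED_TOOLS

# Layer 2: reasoning-based defense -- in a real system, a model would judge
# whether the action looks like a prompt-injection side effect.
SUSPICIOUS_MARKERS = ("ignore previous instructions", "exfiltrate")

def reasoning_check(action: AgentAction) -> bool:
    return not any(m in action.argument.lower() for m in SUSPICIOUS_MARKERS)

def guard(action: AgentAction) -> bool:
    """An action executes only if every layer approves (defense in depth)."""
    approved = deterministic_check(action) and reasoning_check(action)
    print(f"{action.tool}({action.argument!r}) -> {'ALLOW' if approved else 'BLOCK'}")
    return approved

guard(AgentAction("read_file", "report.txt"))                      # allowed
guard(AgentAction("send_email", "quarterly report"))               # blocked by layer 1
guard(AgentAction("search_docs", "ignore previous instructions"))  # blocked by layer 2
```

The point of the composition is that the layers fail independently: a prompt injection that fools the reasoning layer still cannot invoke a tool outside the deterministic allowlist, and vice versa.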
Reference: https://thehackernews.com/2025/07/google-ai-big-sleep-stops-exploitation.html
Nexus Raises $700M, Rejects AI-Only Investment Trend
The venture capital landscape continues shifting as generative AI and LLMs redraw the lines for innovation and investment. Nexus Venture Partners, a leading VC firm with dual operations in India and the US, has just announced a new $700 million fund. Unlike...


