April 1 Roundup: OpenAI Locks In $122B at an $852B Valuation, Anthropic Suffers a Second Leak in a Week, Google Ships Antigravity, Meta Turns AI Inward, Calacanis Bets $500M on Decentralized AI, and Anthropic Signs Australia Data Pact
OpenAI closes the largest private funding round in history — $122 billion at an $852 billion valuation — with Amazon, Nvidia, and SoftBank anchoring the deal. Anthropic's operational security collapses for the second time in a week as Claude Code's full source code leaks via npm. Google launches Antigravity, a full-stack vibe coding agent inside AI Studio. Meta deploys AI to accelerate its own product safety reviews. Jason Calacanis makes a bold $500M personal bet on Bittensor's decentralized AI network. And Anthropic signs a deal with Australia to share AI economic impact data — a quiet move that may reshape how governments track AI adoption.
1. OpenAI Closes the Largest Private Funding Round in History: $122 Billion at $852 Billion
OpenAI on Tuesday announced the close of its record-shattering funding round: $122 billion in committed capital at a post-money valuation of $852 billion. The round was co-led by SoftBank, Andreessen Horowitz, and D. E. Shaw Ventures, with strategic anchors including Amazon ($50 billion), Nvidia ($30 billion), and SoftBank itself ($30 billion). An additional $12 billion came from a broader pool of investors, including $3 billion from individual investors — the first time OpenAI has opened participation through bank channels.
The numbers are staggering by any measure. ChatGPT now serves more than 900 million weekly active users and over 50 million subscribers. OpenAI said it is generating $2 billion in revenue per month (a $24 billion annualized run rate), up from $13.1 billion for all of last year. Microsoft, OpenAI's longtime partner, also participated in the round, though the company did not disclose the size of its investment.
"AI is driving productivity gains, accelerating scientific discovery, and expanding what people and organizations can build. This funding gives us the resources to continue to lead at the scale this moment demands." — OpenAI
But the money comes with pressure. OpenAI has been retreating from costly experiments — shuttering Sora, pulling back on data center plans, and refocusing on revenue-generating products — as it gears up for a potential IPO. The Guardian reported that OpenAI still loses billions of dollars a year and internal forecasts don't project profitability until 2030.
"Moments like this do not come often. The capital being deployed today is helping build the infrastructure layer for intelligence itself." — OpenAI via CNBC
$852 billion is a number that demands justification — and the clock is ticking. OpenAI's $2B/month revenue run rate is impressive, but the company is still deeply unprofitable, and no one has ever taken a company this expensive public while burning cash at this pace. The strategic logic is straightforward: lock in compute infrastructure now, build the platform layer that enterprises can't leave, and reach profitability before investor patience runs out. For enterprises, the signal is clear — OpenAI isn't going anywhere, and its API and enterprise products will get better and cheaper as scale economics kick in. But the real risk isn't OpenAI's survival; it's the gravitational distortion this much capital creates. When one player has $122B and an $852B valuation, it reshapes every partnership, every hiring decision, and every competitive dynamic in the industry. Smaller players feel the squeeze.
2. Anthropic's Operational Security Collapses Again: Claude Code's Full Source Code Leaked via npm
For a company that has built its entire brand around being the careful, safety-first AI lab, Anthropic is having a spectacularly bad week. On Tuesday — less than five days after Fortune revealed that nearly 3,000 internal files had been left in a publicly accessible data store — Anthropic accidentally published the full source code of Claude Code, one of its most strategically important products, in a routine npm package update.
When Anthropic pushed out version 2.1.88 of its Claude Code software package, it accidentally included source maps (build artifacts that map minified JavaScript back to its original, readable source) that exposed nearly 2,000 source code files and more than 512,000 lines of code — essentially the full architectural blueprint for the command-line coding tool that has become a serious competitive threat to OpenAI. Security researcher Chaofan Shou noticed almost immediately and posted about it on X.
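To see why shipping source maps is so damaging, note that the standard source map v3 format can embed the original, un-minified source text verbatim in a `sourcesContent` field, so recovering the files is trivial. A minimal sketch with illustrative data (not Anthropic's actual files or structure):

```python
import json

# A minimal source map in the standard v3 format. Real maps shipped in an
# npm package look like this; when `sourcesContent` is populated, it holds
# the original source text verbatim. (Illustrative data only.)
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.min.js",
  "sources": ["src/agent.ts", "src/tools.ts"],
  "sourcesContent": ["export const agent = () => {}", "export const tools = []"],
  "mappings": ""
}
""")

# Recovering the original files is a two-line loop, which is why shipping
# .map files alongside a bundle is equivalent to shipping the source tree.
for name, content in zip(source_map["sources"], source_map["sourcesContent"]):
    print(f"--- {name} ---")
    print(content)
```

Many build tools populate `sourcesContent` by default precisely so debuggers work without access to the repository — convenient in development, catastrophic in a public release.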
"This was a release packaging issue caused by human error, not a security breach." — Anthropic spokesperson to TechCrunch
Developers began publishing detailed analyses almost immediately. One described Claude Code as "a production-grade developer experience, not just a wrapper around an API." The leak exposed the software scaffolding — instructions telling the model how to behave, what tools to use, and where its limits are — but not the underlying AI model itself.
The timing is particularly brutal. Claude Code has become so formidable that the Wall Street Journal reported OpenAI shut down Sora partly to redirect resources toward competing with it. Now competitors have a detailed look at how it's built.
Two major leaks in five days from the company that positions itself as the responsible AI lab is more than embarrassing — it's a brand crisis. The first leak (the Mythos model details and 3,000 internal files in an unsecured data store) could be written off as a CMS misconfiguration. The second (shipping source maps with a production npm package) suggests a systemic process problem. Neither involved a malicious attack; both were preventable with basic DevOps hygiene — artifact scanning, pre-publish hooks, staging environment reviews. For enterprises evaluating Claude for sensitive workloads, the question becomes: if Anthropic can't protect its own crown jewels, how much do you trust its security promises around your data? The technical capabilities of Claude Code remain excellent. But trust is a product feature, and Anthropic just shipped a downgrade.
3. Google Launches Antigravity: A Full-Stack Vibe Coding Agent Inside AI Studio
Google is making its boldest move yet to own the developer workflow. The company launched a completely upgraded "vibe coding" experience in Google AI Studio, powered by a new agent called Google Antigravity that turns natural-language prompts into production-ready web applications — complete with databases, user authentication, real-time multiplayer features, and deployment pipelines.
The capabilities go far beyond a code generator. Antigravity can build multiplayer games, collaborative workspaces, and shared tools with real-time connectivity. It detects when an app needs a database or login system and automatically provisions Cloud Firestore and Firebase Authentication. Developers can bring their own API keys for external services like payment processors or Google Maps, stored securely in a new Secrets Manager. The agent supports React, Angular, and Next.js out of the box, with deployment to Google Antigravity hosting coming soon.
"The agent now maintains a deeper understanding of your entire project structure and chat history, enabling faster iteration and more precise multi-step code edits." — Google AI Blog
Google also announced built-in Firebase integration for backend services, session persistence across devices (close the browser tab and pick up where you left off), and automatic library management — the agent figures out what dependencies your project needs and installs them without prompting.
Google is playing a different game from OpenAI and Anthropic. While its competitors fight over model benchmarks and API pricing, Google is building a vertically integrated application factory — from prompt to production deployment in a single tool, with its own hosting, database, auth, and secrets management. The Firebase integration is the key moat. Once your app's data lives in Google's stack, switching costs become real. This is Google's cloud revenue play disguised as a developer experience product. For businesses evaluating no-code and low-code platforms, Antigravity in AI Studio is now a serious contender — especially for internal tools, prototypes, and MVPs. The question is whether "vibe-coded" apps can meet production-grade reliability and security standards. Google is betting yes.
4. Meta Turns AI Inward: Using Models to Police Its Own Product Development
While most AI news focuses on external products, Meta made a quieter but potentially more significant announcement on Tuesday: the company is now using artificial intelligence to handle portions of its internal product safety and risk review process. According to a PYMNTS report on Meta's blog post, AI is being deployed to accelerate the review pipeline that evaluates new features and products for potential harms before they ship.
The move comes after a turbulent month for Meta. The company laid off approximately 700 employees across Reality Labs, Facebook, and other divisions — while simultaneously unveiling stock packages for six top executives worth up to $921 million each over five years. Meta also boosted investment in its El Paso, Texas AI data center to $10 billion, aiming for 1-gigawatt capacity. And CEO Mark Zuckerberg is reportedly developing a personal AI agent to help him execute CEO-level tasks.
"Meta is using artificial intelligence to handle some of the tasks that help it build safer products and services." — Meta Newsroom
Separately, a landmark jury verdict found Meta and YouTube liable for platform design that harmed a young user, sending both companies' stock prices down and raising new questions about the regulatory pressure bearing down on Big Tech.
Using AI to review AI products for safety is a recursive loop that should make governance teams both excited and nervous. The efficiency gains are obvious — automated risk review at the speed of development means fewer bottlenecks and faster shipping. But the philosophical question is whether AI can reliably identify the harms that AI products might cause. Meta's track record on content moderation gives reason for skepticism. For enterprise leaders, the broader signal is more valuable: Meta is demonstrating that AI-powered internal governance is becoming a competitive advantage, not just a compliance checkbox. Companies that can review, audit, and approve AI-powered features faster — without sacrificing safety — will ship faster. The companies that figure out human-in-the-loop oversight at AI speed win this decade.
5. Jason Calacanis Bets $500 Million on Bittensor — and Calls TAO the "Bitcoin of AI"
Angel investor and All-In podcast co-host Jason Calacanis dropped a bombshell this week: a personal investment of approximately $500 million in Bittensor's TAO token, the native cryptocurrency of a decentralized AI network. On This Week in Startups, Calacanis predicted a 200x rally for TAO to a market capitalization of $500 billion within five to ten years, framing Bittensor as potentially the "Bitcoin of AI" — a foundational intelligence layer for an AI-native internet.
The bet sent TAO surging. The token hit a four-month high, with additional catalysts including Nvidia CEO Jensen Huang's recent comments on decentralized AI and Grayscale opening a dedicated Bittensor Trust. Network participation is rising, with over $620 million in subnet staking and 19% of total TAO supply locked up.
On the same week's All-In podcast, the "besties" dedicated significant time to what they called "Anthropic's generational run" and debated whether OpenAI is in "focus mode or panic mode" — questioning the company's rapid product shutdowns and cost-cutting measures just days after closing its massive funding round.
"If Bitcoin was the money layer of crypto and Ethereum became the application layer, TAO bulls believe Bittensor could become the intelligence layer for an AI-native internet." — Bitcoin News
A $500M personal bet from one of Silicon Valley's most connected investors is a strong signal — but investors should separate the thesis from the hype. The decentralized AI thesis has real structural appeal: if centralized labs like OpenAI and Anthropic control the frontier, decentralized networks offer an alternative compute and intelligence market. Bittensor's subnet architecture is genuinely novel. But "the Bitcoin of AI" is marketing language, not technical analysis. The challenges are formidable — network quality control, Sybil resistance, and proving that decentralized inference can compete with centralized labs on actual performance. For enterprises, the practical question is whether any production workload will run on Bittensor in the next 2-3 years. The answer today is probably no. But the infrastructure thesis Calacanis is betting on — that AI compute will decentralize the way money did — deserves monitoring.
6. Anthropic Signs Data-Sharing Pact with Australia to Track AI's Economic Impact
In a move that flew under the radar amid the leak chaos, Anthropic announced it will sign an agreement to share its economic index data with the Australian government. The deal will help Canberra track artificial intelligence adoption across the Australian economy and measure its impact on workers and jobs — a first-of-its-kind arrangement between a frontier AI lab and a national government.
The announcement came as Anthropic opened its fourth office in the Asia-Pacific region, in Sydney, expanding its global footprint despite the ongoing political turmoil at home. The company also recently launched the Anthropic Institute, a new entity focused on AI safety research and policy engagement.
The timing is strategic. While Anthropic battles the U.S. Department of Defense over its "supply chain risk" designation — a fight in which it recently won a preliminary injunction, though legal experts say the company remains at serious risk — it's building international relationships that provide both revenue diversification and political cover.
This is a quiet but strategically brilliant move. By giving a sovereign government direct access to its economic impact data, Anthropic is doing two things simultaneously: positioning itself as the transparent, cooperative AI company that governments can trust, and creating an institutional dependency that makes it harder for any future administration to shut it out. If the DoD saga taught Anthropic anything, it's that political relationships are as important as model capabilities. The Australia deal also establishes a template that other governments will want to replicate — making Anthropic the default partner for national AI economic monitoring. For enterprises operating in the APAC region, Anthropic's expanding Asia-Pacific presence signals growing API availability, local compliance support, and potentially better latency for Claude-based workloads.
7. "The AI Doc" Puts Silicon Valley's Biggest Names on the Record — and Diamandis Steals the Show
A new documentary making waves this week — The AI Doc: Or How I Became an Apocaloptimist, now streaming on Apple TV — has become the cultural artifact of the moment for anyone trying to understand the stakes of artificial intelligence. The film features interviews with Sam Altman, Demis Hassabis, Dario Amodei, and other AI leaders, told through the lens of a father-to-be trying to understand the world his child will inherit.
Peter Diamandis features prominently, delivering his trademark abundance message: "We're going to merge with AI. We're going to merge with technology." Business Insider and The New Yorker both reviewed the film extensively, with the latter asking whether AI needs a constitution.
"This is the most extraordinary time ever to be alive." — Peter Diamandis, in The AI Doc, via The New Yorker
Meanwhile, Diamandis' latest Metatrends newsletter continued his thesis on recursive self-improvement, connecting Elon Musk's TeraFab chip fabrication plant, AI-designed chip architectures, and the coming compute abundance. The Eric Schmidt conversation at the Abundance Summit — covering the "92-gigawatt problem" and the path to superintelligence — added further weight to the narrative.
The cultural mainstreaming of AI risk-and-opportunity narratives matters more than most technologists appreciate. When a documentary featuring Altman, Hassabis, Amodei, and Diamandis hits Apple TV and gets reviewed by The New Yorker, AI stops being a tech industry story and becomes a civilizational one. That changes the politics. It changes the regulatory environment. And it changes the talent pipeline — the next generation of engineers, policymakers, and entrepreneurs will be shaped by these cultural artifacts as much as by any research paper. Diamandis' relentless optimism serves as a necessary counterweight to the doomsday narratives, but the most honest reading of the current moment is that both camps are partially right: the opportunity is extraordinary, the risks are real, and the institutions meant to govern this transition are running behind.
Why It All Matters
Today's stories reveal an AI industry operating at two speeds simultaneously. At the capital layer, the scale is almost incomprehensible: $122 billion in a single funding round, $852 billion valuations, $10 billion data centers, $500 million personal crypto bets. The money flowing into AI infrastructure dwarfs anything the tech industry has seen before.
At the operational layer, the cracks are showing. The company that positions itself as the careful, safety-first lab can't stop leaking its own internal files. The company with the biggest war chest still can't turn a profit. The platform company building the most impressive coding tool is also building vendor lock-in through infrastructure dependencies.
For business leaders, the practical takeaways are clear: diversify your AI vendor exposure (don't bet everything on one lab's continued dominance), audit your shadow AI footprint (your employees are already using these tools, whether you've approved them or not), evaluate vibe-coding tools for internal use cases (the productivity gains are real, even if production readiness lags), and build governance frameworks that assume the pace of change will accelerate, not slow down. The institutions that thrive in this environment will be the ones that move fast on adoption while building robust oversight — not the ones that wait for the dust to settle, because it won't.
Need help navigating AI for your business?
Our team turns these developments into actionable strategy.
Contact SEN-X →