Early Anthropic hire raises $15M to insure AI agents and help startups deploy safely

A groundbreaking startup, launched by an early Anthropic team member, has successfully secured $15 million in funding to tackle one of the most critical challenges faced by enterprises today: deploying AI systems without the risk of catastrophic failures that could harm their operations.

The Artificial Intelligence Underwriting Company (AIUC), which officially launched on July 23, integrates insurance coverage with stringent safety protocols and independent audits. This approach offers companies the assurance needed to deploy AI agents — autonomous software capable of executing complex tasks like customer service, coding, and data analysis.

Leading the seed funding round was Nat Friedman, former CEO of GitHub, through his firm NFDG. The round also saw contributions from Emergence Capital, Terrain, and several distinguished angel investors, including Ben Mann, co-founder of Anthropic and former CISO at Google Cloud and MongoDB.

“Enterprises are in a precarious position,” Rune Kvist, co-founder and CEO of AIUC, shared in an interview. “You can either stay on the sidelines and risk becoming obsolete as your competitors forge ahead, or you can dive in and potentially face public backlash for mishaps like your chatbot promoting Nazi propaganda, misrepresenting your refund policy, or discriminating against potential hires.”


AI Scaling Hits Its Limits

Power limitations, increasing token costs, and inference delays are reshaping enterprise AI landscapes. Join our exclusive salon to learn how leading teams are:

  • Leveraging energy as a strategic advantage
  • Designing efficient inference for substantial throughput improvements
  • Achieving competitive ROI with sustainable AI systems

Reserve your spot to stay ahead: https://bit.ly/4mwGngO


The company’s strategy addresses a fundamental trust gap that has emerged with the rapid advancement of AI capabilities. Although AI systems can now perform tasks comparable to undergraduate-level human reasoning, many enterprises hesitate to deploy them due to fears of unpredictable failures, liability issues, and reputational damage.

Creating security standards that evolve at the pace of AI

AIUC’s approach focuses on establishing what Kvist describes as “SOC 2 for AI agents” — a comprehensive security and risk framework tailored specifically for AI systems. SOC 2 is a widely-accepted cybersecurity standard that companies often require from vendors before sharing sensitive data.

“SOC 2 is a cybersecurity standard that outlines the best practices you must implement in sufficient detail for a third party to verify compliance,” Kvist explained. “However, it doesn’t address AI-specific concerns. There are numerous new questions, such as: How are you handling my training data? What about hallucinations? How do you manage these tool calls?”

The AIUC-1 standard encompasses six essential categories: safety, security, reliability, accountability, data privacy, and societal risks. The framework mandates AI companies to implement specific safeguards, from monitoring systems to incident response plans, which can be independently verified through comprehensive testing.

“We conduct extensive testing on these agents, using customer support as a relatable example,” said Kvist. “We attempt to provoke the system into making offensive comments, granting undeserved refunds, issuing excessive refunds, making outrageous statements, or leaking another customer’s data. We perform these tests thousands of times to accurately assess the AI agent’s robustness.”

From Benjamin Franklin’s fire insurance to AI risk management

This insurance-centric approach builds on centuries of history where private markets have outpaced regulation to facilitate the safe adoption of transformative technologies. Kvist often cites Benjamin Franklin’s establishment of America’s first fire insurance company in 1752, which led to the implementation of building codes and fire inspections that mitigated the fires plaguing Philadelphia’s rapid expansion.

“Historically, insurance has proven to be the right model for this, because insurers are incentivized to provide accurate assessments,” Kvist explained. “If they overstate the risks, competitors will offer cheaper insurance. If they understate the risks, they’ll bear the financial burden and potentially face insolvency.”

A similar pattern emerged with automobiles in the 20th century, when insurers established the Insurance Institute of Highway Safety and developed crash testing standards that encouraged safety features like airbags and seatbelts — years before government mandates.

Major AI companies already adopting the new insurance model

AIUC has already begun collaborating with several prominent AI companies to validate its approach. The company partners with unicorn startups Ada (customer support) and Cognition (coding) to facilitate enterprise deployments that had been delayed due to trust issues.

“We assisted [Ada] in securing a deal with a top five social media company by conducting independent risk assessments, which helped finalize the deal by providing the assurance needed to present to their customers,” Kvist said.

The startup is also forming partnerships with established insurance providers to ensure financial backing for its policies. This addresses concerns about entrusting a startup with substantial liability coverage. “The insurance policies will be supported by the balance sheets of major insurers,” Kvist explained.

Quarterly updates versus lengthy regulatory cycles

One of AIUC’s key innovations is the development of standards that keep pace with AI’s rapid development. While traditional regulatory frameworks like the EU AI Act take years to develop and implement, AIUC plans to update its standards quarterly.

“The EU AI Act was initiated in 2021, and while it’s nearing release, it’s being delayed again due to its burdensome requirements four years later,” Kvist noted. “This cycle makes it challenging for traditional regulatory processes to keep up with this technology.”

This adaptability has become increasingly crucial as the competitive gap between U.S. and Chinese AI capabilities narrows. “A year and a half ago, everyone said, ‘We’re two years ahead now, which feels like eight months,'” Kvist observed.

How AI insurance actually works: Testing systems to their limits

AIUC’s insurance policies cover a range of AI failures, including data breaches, discriminatory hiring practices, intellectual property infringements, and incorrect automated decisions. The company determines coverage pricing based on rigorous testing that seeks to identify system weaknesses across various failure modes.

The startup collaborates with a consortium of partners, including PwC (one of the “big four” accounting firms), Orrick (a leading AI law firm), and academics from Stanford and MIT to develop and validate its standards.

Former Anthropic executive departs to address AI trust issues

The founding team brings extensive experience in both AI development and institutional risk management. Kvist was the first product and go-to-market hire at Anthropic in early 2022, prior to ChatGPT’s launch, and serves on the board of the Center for AI Safety. Co-founder Brandon Wang is a Thiel Fellow who previously built consumer underwriting businesses, while Rajiv Dattani is a former McKinsey partner who led global insurance work and served as COO of METR, a nonprofit evaluating leading AI models.

“I believe building AI is incredibly exciting and holds great potential for positive impact worldwide. However, the most pressing question driving me is: How will society manage this technology that’s rapidly evolving?” Kvist said about his decision to leave Anthropic.

The race to ensure AI safety before regulation catches up

AIUC’s launch marks a significant shift in the AI industry’s approach to risk management as the technology transitions from experimental to mission-critical business applications. The insurance model provides enterprises with a balanced path between reckless AI adoption and stagnation due to awaiting comprehensive government oversight.

The startup’s approach could prove pivotal as AI agents become more capable and widespread across various industries. By fostering financial incentives for responsible development while enabling faster deployment, companies like AIUC are constructing the infrastructure necessary for a safe and orderly economic transformation driven by AI.

“We hope that this insurance model, this market-driven approach, will encourage both rapid adoption and investment in security,” Kvist said. “History has shown us that markets can outpace legislation.”

The stakes are incredibly high. As AI systems approach human-level reasoning across more domains, the opportunity to establish robust safety infrastructure may be closing swiftly. AIUC’s strategy is that by the time regulators catch up to AI’s rapid pace, the market will have already established protective measures.

After all, Philadelphia’s fires didn’t wait for government building codes — and today’s AI arms race isn’t waiting for Washington, either.

Recommended Content