Leading generative AI startup Anthropic, co-founded by OpenAI veterans, has raised $450 million in a Series C funding round led by Spark Capital.
Anthropic would not disclose how much the round valued its business. But The Information reported in early March that the company was seeking to raise capital at a valuation of more than $4.1 billion. It wouldn’t be surprising if the final figure lands in that ballpark.
Notably, tech giants including Google (Anthropic’s preferred cloud provider), Salesforce (via its Salesforce Ventures wing) and Zoom (via Zoom Ventures) participated in the financing, along with Sound Ventures and other undisclosed VC parties. Their participation seems to indicate a strong belief in the promise of Anthropic’s technology, which uses AI to perform a wide range of conversational and text processing tasks.
“We are thrilled that these leading investors and technology companies are supporting Anthropic’s mission: AI research and products that advance safety,” CEO Dario Amodei said in a statement. “The systems we are building are being designed to provide reliable AI services that can positively impact businesses and consumers now and in the future.”
To wit, Zoom recently announced a partnership with Anthropic to “build customer-facing AI products focused on reliability, productivity and security,” following a similar tie-up with Google. Anthropic claims to have more than a dozen customers across industries including healthcare, human resources and education.
Perhaps not coincidentally, the Series C also comes after Spark Capital hired Fraser Kelton, former head of product at OpenAI, as a venture partner. Spark was an early investor in Anthropic. But the VC firm has also redoubled its efforts to seek out early-stage AI startups, particularly in the generative AI space, which remains red hot.
“All of us at Spark are excited to partner with Dario and the entire Anthropic team on their mission to build trusted and honest AI systems,” said Yasmin Razavi, general partner at Spark Capital, who is joining Anthropic’s board of directors in connection with the Series C, in a press release. “Anthropic has built a world-class technical team dedicated to building safe and efficient AI systems. The overwhelmingly positive response to Anthropic’s products and research indicates the potential for widespread use of AI to unlock a new paradigm of flourishing in our societies.”
With the new $450 million tranche, Anthropic’s war chest has grown to $1.45 billion. That puts it near the top of the list of best-funded startups in AI, eclipsed only by OpenAI, which has raised a total of over $11.3 billion to date (according to Crunchbase). Competitor Inflection AI, a startup making AI-powered personal assistants, has secured $225 million, while Adept, another Anthropic rival, has raised about $415 million.
Amodei, former VP of research at OpenAI, launched Anthropic as a public benefit corporation in 2021 with a number of OpenAI employees including Jack Clark, former head of policy at OpenAI. Amodei split from OpenAI following a disagreement over the company’s direction, namely the startup’s increasingly commercial focus.
Anthropic now competes with OpenAI as well as startups such as Cohere and AI21 Labs, which are all developing and producing their own text-generating – and in some cases image-generating – AI systems. But it has big ambitions.
As TechCrunch previously reported, Anthropic plans to — as it described in a pitch deck for investors — create a “next-gen algorithm for AI self-learning.” Such an algorithm could be used to build virtual assistants that could answer emails, conduct research, and generate art, books, and more — some of which we’ve already gotten a taste of with GPT-4 and a handful of other large language models.
The next-generation algorithm would be the successor to Anthropic’s chatbot Claude, which is still in preview but available via an API. Claude can be instructed to perform a variety of tasks, including searching across documents, summarizing, writing, coding and answering questions about particular topics. In these ways, it’s similar to OpenAI’s ChatGPT. But Anthropic makes the case that Claude, released in March, is “much less likely to generate harmful outputs,” “easier to interact with” and “[far] more steerable” than alternatives.
Why is Claude the best, in Anthropic’s view? In the pitch deck, Anthropic argues that its technique for training AI, which it calls “constitutional AI,” makes it easier to understand a system’s behavior and simpler to adjust the system to “values” defined by its “constitution.” Constitutional AI basically seeks to provide a way to align AI with human intentions, allowing systems to answer questions and perform tasks using a simple set of guiding principles.
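The published description of constitutional AI boils down to a critique-and-revise loop: the model drafts a response, critiques it against each principle in the constitution, then rewrites it. A minimal sketch of that control flow is below — `query_model` is a hypothetical stand-in for a real language model call, stubbed here with canned strings so the loop can run end to end; the principles are illustrative, not Anthropic’s actual constitution.

```python
# Sketch of a constitutional-AI critique-and-revision loop.
# Assumption: `query_model` would call a hosted LLM; here it is a stub.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest about its own limitations.",
]

def query_model(prompt: str) -> str:
    """Stub for a language-model call, returning canned text by prompt type."""
    if prompt.startswith("Critique"):
        return "The draft could state its uncertainty more clearly."
    if prompt.startswith("Revise"):
        return "Revised answer: I believe X, though I may be mistaken."
    return "Draft answer: X is definitely true."

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = query_model(user_prompt)
    for principle in CONSTITUTION:
        critique = query_model(
            f"Critique this response against '{principle}':\n{response}"
        )
        response = query_model(
            f"Revise the response to address this critique:\n{critique}\n{response}"
        )
    return response

print(constitutional_revision("Is X true?"))
```

The appeal of the approach, per the deck, is that steering the system means editing the constitution’s plain-language principles rather than re-labeling training data.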
In its pursuit of generative AI superiority, Anthropic recently expanded Claude’s context window — essentially, the model’s “memory” — from 9,000 tokens to 100,000 tokens (“tokens” representing parts of words), giving it perhaps the largest context window of any publicly available AI model. As a result, Claude can conduct relatively coherent conversations over hours — even days — as opposed to minutes, and can digest and analyze hundreds of pages of documents.
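To put 100,000 tokens in perspective, a quick back-of-envelope conversion works: the words-per-token and words-per-page ratios below are common rules of thumb for English prose, not figures from Anthropic.

```python
# Rough scale of a 100,000-token context window.
# Assumptions: ~0.75 words per token (English-text heuristic)
# and ~500 words per dense, single-spaced page.

TOKENS = 100_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

words = TOKENS * WORDS_PER_TOKEN   # 75,000 words
pages = words / WORDS_PER_PAGE     # 150 pages
print(f"~{words:,.0f} words, roughly {pages:.0f} dense pages")
```

That ballpark — on the order of a short novel per prompt — is what makes the “hundreds of pages of documents” claim plausible.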
That progress doesn’t come cheap.
Anthropic estimates that its next-gen model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude more than even the largest models today. Of course, how this translates into computation time depends on the speed and scale of the system doing the computation. But Anthropic implies (in the deck) that it relies on clusters with “thousands of GPUs” and will need about a billion dollars in spending over the next 18 months.
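A rough calculation shows why 10^25 FLOPs implies months of cluster time. The per-GPU throughput, cluster size and utilization figures below are illustrative assumptions, not numbers from Anthropic’s deck.

```python
# Back-of-envelope: wall-clock time to spend 10^25 FLOPs.
# Assumptions: 4,000 GPUs ("thousands", per the deck), ~300 TFLOPS
# peak per accelerator, and 40% of peak actually achieved in training.

TOTAL_FLOPS = 1e25
GPUS = 4_000
FLOPS_PER_GPU = 3e14   # ~300 TFLOPS peak
UTILIZATION = 0.4      # realistic fraction of peak

effective_flops_per_sec = GPUS * FLOPS_PER_GPU * UTILIZATION
seconds = TOTAL_FLOPS / effective_flops_per_sec
days = seconds / 86_400
print(f"~{days:.0f} days of continuous training")
```

Under these assumptions the run lasts the better part of a year of nonstop cluster time — consistent with the billion-dollar spending figure the deck cites.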
In fact, Anthropic aims to raise up to $5 billion over the next two years.
“With the Series C funding, we expect to grow our product offerings, support businesses that will responsibly deploy Claude in the market, and advance AI safety research,” the company wrote in a press release this morning. “Our team is focused on AI alignment techniques that allow AI systems to better handle adversarial interactions, follow precise instructions, and generally be more transparent about their behavior and limitations.”