Why Anthropic's Founder Left Sam Altman’s OpenAI

Amazon recently announced that it will invest up to $4 billion in Anthropic, one of the buzzy startups building generative AI chatbots. With the deal, Anthropic secures a major investment from a second Big Tech company, after receiving $300 million from Google at the end of last year. Google’s investment, which wasn’t reported at the time, gave it an estimated 10% stake in the company.

In its latest fundraising round in May, Anthropic was valued at nearly $5 billion, according to TechCrunch. Neither an updated valuation nor details about the size of Amazon’s minority stake were disclosed for the most recent investment.

Amazon becomes the latest major player to invest in up-and-comers in the generative AI world, after ChatGPT maker OpenAI received $13 billion from Microsoft.

Amazon will invest $1.25 billion up front with the possibility for another $2.75 billion later, for a potential total of $4 billion. Amazon declined to provide further details about the deal’s structure.

In our interview, Anthropic cofounder and CEO Dario Amodei lays out his three-tiered fear model in response to a question from Jeremy Kahn about the existential risks posed by A.I., safety concerns that even Sam Altman shares.

00:00 - Leaving OpenAI To Form Anthropic
00:57 - Creating Claude
03:26 - Setting Constitutional AI
06:18 - Data Privacy And Storage Concerns
07:10 - Government Regulations
08:26 - AI In Robots
09:48 - Existential Risk
11:27 - Open-Source Models
12:29 - Climate Impact
13:24 - AI Risks

Subscribe to Fortune -

Fortune Magazine is a global leader in business journalism, with 55 million monthly page views, a readership of nearly 32 million, and major franchises including the Fortune 500 and the Fortune 100 Best Companies to Work For. The new Fortune video channel dives into personal stories from business owners and entrepreneurs who have found success and share their tips to help you reach your goals.

#ai #amazon #technology
Comments

This interview didn't address why Amazon invested billions. Here's my guess: given the strong emphasis on safety and on training and developing the model, this chatbot is most likely going to replace some customer service representatives. Building on the example of reading and interpreting a balance sheet, it could be used to clarify billing questions from customers. That is, the chatbot could see a bill from an Amazon customer, hear what the problem is, and try to either explain the billing issue or resolve it. My guess is that a significant percentage of Amazon's customer service deals only with billing problems. Also, just a guess: Amazon tried either to build its own chatbot or to license one from OpenAI, and the combined development time and cost came to more than $4 billion.

posthocprior

🎯 Key Takeaways for quick navigation:

00:28 🧠 The founders of Anthropic left OpenAI with a strong belief in two things: the potential of scaling up AI models with more compute and the necessity of alignment or safety.
01:28 🛡️ Anthropic's chatbot Claude is designed with safety and controllability in mind, using a concept called "Constitutional AI" for more transparent and controlled behavior.
03:38 🤖 Constitutional AI is different from meta prompting; it trains the model to follow an explicit set of principles, allowing for self-critique and alignment with those principles.
07:42 ⚖️ When discussing AI regulation with policymakers, the advice is to anticipate where the technology will be in 2 years, not just where it is now, and to focus on measuring the harms of these models.
12:37 🌍 Concerns about the climate impact of large-scale AI models are acknowledged, but the overall energy equation—whether these models ultimately save or consume more energy—is still uncertain.

Made with HARPA AI

ThierryQuerette

Whenever you hear "safety," you should think "censored." And in that sense it is odd that he left, because both companies are clearly prioritizing "safety."

davidkey

imho the constitutional model is very annoying to chat with, as it claims to be all-knowing yet bound by whatever constitution it is confined by, which is inherently impossible.

Jefemcownage

Claude's large context window is excellent.

abagatelle

He’s so much more in touch with his emotions than Altman or Ilya, which is essential for an aligned AGI. This also comes through in Claude 3 Opus, which can write fiction that is incredibly psychologically complex.

Kai-neks

I really like the interview. Great questions.

hotdiary

By YouSum

00:00:23 Pouring more compute improves models indefinitely.
00:00:35 Safety and alignment are crucial in scaling models.
00:01:06 Claude chatbot prioritizes safety and controllability.
00:02:00 Constitutional AI ensures transparent and controllable model behavior.
00:02:12 Claude's large context window allows processing extensive text.
00:03:33 Training AI with principles differs from meta prompting approaches.
00:04:18 Constitutional AI self-critiques to align with set principles.
00:10:11 Concerns about AI risks evolve from bias to existential threats.
00:11:48 Balancing open-source AI benefits with safety concerns is crucial.
00:12:37 Considerations about the environmental impact and energy usage of models.
00:13:35 Optimism tempered with caution about the future of AI technology.

By YouSum

QueenLover-ji

What a shock, he wants regulations on open-source models that can compete with his company's proprietary offerings.

bobdagostino

A lot of people now think (or have always thought) that AI _has_ to be embodied in order to attain AGI.

squamish

When will Claude have access to the internet?

joyjitpal

He says there might be a risk, a 10-20% chance that things go wrong. I wonder what he means by things going wrong. "Mildly" wrong, or a catastrophe? If it is a catastrophe, a 10-20% chance is terribly high.

simokokko

0:45 strong belief 2: you need to set their values

raphael

Wow, this guy is impressive. He seems to be highly intelligent, but also highly mature, with a very "close to reality" view of things, it seems to me. Makes me less scared about the AI future.

johnny

Beautiful questions
Beautiful answers
Such a nice interview
Feels like having a good dessert after lunch😅

akvamsikrishna

if they were confident, why didn't they give a live demo?

yoursubconscious

Why does everyone have a belief that only the US is creating AI? How does safety align with that?

dawncc

I'm glad he did. Claude 3 is much better than GPT-4.

USONOFAV

💳🤔 The risk is people stopping you from opting out if you don't want to use it... there should be an on-and-off switch. It's the same thing with cash vs. plastic or phone swipe 💳🤔

onlyagreeingsometimes

I don’t like the idea of a small group of people deciding what the “model’s values” are.

CallSaul