Unreasonably Effective AI with Demis Hassabis

It has been a few years since Google DeepMind CEO and co-founder, Demis Hassabis, and Professor Hannah Fry caught up. In that time, the world has caught on to artificial intelligence—in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as ‘unreasonably effective’, and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models.

Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what he hopes for as we move closer towards artificial general intelligence.



Want to share feedback? Have a suggestion for a guest we should have on next? Why not leave a review on YouTube? And stay tuned for future episodes.

Thanks to everyone who made this possible, including but not limited to:

Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Production & Editorial support: Emma Yousif
Music composition: Eleni Shaw

Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Darren Carikas
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Commissioned by Google DeepMind
Comments

I feel much more comfortable when the CEO of an AI company is more computer scientist than salesman. Great interview.

progrob

Demis is the least hyped AI expert who isn't a contrarian. Although he talks fast, everything he says makes sense.

squamish

I love this conversation. Demis is super realistic about the field, and Hannah's questions are smart and hit the mark. It's really worth the listener's attention!

fmind-dev

Drawing on the classic phrase "the unreasonable effectiveness of mathematics".

steremihai

Wow, you got Hannah Fry to interview you.

evertoaster

35:45 The idea of an AI system showing evidence of deception is interesting. How do you tell the difference between a hallucination/mistake and a lie?

ryanf

AlphaFold has immense value even though it's not AGI. What else might have immense value without being AGI? Maybe merging the knowledge and sentiment expressed in millions of simultaneous conversations with people around the world into a graph structure, a shared world model, a collective human and digital intelligence, by the end of this year?

johnkintree

Nice interview. Demis really seems like a very nice person.

princep

Clicked as soon as I saw Demis, subscribed as soon as I saw Hannah!

noahlane

I'm not getting it. Who is the audience for this? Fans of Demis? Fans of Fry? Hannah's questions make sure the level of discussion never rises beyond The Guardian. Is that intended?

bobsalita

HEY DEMIS, HOW ABOUT AI FOR VERIFIED HARDWARE INTEGRITY?

✌️

WillyB-sk

Softball interview. But Demis is always grounded and gives good answers based in reality, not in the god-like egos of many of the Silicon Valley AI execs. Also love Hannah Fry, whatever she does. My favourite applied mathematician-stroke-TV presenter. Excellent content.

byrnemeister

35:50 I'm not sure this will work. An AI needs to understand deception because it needs to understand that other people or AIs can be deceptive. And it's hard to have an AI that understands deception without it being able to be deceptive. Heck, you may even want an AI to be deceptive: for instance, suppose you need an AI agent to protect your confidential information. It needs to be able to lie, even if only by omission.

pianoforte

Interesting and engaging. However, as an academic myself, I see two fellow academics sadly mixing their roles as academics with their roles as commercially interested parties.

The discussion on open source is particularly revealing: Hassabis first says "we have open sourced pretty much everything including the transformers paper", following up with the (true) claim that today's models cannot be considered unsafe. But if that is true, the only remaining motive for not open-sourcing today's models is profit. Google and OpenAI are quite closed source compared to, for example, Meta, which is obvious to everyone in the field. Still, these claims are unfortunately made without reflection from either of them.

From her excellent previous work, I generally trust Hannah Fry, but she has an academic and journalistic duty to challenge these claims, and no criticism is posed. This makes me question the honesty of the exchange, and it's hard not to view the interview as a commercial. This kind of non-criticism is fair game, I guess, among commercial actors. But they are posing as academics, introducing themselves with academic titles such as "professor". Claiming the standing of independent, critical academics while acting as people with commercial interests is unfortunate.

Please, in the future, state your conflicts of interest at the beginning of the discussion, and stay honest with the audience and yourselves when the truth is slightly bent, e.g. about the motives for keeping the models closed. It's OK as long as you are honest about being commercial actors; it's not OK to pose as pure academics while acting commercially.

HenrikSahlinPettersen

Great conversation, thanks to all involved.

juandesalgado

I love this! Brilliant interview. Hannah Fry and Demis are a great pairing.

aiforculture

Hey Demis, why don't you drop everything and work on a reboot of Black & White with Peter Molyneux? Thanks!

oimrqs

Man, Demis is cool - so grounded in reality but he hasn't lost sight of the big (or maybe Planck scale) picture. I have a lot of confidence in both him and Dario at Anthropic.

TuringTestFiction

32:30, you're almost talking about containing an artificial creature at that point, one that should have rights like not to be imprisoned for existing.

dennisestenson

16:26 The man who tells the truth. Love that.

Mehrdadkh