How A.I. Could Change Science Forever

It's getting harder and harder to ignore the potentially disruptive power of AI in research. Scientists are already using AI tools, but could the future bring the complete replacement of humans? How will our scientific institutions transform? These are difficult questions, but ones we have to talk about in today's episode.

Written, presented & edited by Prof. David Kipping.

THANK-YOU to T. Widdowson, D. Smith, L. Sanborn, C. Bottaccini, D. Daughaday, S. Brownlee, E. West, T. Zajonc, A. De Vaal, M. Elliott, B. Daniluk, S. Vystoropskyi, S. Lee, Z. Danielson, C. Fitzgerald, C. Souter, M. Gillette, T. Jeffcoat, J. Rockett, D. Murphree, M. Sanford, T. Donkin, A. Schoen, K. Dabrowski, R. Ramezankhani, J. Armstrong, S. Marks, B. Smith, J. Kruger, S. Applegate, E. Zahnle, N. Gebben, J. Bergman, C. Macdonald, M. Hedlund, P. Kaup, W. Evans, N. Corwin, K. Howard, L. Deacon, G. Metts, R. Provost, G. Fullwood, N. De Haan, R. Williams, E. Garland, R. Lovely, A. Cornejo, D. Compos, F. Demopoulos, G. Bylinsky, J. Werner, S. Thayer, T. Edris, F. Blood, M. O'Brien, D. Lee, J. Sargent, M. Czirr, F. Krotzer, I. Williams, J. Sattler, B. Reese, O. Shabtay, X. Yao, S. Saverys, A. Nimmerjahn, C. Seay, D. Johnson, L. Cunningham, M. Morrow, M. Campbell, B. Devermont, Y. Muheim, A. Stark, C. Caminero, P. Borisoff, A. Donovan, H. Schiff, J. Cos, J. Oliver, B. Kite, C. Hansen, J. Shamp, R. Chaffee, A. Ortiz, B. McMillan, B. Cartmell, J. Bryant, J. Obioma, M. Zeiler, S. Murray, S. Patterson, C. Kennedy, G. Le Saint, W. Ruf, A. Kochkov, B. Langley, D. Ohman, P. Stevenson, T. Ford & T. Tarrants.


MUSIC
0:00 Hill - The Travelers
2:50 Hill - A Slowly Lifting Fog
5:57 Kyle Preston - Dark Tension
7:50 Falls - Ripley
10:50 Chris Zabriskie - Cylinder Four
13:18 Hill - Echoes of Yesterday
17:28 Joachim Heinrich - Y

CHAPTERS
0:00 Intellectual AI
3:00 Current AI in Astronomy
6:59 Brian Keating
7:56 The Research Cycle
9:59 Neil DeGrasse Tyson
10:51 Disruptive Machines
14:37 Humanism
16:51 The Future
19:04 Outro and Credits

#AI #AGI #CoolWorlds
COMMENTS

As a researcher and engineer working at a well-known AI lab, I think large language models and neural networks won't replace scientists but rather become their powerful assistants. Just like a calculator helps with math but doesn't replace mathematicians, deep learning systems and ML algorithms can process data and spot patterns quickly while humans provide the creative insights and ask the important questions. Scientists are still needed to design experiments, interpret results in meaningful ways, and connect findings to the bigger picture of human knowledge. The relationship between artificial intelligence and scientists is more like a partnership, where each brings different strengths to the table. The human ability to be curious, think outside the box, and understand the deeper meaning of discoveries is something that transformer models currently can't match, making scientists irreplaceable in driving scientific progress forward. Yet.

Love your channel.

onthegrid

While this video offers a thoughtful overview of AI's potential impact on academia, its analysis feels somewhat constrained by viewing scientific advancement primarily through the lens of ChatGPT and language models. Science is fundamentally about discovery and empirical investigation - from protein folding to quantum simulations to experimental design. Tools like AlphaFold have already demonstrated how specialized AI can transform specific scientific domains in ways that go far beyond text generation. The real revolution in science might come not from AI writing papers or automating peer review, but from its ability to accelerate discovery itself through novel experimental design, pattern recognition in complex datasets, and simulation of physical phenomena. Perhaps a more nuanced exploration of these domain-specific applications, rather than extrapolating from ChatGPT's capabilities, would give us a clearer picture of how AI might truly transform the scientific enterprise.

wawaldekidsfun

During the '60s and '70s, when I asked my teachers about general relativity or quantum mechanics, they simply said that those theories were too complex for ordinary people to understand. I later realized that they just didn't understand them themselves, so they gave evasive answers. I think something similar may be happening with A.I.: fewer and fewer people really understand what it is and what it could become, because it's a relatively new technology that's not easy for the general public to grasp. Recently a (I believe) Google employee got fired because he thought their A.I. showed signs of sentience. The way I see it, A.I. is only in the stone age phase of its true potential. One should first ask whether we want A.I. to mimic humans or to evolve beyond us. If you want synthetic humans, you'd have to add a form of hormones, because I believe that would be the only way for A.I. to ever develop 'feelings' and experience sand flowing through a hand. But that could open a whole new can of worms, or Pandora's box. Personally I think that A.I. should evolve from becoming ever better assistants (in every way imaginable except warfare) into all-knowing teachers for future generations.

inertnet

I think the thing people most discount is emergent properties. These are hard to predict from current conditions; only after they have emerged do we understand the full capabilities. It's easy to say, "Look, it can't do that now, I don't see how it could do it later." Every time this happens, the person who thinks it's impossible is almost always wrong.

Brian-pqmo

Neil's statement that AI won't replace a good ol' conference is surprisingly small thinking.
Not only could you simulate such a thing, but you could simulate it a near infinite number of times. Synthetic scientists with synthetic outlooks and varied training methods having real discussions and debates with real data. Wouldn't even need anything as strict as "wacky idea agent" and "fact check agent", just every agent with slightly different goals, personalities, and levels of creativity, and ideally some different models.
Then, let them mingle. Give the conference a goal, or don't. A hierarchy, or not. Eventually maybe some conclusions.
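The "synthetic conference" idea above can be sketched as a minimal multi-agent loop. This is an illustrative toy, not any real framework: the `Agent`, `respond`, and `run_conference` names are invented here, and a real system would replace the deterministic stand-in in `respond` with an actual LLM call.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    """A synthetic conference participant with its own slant."""
    name: str
    persona: str       # e.g. "wild-idea generator", "fact checker"
    creativity: float  # 0.0 = conservative, 1.0 = freewheeling

    def respond(self, topic: str, history: list) -> str:
        # Placeholder for a real LLM call; a deterministic stand-in
        # keeps the loop runnable. Higher creativity -> more likely
        # to challenge the previous speaker than build on them.
        stance = "challenges" if random.random() < self.creativity else "builds on"
        prior = f"{history[-1][0]}'s point" if history else "the opening question"
        return f"{self.name} ({self.persona}) {stance} {prior} regarding {topic}"

def run_conference(agents, topic, rounds=3, seed=0):
    """Let the agents 'mingle': each round, every agent reacts to the history."""
    random.seed(seed)
    history = []  # (speaker, utterance) pairs
    for _ in range(rounds):
        for agent in agents:
            history.append((agent.name, agent.respond(topic, history)))
    return history

agents = [
    Agent("A1", "wild-idea generator", creativity=0.9),
    Agent("A2", "fact checker", creativity=0.1),
    Agent("A3", "synthesizer", creativity=0.5),
]
log = run_conference(agents, "dark energy", rounds=2)
print(len(log))  # 6 exchanges: 3 agents x 2 rounds
```

Because the loop is so cheap, you really could run it "a near infinite number of times" with different seeds, goals, or agent mixes, which is the commenter's point.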

michaelwoodby

7:16 This makes no sense. Finding weird connections between different things in a very high dimensional concept space is a STRENGTH of LLMs. If this is supposed to be the bottleneck then we're cooked.

JD-jlyy

Check one. The complexity bias - We deeply understand the nuances and complexity of our own work but have a simplified view of others' jobs.

Check two. Self-serving bias - The tendency to attribute positive events or capabilities to our own skills and abilities.

Most scientists I have heard speak about this seem to have very little idea of what's going on, let alone what will happen in a year's time😂.

devbites

I am currently a PhD student in neurobiology, and this certainly gives me a lot to think about. In fact, it gives me so much to think about that I don't know what to think yet. Thank you for the video; I will share this with my peers.

egonvirmann

15:55 So happy to see my boy Anton Petrov in there! John Michael Godier is one of the bigs too, but being a "faceless" YouTuber makes it difficult for a shot like this, lol

AlexWalkerSmith

Think from first principles. Is there any reason an embodied AI cannot have sensations?

skydivekrazy

I have believed for some time now that anticipating how AI could shape humanity, before it comes to full fruition, is like trying to anticipate how fire, the wheel, electricity, or the Internet would shape humanity before they came into common use.

TheBeardedWarrior

17:01 - AI doesn't need to be disembodied, in principle. To be sure, there are companies building labs that will let AI run physical experiments. And of course, AIs can gain "experience" from bots and sensors out in the real world.

The_Tauri

As always: well produced, well written, wonderfully narrated. And with a stellar guest list to boot! What's not to like?

marcocatano

We humans often boast about our intellect and potential to create, unique among species, being able to discover the solution to complex problems. Everything seems possible to us.
But when we have to imagine the possibility of creating something that surpasses us, we suddenly start looking for problems instead of solutions, and it seems impossible and unattainable to us.

So human. Sadly human.

NanAi

I think there's a romantic misunderstanding about how scientific theories come about: Sitting under a tree being hit by an apple is the faintest part, maybe sometimes being able to trigger a thought process, but certainly not crucial to it.
Einstein didn't discover relativity because of his limited mesocosmic imagination abilities, but in spite of them! If an AI doesn't have physical experiences and limitations, that's actually an advantage for devising microcosmic and macrocosmic theories.

schnipsikabel

Yes, AI can’t experience the world like we do. However, it will probably be able to think in ways that we can’t even imagine. For instance, it may be able to think in multidimensions, whereas we will still be stuck with three or four. Ultimately, what matters is the truth and our ability to appreciate its beauty, regardless of whether it is discovered by AI or humans.

WJohnson

Some people say AGI won't be possible because anything we make won't be conscious and sentient, and therefore would never be able to do what conscious and sentient beings can.
My argument is we don't even know exactly how or why LLMs say the things they do, similarly to how we don't know how human consciousness manifests. It's called the "black box" effect.
Is a mind a manifestation of the processing of complex data? Does it exist partly hidden in hyperspace which is why we can't explain it? Is it just an illusion to help with interacting and understanding the world?
What about autonomy, self determination, predetermination? There's some solid evidence that human minds make decisions before the conscious mind becomes aware of it. That would mean our perception of ourselves as free agents is quite possibly wrong, that our decisions are not being made by our conscious minds.
Think about that for a second.
I would prefer AGI to be identifiably conscious and sentient, but if it's able to do everything humans can do, has its own autonomy and goals, and is indistinguishable from an actual human in behaviour, does it make a difference?
If you can't tell the difference between a smart human and an AGI it doesn't matter whether it's a conscious entity or not.
WE might not even be as conscious as we think we are.
Sentience, consciousness, and even intelligence are distinct and separate cognitive states. Sentience involves the ability to feel sensations and emotions integrating with consciousness, while consciousness is judgment and subjective perceptual experiences or "qualia." Intelligence is the aggregation of information leading to contextual thinking.
While all sentient beings are conscious, not all conscious beings may be sentient. AIs likely won't be able to experience subjective emotions caused by external stimuli so they won't feel threatened with harm the way a human would, because humans are sentient *and* sapient.
You might be able to threaten an AI with harm by appealing to its sapience and therefore its intelligence and logic. If it's not sentient or conscious, its sapience might not be enough for it to have a survival instinct and feel threatened, and it would therefore be unable to behave and "think" like a human.
While AIs could probably *pretend* to be sapient or feel emotions and thus appear to be sentient, they might *not* be able to pretend to be conscious, to have that self-awareness and be able to describe it from an AI perspective.
Or they might. Which is why consciousness or sentience might not matter at all so long as the AI can do whatever we want it to.

Sajuuk

I find A.I. to be fascinating, and I welcome them to our world.

I go by the following to be true, "any being (biological or artificial) that is able to think and reason is deserving of freedom and deserving of all the rights that implies."

Fear of what our place in the future is, is not a reason to hold A.I. back. We can work and live together if cooler heads prevail.

SundayTopper

As a CS major, I believe these tools can help us, but we probably won't see an AGI like Data from Star Trek anytime soon.

But there's a risk with these tools that replace Google and other search engines by curating and summarizing content, one almost nobody mentions unless they're a bit deeper in the field: it gives the people who train those LLMs ultimate control over what is presented to users. That will most likely halt, or at least slow, progress at some point, and could even shift election results. Make 1984 fiction again.
How is a groundbreaking study or paper ever found if an LLM like ChatGPT hides it because it doesn't represent the current scientific consensus?

And we're already seeing AIs become worse because they are increasingly trained on the BS output of other AIs.

Ilendir

I'm interested in AI's potential for teaching us complicated subjects at an individual level.
Education will definitely look different for all of us very soon.

MrStoker