How Will AI Impact High Performance Computing? | Jonathan Frankle, MosaicML

In this episode of HPC Tech Talks: Wondering if AI will impact high performance computing? Tony Rea and Jonathan Frankle discuss MosaicML, the democratization of AI resources and the work Jonathan is doing for the Kempner Institute at Harvard University.

(00:32) What kind of work is taking place at the Kempner Institute and at MosaicML?
(02:15) What is the lottery ticket hypothesis?
(03:37) State of AI development and access to best practices for people getting started
(05:20) What are foundational AI models, and making open-source foundational models accessible to everyone
(07:28) Is AI going to end the world? Is bias now being used as a strategy?
(09:06) Will AI models work more accurately in the future?
(10:57) Making Deep Learning accessible to everyone
(13:21) Do you need expensive infrastructure to develop AI models? Can regular people develop AI models?
(13:50) How much does it cost to train a Stable Diffusion-style model (generating an image from a caption) from scratch today?
(17:34) Democratizing model development by lowering both the cost barrier and the expertise barrier to developing these models
(19:30) Are enterprises less likely to adopt openness and efficiency than academic institutions?
Comments

People talk about the harms of AI models being wrong, and often neglect the harms of AI models being right - they serve as an informational force-multiplier which, in many contexts, is dangerous. A tool that can identify people in photos with perfect accuracy is something I would trust neither law enforcement nor private citizens with, and many other applications fall into similar categories. Language processing that can, with extremely high accuracy, correlate anonymous writings with specific authors has huge free speech implications regardless of bias. AI models that could automate the geolocation of photos from background details would be ruinous for privacy. Etc.
It seems to me that there are options besides the Terminator strawman and the case where the problem is that the model is wrong.

thatotherdavidguy