This ChatGPT Skill will earn you $10B (also, AI reads your mind!) | ML News

#mlnews #chatgpt #llama

ChatGPT goes around the world and is finally available via API. Stunning mind-reading performed using fMRI and Stable Diffusion. LLaMA weights leak and hilarity ensues. GTC23 is around the corner!

ERRATA: It's a 4090, not a 4090 ti 🙃

OUTLINE:
0:00 - Introduction
0:20 - GTC 23 on March 20
1:55 - ChatGPT API is out!
4:50 - OpenAI becomes more business-friendly
7:15 - OpenAI plans for AGI
10:00 - ChatGPT influencers
12:15 - Open-Source Prompting Course
12:35 - Flan UL2 20B
13:30 - LLaMA weights leaked
15:50 - Mind-Reading from fMRI
20:10 - Random News / Helpful Things
25:30 - Interview with Bryan Catanzaro


If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Comments

YannicKilcher

At this point, both Google and Facebook are more "open" than OpenAI. Might as well be called ClosedAI.

someoneelse

I’ve watched this entire video even though it came out 3 minutes ago, and I’d just like to say it was fantastic from start to finish. You’ve really outdone yourself with this one. Keep up the great work!

yourmomsboyfriend

I missed ML News. Thanks for taking the time to get back to it.

SanjayVenkat-cegj

Wow, what a humble statement from Bryan of NVIDIA. And I totally agree with your assessment of the EU in regards to AI. Germany is the epicentre of the efforts to prevent AI from happening in the EU, just for long enough until it's too late to catch up and we are totally dependent on the US + China.

lyricsdepicted

Another Monday, another ML News show, another rant about ClosedAI 😄 Thx for the video, Yannic. It's always a pleasure watching you. Btw, I am already integrating ChatGPT into our math app. Let's see how well it works.

test

Hi Yannick, thanks for the showcase. The video was about a real job offer, which I then explain and contextualize within the video. I also explain what prompt engineering is and where you can learn it completely for free. Honestly, I think you haven't gone much deeper into the video than the thumbnail and title. 😉
And of course, neither on my channel, where I try to give useful information. Glad to chat with you whenever you want.

XavierMitjana

Dear Yannic,

I hope this message finds you well. I wanted to take a moment to express my gratitude for the incredible work you do in the field of AI and for the invaluable resources you provide through your videos.

As a student of a Computer Science Training NGO, research is a vital part of our learning process and your videos have been instrumental in helping us learn about AI technologies that we may not have otherwise been exposed to. Your dedication and commitment to advancing the field of AI are truly admirable and inspiring.

I also wanted to let you know that I have shared your videos and raffle link with my entire cohort. I believe that your insights and expertise can benefit others as much as it has helped me.

Once again, thank you for your contributions to the field of AI and for being an inspiration to students like myself. I wish you continued success in all your endeavors.

Truly, thank you. P.S.

gerardmanyeli

Thank you for the great ML news episode, Yannic. Keep up the great work!

iambinarymind

Your sarcasm is appreciated very much <3

ralfgustav

I didn't recognize you in the interview! Who is this guy without the aviators and with a beard?

ScottVanKirk

Hahaha, the comment on ChatGPT influencers is so spot on, loved it. We need more ML News. Great job, Yannic!

darnokjarud

Nvidia needs to shove as much VRAM as they can in future cards.

TechCarnivore

Wow, great episode! I share your excitement about recent technological advancements. Thanks for cutting through the clutter.

reversefulfillment

Regulations are essential, but bad regulations can be harmful (just usually less than no regulation whatsoever)

tiagotiagot

So glad you have resumed your regular programming!

tednoob

The reason LLMs keep getting better is that, if you think of them as a pipeline, going deeper and wider adds in models. Here is a classification model. This section does regression. This section is an encoder. This section is an attention head. So really, larger models are building cognitive architecture. When we add an agent to keep history, perform actions, and build context, we have even more complex systems. When a system is designed and built by a machine, only the results are interesting; there's no work left for the system's architect. However, if we can find areas for computational reductionism, they become interesting in themselves to computer scientists once again.

dr.mikeybee

OMG love the influencer roast, maddd funny ma g

mansamusa

There are two main components to intelligence. There is probabilistic reasoning which transformers do very well considering their training sets. There is also natural language understanding which is everything to do with context. It is context building, context filtering, context recognition, and creation. It involves search, summarization, filtering, salience, intention, and goal fulfilment. Outside of this there are agency features like perception, and actions. We have all the pieces. We need to build and optimize cognitive architectures, and we need to fix our training sets. Fortunately, we can use existing LLMs and knowledge bases to do much of the work to categorize, assess, and generally cleanup training sets.

dr.mikeybee

I absolutely love every bit of media Bryan Catanzaro appears in; he is just a spring of happiness. Also, how can you look old and young at the same time, with a youthful face and grey hair? I am jealous.

thedude