Controlling Blender with my voice using an LLM

Experimenting with Google's new Gemini 2.0 Flash Experimental to control Blender with my voice!

Tools used:
Comments

this is exactly like my art director when he comes to my cubicle

pushBAK

This is the wildest thing I have seen someone do with AI

onhel

I think this is a brilliant proof of concept of how useful these things can be in the future. Really excited about this - thanks for the demonstration

ElKulte

I was expecting this kind of interaction with Apps years from now.
Amazing proof of concept!

AArmstrongC

14:21 "This is not very impressive"

Bruh this is the most impressive and jaw dropping thing I have seen this year

ThisIsWhereTheFunBegins

I remember around 2005 I was animating with Adobe Flash thinking how time-consuming this is and one day I will be able to speak into a microphone and the app will be able to animate based on what I say. Can you imagine the grin on my face watching this?

RedfieldmediaCoUk

@3:40
🤖: What’s my purpose?
💁🏼‍♂️: You write Python programs for Blender 4.3
🤖: oh my god 😨😰
💁🏼‍♂️: Yeah, welcome to the club pal

erikn

This is awesome! Early stages, but very impressive how well you already got this working.

MrWoundedalien

I love how at the end you say "this isn't impressive". No sir, this is the most impressive thing I've seen for quite some time 🤯

JamieDunbar

That's actually impressive. As a developer I've been successfully experimenting with programming using chat and voice, and one thing I've encountered is that it's very important to formulate the commands correctly and precisely, which can be a challenge for a non-English speaker like myself, or you. I don't do Blender, but I'm very tempted to set everything up like you have in the video and repeat your experiment.
Merry Christmas 😃

afzender-bekend

Yeah, that is already more than I can do in Blender. Hopefully the Blender geniuses add a Gemini API or otherwise make it work the way you suggested. Please post another video when you figure out more. Subbing.

HikingWithCooper

This is so cool and a great ad for Google, I didn't know Google had AI this impressive yet.

RanMC

This is the workflow I've been waiting for all my life. It's obviously very imperfect now, but in a year or two this will be so much more seamlessly integrated and fun to use.

jackgrothaus

I already like playing around in Blender a lot, and this just makes it a thousand times more fun.

anhartoliver.

This is the role AI should have in art. Not creating the art itself, but serving as a powerful tool that both raises the ceiling for those mastering the craft and lowers the entry floor for newcomers.

desertdweller

This is one of the coolest things I've ever seen in my life. I used to dream about stuff like this when I was a kid. I'm 26; I thought I would never see something like this in my lifetime, and it's pretty much here.

xanperna

You can probably output the generated code into your blend file through the Gemini API.
But it would require you to write an extra script running on your computer to take in that data and put it into the blend file.

You can probably ask Gemini about it and figure it out! 😉
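The suggestion above can be sketched roughly like this: a small helper that pulls the Python block out of a raw LLM reply (e.g. text returned by the Gemini API) so a script running inside Blender could `exec()` it against the live scene. This is a hypothetical illustration, not the video's actual code; the function name and the `exec()` wiring are assumptions.

```python
# Hedged sketch: extract the fenced Python code from an LLM reply so a
# Blender-side script can run it. Names here are made up for illustration.
import re

def extract_python_block(model_output: str) -> str:
    """Return the first fenced Python block in an LLM reply,
    or the whole reply (stripped) if no fence is found."""
    match = re.search(r"`{3}(?:python)?\n(.*?)`{3}", model_output, re.DOTALL)
    return match.group(1).strip() if match else model_output.strip()

# Inside Blender you might then run the extracted snippet, e.g.:
#   exec(extract_python_block(reply_text), {"bpy": bpy})

fence = "`" * 3
reply = f"Here you go:\n{fence}python\nimport bpy\nbpy.ops.mesh.primitive_cube_add()\n{fence}"
print(extract_python_block(reply))  # the bpy lines only, fences removed
```

Stripping the fences before `exec()` matters because models often wrap code in markdown even when asked not to.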

SubjektDelta

I have never used blender and have no interested in doing so but seeing what you are able to do here with the ai as a sparring partner is just incredible!! wow. I wonder when I will be able to use this type of workflow in music production.

samuelbreuer

Something to note: you can use the feature above called system instructions. It basically makes the model always follow that prompt, and it keeps it from crashing!
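As a rough illustration of the system-instructions idea mentioned above: the instruction is pinned once and sent with every request, so the model stays in its Blender-scripting role. The payload shape below loosely follows the public Gemini REST API, but treat the field names and wording as assumptions, not the video's exact setup.

```python
# Hedged sketch: bundle every user prompt with the same pinned system
# instruction so the model's role never drifts between requests.

SYSTEM_INSTRUCTION = (
    "You write Python scripts for Blender 4.3. "
    "Reply with a single runnable script and nothing else."
)

def build_request(user_text: str) -> dict:
    """Assemble a request payload that always carries the system
    instruction alongside the user's latest message."""
    return {
        "system_instruction": {"parts": [{"text": SYSTEM_INSTRUCTION}]},
        "contents": [{"role": "user", "parts": [{"text": user_text}]}],
    }

req = build_request("Add a red cube at the origin")
print(req["contents"][0]["parts"][0]["text"])
```

Pinning the role this way, rather than repeating it in each chat message, is what keeps long sessions from drifting off-task.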

davidcampos

Man, people with the knowledge but who have lost hands or fingers to something would be happy to see this!!!

BhoopalanIlayalwar