Figure Status Update - AI Trained Coffee Demo

Figure 01 has learned to make coffee ☕️
Comments

What's really impressive is the guy drinking the coffee straight from the machine without burning himself.

loganl

It's about time someone made a fully automatic coffee machine

Manatek

For those who may not have a strong understanding of AI but are interested: if this is indeed end-to-end neural networks, that would mean the entire process was created using models that understood motor movement, balance, and dexterity. Another model for the vision - the man set a coffee machine on the table and the robot identified it. Then another model for the audio - he asked for a cup of coffee and it translated that to an objective and movement. This is just a guess, I do not know their architecture. However, if all of that was trained in 10 hours then it is incredibly impressive.
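A minimal sketch of the decomposition that comment describes - speech to objective, vision to object, policy to motor actions. All of the function names and the rule-based stand-ins here are hypothetical illustrations, not Figure's actual architecture (which, as the comment says, is unknown):

```python
from dataclasses import dataclass

@dataclass
class Objective:
    verb: str
    target: str

def audio_to_objective(utterance: str) -> Objective:
    # Stand-in for a speech/language model: spoken request -> goal.
    if "coffee" in utterance.lower():
        return Objective(verb="make", target="coffee")
    raise ValueError("unrecognized request")

def vision_detect(scene: list[str], target: str) -> str:
    # Stand-in for a vision model: find the object relevant to the goal.
    for obj in scene:
        if target in obj:
            return obj
    raise LookupError(f"no {target}-related object in view")

def policy_step(objective: Objective, detected: str) -> list[str]:
    # Stand-in for a learned motor policy: goal + percept -> action sequence.
    return [f"reach({detected})", "insert(k_cup)", "close(lid)", "press(start)"]

# The three models chained end to end:
goal = audio_to_objective("Hey Figure, make me a cup of coffee")
machine = vision_detect(["mug", "coffee machine", "k_cup"], goal.target)
actions = policy_step(goal, machine)
```

In a real end-to-end system these stages would be learned jointly from demonstration data rather than hand-coded; the sketch only shows how the three modalities the comment lists would hand off to one another.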

michaelhaidar

What smooth movements. It's hard to believe this is a real physical robot and not computer graphics.

USER-ruzer

Welcome to the future! I just found out about this and I'm in love with the technology. I want my own Figure 🤩😍

christie

The human: make me a coffee
Figure, turning to the Keurig: make him a coffee.

Dryer_Safe

Imagine having this robot in your kitchen at night, standing in this position while waiting for the coffee to be done. Creeeepy

flavbmusic

That's impressive! It understood a voice command, recognized the objects, and was able to manipulate them to complete the task. I noticed you placed the cup in the coffee maker for it, so I guess it isn't quite dexterous enough to do that yet. I think having pressure sensors on the fingertips might help it to do things like that. More multimodal inputs seem to help the AI. Keep up the good work. Once we get faster processors I bet the movements will be faster and more fluid. It would be nice to have 2 DOF in the neck so the robot could move its head to look at what it is doing; then people would intuitively know what the bot is focused on. I think it's a little off-putting for people when the bot just stares straight ahead all the time.

JMeyer-qjpv

Oh maaan I want this one!!! And the design... Really want this coffee machine now.

napalmqero

So the only objects the robot needed to recognize were the k-cup sitting isolated on the table, the handle, and the start button. The human had to place and retrieve the cup. What I want from my personal coffee-making robot: it gets the mug out of the cabinet full of breakable mugs, puts it in the coffee-maker, selects the particular roast I want from the cabinet, which might require shifting several other boxes around, and/or opening a new box from the pantry, pulling out a k-cup, replacing the box, loading and running the coffee maker, pulling the k-cup out and throwing it away, pulling a Splenda packet out of the bowl, opening it and adding it to the coffee, locating the creamer in the refrigerator and adding the precise amount to the coffee, stirring the coffee, bringing me the mug in bed without spilling a drop, locating the empty mug later wherever I happen to leave it, bringing it back to the kitchen, washing and drying it, and placing it back in the cabinet for tomorrow.

The robot can't make coffee yet. It can pick-and-place one part in another purpose-built robot that can make coffee and turn that robot on.

dpwhittaker

Pretty cool that the robot learned to make coffee just by observation! I wonder, did it require thousands of examples for training, or was it a one-off learning? The devil is indeed in the details. Also, its dexterity was quite impressive. It seemed to react to the situation in real time, which adds another layer of sophistication.

sausagemash

I am amazed that this robot learned an average household task, something that would apply to most households. We are getting closer ❤❤❤

TheForestGlade

My robot dispenses coffee into my mouth with a romantic kiss.

TastyAsparagus

Getting there. Impressive for sure.

The coffee test is Steve Wozniak's test:

1. Successfully entered an unfamiliar residential environment, located the kitchen, and autonomously navigated the space, including:

   a. Identifying and avoiding obstacles.
   b. Adapting to different lighting conditions and surfaces.

2. Demonstrated the ability to identify and manipulate various kitchen tools, appliances, and ingredients, such as:

   a. Recognizing coffee makers or machines, coffee filters, coffee grinders, and kettles.
   b. Identifying coffee beans or grounds, water sources, and optional items like sugar, milk, or creamer.
   c. Operating appliances and tools, such as turning on the coffee maker, grinding coffee beans, and pouring water.

3. Exhibited the capability to follow a sequence of tasks to prepare a cup of coffee, including:

   a. Retrieving and preparing the necessary tools, appliances, and ingredients.
   b. Following a logical order of steps to make the coffee.
   c. Adjusting to variations in coffee-making equipment or processes based on the available tools and appliances.

4. Successfully completed The Coffee Test, resulting in a properly prepared cup of coffee, within a specific time frame not exceeding 20 minutes, which is comparable to an average human performing the same task.
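The criteria above could be sketched as a simple pass/fail checklist - a toy scaffold for thinking about the test, not any official benchmark code. The criterion strings and the 20-minute limit paraphrase the list above:

```python
# Toy checklist for Wozniak's Coffee Test. Each string summarizes one
# of the four top-level criteria; the time limit is the stated 20 minutes.
CRITERIA = [
    "enter unfamiliar home and navigate to the kitchen",
    "identify and manipulate tools, appliances, and ingredients",
    "follow a logical sequence of steps",
    "produce a properly prepared cup of coffee",
]

TIME_LIMIT_MIN = 20

def passes_coffee_test(completed: set[str], minutes_taken: float) -> bool:
    """True only if every criterion is met within the time limit."""
    return all(c in completed for c in CRITERIA) and minutes_taken <= TIME_LIMIT_MIN

# The demo in the video arguably hits only part of the list:
demo = {
    "identify and manipulate tools, appliances, and ingredients",
    "follow a logical sequence of steps",
}
print(passes_coffee_test(demo, 5.0))  # False: navigation and full prep unmet
```

Speed alone isn't enough - the `all(...)` conjunction means a fast robot that skips any criterion still fails, which matches the commenter's "getting there" verdict.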

jorgegoyco

Guys, look at the history of the founder. He builds companies on hype waves and then exits for hundreds of millions at the peak of hype. The product/service never actually becomes a thing. His previous company Archer made VTOL taxis, but nothing ever came out of it.
It is all about making hype so he can sell the company. Nothing long-term about this.
There is groundbreaking AI coming, but not from these types of companies.

jimj

This would have been impressive a couple of years ago. Its dexterity is cool, but there's still barely any intention/understanding. It would have been more impressive if it picked up the mug after recognizing it was done. All this shows is that it knows how to put round things in round holes and press a one-button machine.

person

0:10 Only now did I notice that when he said the phrase 'hey Figure 1', its screen started to light up.

posttoska

Scary. By 2050 it might even learn to pick up the cup too.

MrVidification

In a hundred years, they will also learn how to make cocoa and hot chocolate. A small step for a robot, but a big step for humanity.

bojankunstelj

Keep up the good work and keep us updated with more videos.

JigilJigil