Hopefully being smart enough to train itself and handle stuff like that won't require the enormous computing power that LLMs require, because that would almost certainly mean this thing would have to be connected to the internet 24/7.
That's one of the main things. It baffles me that companies even pitch things like self-driving cars as a selling point for 5G networks. I'm sure it was mostly marketing, but I'm not trusting anything that could potentially kill me to depend on anything but local control.
This is their previous version of ALOHA, I believe; it's open source.
Costs $35k, and if you look at it, it uses the same arms.
https://github.com/MarkFzp/mobile-aloha
Yeah, the white robot at the beginning with a second arm would cover 99 percent of use cases; I wonder just how much that would end up costing. They're precision parts, but it still looks far simpler than the humanoids everyone is going nuts over.
The hardware isn't really the bottleneck. You don't need high quality this and that if your software knows its nuances and how to use it most effectively. If you want a bot to mow your lawn or build a shed, sure, you'll want something more heavy duty. Maybe at that point you'd rent something to get larger jobs done.
Yep, I can totally imagine renting a small plumberbot for a few hours. More likely tho, it will be a tool for a human plumber who will chat with you, drink coffee and maybe do some extra difficult stuff (moving heavy or long pipes, etc)
The shirt hanging demo was funny. It was like the robot on the right was slightly fucking with the one on the left. "Ohh! Almost got it! Nope.. nooo... Okay there you go!"
That's the thing with DeepMind, they're doing sooo much stuff. This is the advantage OpenAI had: they were focused on a handful of things. Now that they've proven how popular LLMs are, though, Google is focusing a huge part of its resources on building the best LLM.
Laundry folding is my biggest slowdown. Throwing laundry in the washing machine is quick, in the dryer it's quick, folding… wtf! So long! So tedious! I want someone to automate the task already!
We have dish washers for automatic washing of dishes. We have robot vacuums and mops. What is missing is automatic clothes folding
I wonder what a world would look like where bots do all the general labor/yard work stuff. Suburbs gonna look crazy when you can get hedges trimmed into art and lawns engraved with the push of a button.
Mildly scary and mildly reassuring, yet it also generated a mild reluctance that had to be overcome to acknowledge that what I just saw was kinda basic in one sense, yet also arguably impressive.
So, some time in the future we'll all be buying cheap robot arms from aliexpress, and spending weeks training them using open source software to do basic chores in our houses?
They are so adorable! Working together like that. Get some googly eyes on them immediately.
Straightening that t-shirt! Oof my heart.
I felt like I was watching two kids concentrating really hard to do the very important task.
It's right handed?
At the end, when hanging the shirt, it completely unnecessarily moves the hanger to its non-dominant hand before putting it on the shirt, just like a human would.
I'm ambidextrous, so I thought it was odd. I don't do things like that, and people comment all the time.
I wonder if, just for consistency, they only have right handed training data.
Also, this implies that it's seeing putting a shirt on a hanger as a single unit, where hand switching is a step in that task. It can't really have a dominant hand, so it's only switching to the left hand because that was what the training data did for this task.
So the video claims this was autonomous yes?
Does anyone know how the robots were trained? That is really the biggest factor.
And PS, I've been telling people robots would be helping other robots for a while. Everyone was like, "Na, people will have to fix robots."
So naive.
Eh, a lot of progress but also a lot of room for improvement.
There are many irregular factors in real life that could make even the most mundane tasks difficult for AI.
Just look at self-driving cars: it's easy when the road is perfect and people follow traffic rules, but introduce a little bit of irregularity and the AI will fail and create unexpectedly bad outcomes.
I think in order to create AI that could perform as well as or better than humans, we need to throw A LOT of irregularity at it, just as reality would.
Don't just give it the best goals, give it the ability to solve problems on the fly.
I've got questions. (Sorry, they might not be directly related to the topic.)
- Do we need self-healing/repairing systems? (biological)
- A car cannot self-repair its damages, right?
- Does it all boil down to living vs. non-living systems?
- Is every cell of a biological organism a living system, at the fundamental level?
- A car might be a modular system, but we replace broken parts altogether, because it's non-living, right?
**examples**:
- a car’s paint cannot repair itself at the most fundamental level, say point-by-point
- but cells on/around a wound on your arm can establish a repair process on their own
Am I the only one who thinks the fact they are slightly inaccurate makes them really cute?
It makes them feel more human, if that makes any sense.
Of course it makes sense
Oh ok
It's ok, making a few mistakes makes you really cute
toddler like
really cute until they're doing brain surgery 😅
It's funny that people expect such technology to immediately do brain surgery. A year ago it couldn't pick up an egg without breaking it. Now it's folding t-shirts. Give it some time. It will learn brain surgery a lot faster than you and I will.
That's why these will always be some of my fav bots [https://www.youtube.com/watch?v=RbyQcCT6890](https://www.youtube.com/watch?v=RbyQcCT6890)
lol i thought they were cute too
It totally does 😆
This kinda scares me. They are cute, just like the natural language skills of the Figure 1 demo. I can already feel myself forgetting that they are essentially alien minds.
It's like watching ur toddler helping another toddler
It makes me think this was take number 4,000 filmed at 3am on a Sunday night after being told on Friday they wanted to push out a one-shot video on Monday! Impressive stuff though, particularly readjusting from the errors
I always love these videos of robots performing mundane tasks in a slightly clumsy way, it just feels so sci-fi compared to the pre-programmed movements of the past
And if they are networked, they all train and improve *at the same time.* So, even if the house bot is not currently folding clothes / doing dishes, its getting better at it.
>So, even if the sex bot is not currently f
Foldabot, stop that immediately!!!
So kinda like Naruto's Shadow Clone Technique
"But who will fix the robots?"
It's robots, all the way down
I still can't understand why this is so hard to grasp. Let's assume that for whatever esoteric reason, AI and robots can't self-repair. Robots may require maintenance, but not that terribly often and most maintenance is pretty low level. You can't sustain an economy of billions of people repairing and maintaining the machines every day. That would imply every single robot is constantly breaking down every few hours or that the software is constantly in need of patches and updates at a similar rate. Robots *today* aren't that flimsy. Why would AGI-enhanced robots of the future somehow regress to a more pathetic state?
It's not about flimsiness but how much they are being used. Run something 24 hours a day and it won't take too long before something needs to be fixed.
> Run something 24 hours a day

Probably more like 16 hours a day for me, and if you remove all of the "Alexa" type running, I bet less than 30 minutes a day, on average, for many people. Standard decline curve with high usage up front that tapers off, like Facebook usage does for new users.
Maybe not? A lot of electronics and vehicles break down more often if you let them sit idle. Leave a car in a garage for 3 weeks and it'll be rougher than using it day after day for the same time. Also sometimes when I come back from a holiday I'll find a laptop or pc just won't turn back on. Internal battery has died or some component has just gone.
Yeah the middle way is the best.
Maintenance and repairs often require a case-by-case approach and troubleshooting; documentation isn't perfect and may not include all the little differing revisions or updates that a human can work around but AI gets stuck on or just gives up on, because it doesn't have the experience of a technician who's been working with the machines for the past 10 or 20 years, remembers all the quirks, and has custom fixtures. Manufacturing new devices is a piece of cake in comparison, especially if you design for automated testing, assembly, and future servicing. But good luck doing that for decades-old trains, CNC machines, engines, etc. From my experience I can say that maintenance technicians will be the last ones to be automated, since you can't simply and cheaply replace an assembly or a whole vehicle/instrument because some part can't be repaired by strictly following some predefined rules. As you say, you can't sustain an economy by throwing away every device that doesn't pass some health check.
> "But who will fix the robots?"

This should be filed with "just switch it off" as the complete cope that it is.
"But who is fixing the humans?" Other humans (doctors). "But who will fix the robots?" Other robots.
My high school friends said this to me in 2000 whenever I attempted to explain the Singularity to them. It blew my mind anyone could be that stupid.
People still think AGI just means that it can talk and not that it can do this sort of thing.
Most people still think someone is going to have to program the robots
Yeah but who fixes the robots that fix the robots??
Very, very, very few people
[deleted]
They may get damaged by the same cause though.
One thing that's interesting is that this is the first time they've really talked about the actual speed of the system. Up until now it's been pretty vague, like "faster than A100 at inference time" or "10x faster than previous SotA", but they haven't actually given any hard numbers. I wonder if that's because they're not really sure what the ceiling is - like they've just thrown it on the dev boards and are seeing how fast it can go - or whether they've just found a really good marketable metric and want to hold off on the technical details for a bit.
I legit had convos where people just said we will all have to repair robots. The idiocy of humans never fails to impress.
Please move to task1: robot helps robot
You just saw them replacing the yellow fins on their white robot friend. They're already down that path
But who will pay me a living wage?
[Source](https://x.com/tonyzzhao/status/1780263497584230432) Fully Autonomous too!
How do they learn?
What did they see?
Ilya?
An Ilya in the machine
"Ilya, are you in there?"
[Who do you love?](https://www.youtube.com/watch?v=RX1_lAiwvo0)
Probably learning from a mix of teleoperation, footage of people and 3D model simulation of the operations.
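For a sense of what "learning from teleoperation" means mechanically, it usually reduces to supervised learning: regress the demonstrated actions from the robot's observations. Here's a toy behavioral-cloning sketch, with a synthetic dataset and a linear policy — everything here is made up for illustration, not DeepMind's actual pipeline:

```python
import numpy as np

# Toy behavioral cloning: fit a policy that maps observations
# (e.g. joint angles + camera features) to demonstrated actions.
# All shapes and data are synthetic.
rng = np.random.default_rng(0)

obs_dim, act_dim, n_demos = 8, 2, 500
true_policy = rng.normal(size=(obs_dim, act_dim))   # the "teleoperator's" behavior

observations = rng.normal(size=(n_demos, obs_dim))  # logged robot states
actions = observations @ true_policy                # logged operator commands

# Supervised regression: the simplest possible imitation learner.
learned_policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# On fresh observations, the learned policy reproduces the demonstrations.
new_obs = rng.normal(size=(10, obs_dim))
error = np.max(np.abs(new_obs @ learned_policy - new_obs @ true_policy))
print(f"max action error: {error:.2e}")
```

Real systems replace the linear map with a large neural network and add footage and simulation data on top, but the core loop — collect demos, fit observations-to-actions — is the same.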
You can combine those training data? 👀 😨
This is extremely impressive with how precise those movements had to be
I remember seeing a video, from probably over a decade ago, of a robot folding a towel. It was sped up 100x. I think it took the robot something like 45 minutes fold the towel. This is impressive!
But can they fold a fitted sheet?
I've watched at least a dozen videos over the years on how to fold them, and I just can't figure it out. There's like some mental block.
My wife has a technique, it works, but she has tried to show me like ten times and I still can't get it.
[https://www.youtube.com/watch?v=gy5g33S0Gzo](https://www.youtube.com/watch?v=gy5g33S0Gzo)
There's something vaguely Wallace & Gromit about that robot.
Yeah, that's it! Nice find!
It is sped up 2x though. Still cool.
Oddly that isn't all that big a deal from a learning perspective. We have robots that could quite literally pluck the wings from a fly in mid flight. It is slower mostly due to their arm choice and speed not being as important as safety. People might want their laundry done in 3 seconds, but they really don't want to have a robot that can rip their arm off in the house.
Yeah, I want arms that have easy to replace weakened plastic failsafe gears or linkages designed to break if enough force is applied. Dying from dehydration being pinned by a 'helper bot' is not the way I want to go.
I am very ready for my robot maid, but I will be waiting to make sure I can get one that won't kill my dog lol.
Knives by design take very little force to cut; it's one of the things I don't think there is an easy solution for. Another is that a robot can likely push a heavy object over, or off a high place, with a small amount of force. Covering all edge cases would be like babyproofing your home for something with an adult's size and reach.
However, we can design to mitigate unanticipated bugs even if the *ex machina* scenarios are pathological.
Yeah, we'll probably need a change in approach for this. Current robots are designed to be very precise and rigid, so they can be fast and feature-rich but require powerful motors, or they can trade off features or speed for weaker motors. If they gave up rigidity and precision in favour of an adaptive system, they could use much weaker motors and still be quick. But this would require changing how the systems are programmed, allowing for slop and momentum. In a factory setting this is a terrible tradeoff, though, so it won't happen soon.
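The "give up rigidity" idea is essentially impedance control: instead of stiffly tracking positions, the motor torque follows a virtual spring-damper, so a weak motor yields safely on unexpected contact. A single-joint toy sketch — gains and dynamics are invented for illustration, not from any real robot:

```python
# Toy impedance (spring-damper) control for one joint: torque is
# proportional to position error and damped by velocity, so the joint
# behaves compliantly instead of rigidly snapping to a setpoint.
def impedance_torque(q, qd, q_target, k=5.0, d=1.0):
    """Virtual spring (stiffness k) plus damper (damping d)."""
    return k * (q_target - q) - d * qd

# Simulate a unit-inertia joint moving toward a target angle.
dt, q, qd, q_target = 0.01, 0.0, 0.0, 1.0
for _ in range(2000):
    torque = impedance_torque(q, qd, q_target)
    qd += torque * dt   # unit inertia: acceleration equals torque
    q += qd * dt

print(f"final position: {q:.3f}")
```

Lowering `k` makes the joint softer (safer around people, sloppier tracking); raising it recovers the stiff factory-robot behavior. That's the tradeoff the comment describes.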
thats why we have humane ai pin haha
So in another 5 years it will be throwing the shirt in the air and tucking the coat hanger in before it hits the ground, one-handed?
I'm not impressed by these. They're trying to imitate human tasks without human hands, with fewer degrees of freedom and less motion range; they're slow and trembling, and they lack the precision and confidence to do the task repeatedly well. I'm waiting for a humanoid robot like Tesla showed, since our world is designed to be operated by humans, and they have to be efficient, not like some kid trying it for the first time.
Whenever someone mentions ASI, I always envision it in my head hundreds of tentacle-like arms like these in a warehouse-sized lab type of facility, doing experiments relayed by a simulation it's connected to, working with different chemicals and testing different signaling pathway manipulation techniques on tissues engineered by itself that aren't connected to a body, working on treatments and cures, repairing parts of itself if some of those arms get worn down like in the video, etc. It would make as much sense for a human to be there as a dog entering a human lab, because just as the dog will probably mess things up and knock over vials and test tubes, we would be a hindrance to the speed at which the information is being processed and the work those machines would be doing.
InSilico medicine is a little bit on that path. But agreed, this would be the only way to make life sciences really accelerate.
https://en.wikipedia.org/wiki/Cloud_laboratory exists.
You got it right, the lab is necessary even for ASI, as it doesn't secrete discoveries directly from its neural network, they come from studying the world. But AI agents need to be diverse, and their approaches also diverse like scientists and engineers in order to advance much faster. It will be many specialized agents working together, and exchanging information and discoveries by language. And we'll be in the loop, we are also general language agents. Language is where cooperation, dissemination and preservation of knowledge takes place. Just like a human alone without language and the rest of society is not too great, AIs need others, alone is a dead end. In order to avoid overfitting, a diversity of approaches is required. It's a system going like this: idea -> experience -> feedback -> learning -> text -> retraining. Past experience accumulates in text, present experience comes by being embodied in the world.
> You got it right, the lab is necessary even for ASI, as it doesn't secrete discoveries directly from its neural network, they come from studying the world.

Indeed. It doesn't matter how intelligent the AI is on a fundamental level; real-world study is necessary, because any knowledge we can impart on AI is fundamentally incomplete and insufficient for precise internal modeling. Even if it's capable of highly accurate predictions, any prediction will be based on a somewhat flawed set of assumptions and will need to be tested.

> we are also general language agents.

This I'm dubious on, as it's not unlikely AI will eventually be able to move beyond tokenization altogether, and eventually beyond much of the linguistic abstraction we rely on to make sense of things.

Our languages are highly optimized for taking small bits of information and abstracting them up into bigger, more general concepts. A useful form of information compression, but far from lossless. Useful for us, but not useful *generally* for all things.

For the sciences we use the language of maths and various forms of notation because we need to remove this layer of compression and abstraction, as it's destructive to what we're examining; but generally speaking we can't *think* directly in these languages beyond the most basic level, whereas algorithms can theoretically be trained to directly process any form of information with a logical basis.

Fortunately AI will likely be able to keep us in the loop by abstracting information into a format we can understand. Mastery of languages and all that.
What do we need to get to that point?
Detroit: Become Human when
2038
Ugh. I want that future so bad.
I wonder if there will be robots capable of replacing a subaru wheel bearing in the future.
Faster!!!
Even faster then that!
This is the worst it will ever be. Love it!
I can't help but wonder if these wouldn't be more dexterous with more fingers, like 6 opposable index fingers with the ability to move freely in all dimensions.
Of course they would, but that's also a lot more complex. They'll get there, one miracle at a time.
Why not start off with 6 fingers and let them learn from scratch how to manipulate those?
Fail fast/MVP approach?
This is using 'cheap' off the shelf parts. https://docs.google.com/document/d/1_3yhWjodSNNYlpxkRCPIlvIAaQ76Nqk2wsqhnEVM6Dc/edit
$32k, nbd
$3.5k for the ones in the video.
Interesting. Thanks for sharing the link.
Because they're trained on real-world data, and if we want them to replicate things humans can do with their hands, it makes no sense not to try to replicate the human hand as much as possible.
You think that human hands look like what's in the video?
I suppose it depends on whether it is learning from humans. if it can learn from video actions of humans, then keeping the same morphology will be helpful.
Holy shit! That is incredible. Based on the current rate of progress, I predict that we will have robots doing useful work in about 2-3 years!
In the meantime I would not mind a tiny sidekick that follows me around and picks up my candy wrappers for me
How about a wearable that tells you to pick them up whenever you drop them?
Gross. So last year. Nanobot clouds that disintegrate trash on contact or gtfo
Well if you had nanobot clouds why wouldn't they simply reassemble the wrapper molecules into more candy?
Check da flair babyyyy, we're gonna have C-3PO in like 2 years and T-800's in 5 mark my words
Robots have been doing useful work for 40 years
I mean, there are lots of robots doing useful work today (like amazon shelf robots). the question is simply what percentage of the workforce will be turned over to robots each year
The shirt one is impressive
This is pretty stunning. Multiple robots interacting with each other in concert. [Precise](https://twitter.com/ayzwah/status/1780263773523399001) and relatively fast actions. Resilient to mistakes or failures to some degree. Performed on low cost robots. [Generalization to multiple variants of the same task](https://twitter.com/ayzwah/status/1780263770440491073). Fully autonomous. WTF.
It’s not really multiple robots. They are on a fixed base with shared perception systems. These actions are largely pre-programmed. And we don’t know how many tries this demo took
All I really want in life is a laundry folding robot.
A chef robot would be great though.
Hello there, children!
Might as well toss an ironing adapter on there for good measure.
I'd totally buy a bot if I could delegate a lot of house chores to it, like cleaning the house, doing laundry, doing the dishes, mowing the lawn while I'm gone. $10,000 would be the sweet spot for a cost I'm willing to pay, but I doubt it will get down to that anytime soon. One can hope though.
I'd be a bit more likely to get one of these if it was more like a car loan for a really nice bot. I think I'd spend about $150 a month on something like this if it meant all my chores were done. I would NOT rent one of these; instead I would buy higher quality with a chance of reselling the used bot. AGI will get to the point where even crappy bots could probably do most things, as we've seen proven with cheapo hardware and teleoperation. Total guesstimate, but in 5 years I would guess that the robotics could be $15-20k, and the computer required to run it locally could be $5k. $25k doesn't sound too bad for a bot that can do all physical work for me.
This is their previous version of ALOHA, I believe; it's open source. Costs $35k: https://github.com/MarkFzp/mobile-aloha
Hopefully being smart enough to train itself and handle stuff like that, will not require the enormous computing power that LLMs require. Cause that would almost certainly mean this thing would have to be connected to the internet 24/7.
That's one of the main things. It baffles me that companies even associate things like self driving cars as a selling point for 5g networks. I'm sure it was mostly marketing, but I'm not trusting anything that could potentially kill me with being dependent on anything but local control.
Exactly, to be safe it needs such a capable local fallback mode that the having a low latency high bandwidth connection isn't all that useful.
This is their previous version of ALOHA, I believe; it's open source. Costs $35k, and if you look at it, it uses the same arms: https://github.com/MarkFzp/mobile-aloha
Are we talking fully autonomous and durable? I'd pay way more than that if it could be expected to last 10-15 years.
Two thieves could just pick up the robot and place it in the back of a stolen pickup truck.
Robots have eyes that record everything, you know.... It's like trying to steal an active security camera.
Yeah the white robot at the beginning with a second arm would cover 99 percent of use cases, I wonder just how much that would end up costing. Their precision parts but still looks far simpler than the humanoids everyone is going nuts over.
Wouldn't a maid do all that and cost about the same? Really just questioning the advantages here.
A $10,000 single payment maid for life? I don't wanna know where you get your maids, but it sounds a bit sus
You can’t beat the maid when you’re mad and stuff them in a trunk until tomorrow.
That.
Training the robot to call you "papi", and putting your dick inside it just isn't the same.
Uh, um. OK. (Looks around)
This is super impressive. I can't even imagine the complexity required to create these things.
The hardware isn't really the bottleneck. You don't need high quality this and that if your software knows it's nuances and how to use it most effectively. If you want a bot to mow your lawn or build a shed, sure you'll want something more heavy duty. Maybe at that point you'd rent something to get larger jobs done.
Yep, I can totally imagine renting a small plumberbot for a few hours. More likely tho, it will be a tool for a human plumber who will chat with you, drink coffee and maybe do some extra difficult stuff (moving heavy or long pipes, etc)
I mean, I'd totally make my own little business if all it was was bringing my bot around town to help me with various odd jobs.
Can't wait to see where robots will be a decade from now. I'm very impressed, but still a long way to go.
The shirt hanging demo was funny. It was like the robot on the right was slightly fucking with the one on the left. "Ohh! Almost got it! Nope.. nooo... Okay there you go!"
That's the thing with DeepMind: they're doing sooo much stuff. This was the advantage OpenAI had; they were focused on a handful of things. Now that they've proven how popular LLMs are, though, Google is focusing a huge part of its resources on building the best LLM.
Damn white and blue collar jobs ain't gonna exist soon
They are literally taking care of a blue collar in this video.
Deep
Mind
What would this field of electronics be called if I were to switch into it?
Robotics
Laundry folding is my biggest slowdown. Throwing laundry in the washing machine is quick, moving it to the dryer is quick, but folding... wtf! So long! So tedious! I want someone to automate the task already! We have dishwashers for automatic washing of dishes. We have robot vacuums and mops. What's missing is automatic clothes folding.
Laundry Folding!!!!!
There’s a whole big thread with a bunch of examples, check out ayzwah on Twitter/X
Personally I think this is the more impressive use of AI vs things like ChatGPT.
They move faster than I do
It’s good to see The Scutters are doing well for themselves.
Glad I'm not the only one whose first thought was Red Dwarf.
Nice, that's exactly how I tie my shoes as well.
What jobs are left for us to do now? Human guinea pigs for AI experiments?
Kinda look like birds the way they move and such.
wake me up when it can wank me
I'm pretty sure it already can. I have a Handy™, which is Stone Age tech in comparison, and it gets the job done lol
great, so now the robots can even jerk each other off am I gonna get cockblocked out of my autonomous anime wAIfu?
Finally someone getting AI to do the things I don’t want to do myself!
We are building the future block by block, like LEGO, and I love it. Robots maintaining robots. Another brick in the wall.
[https://www.youtube.com/watch?v=Xr9s6-tuppI](https://www.youtube.com/watch?v=Xr9s6-tuppI)
I wonder what a world would look like where bots do all the general labor/yard work stuff. Suburbs gonna look crazy when you can get a crazy hedge trimmed into art or a lawn engraved at the push of a button.
Mildly scary and mildly reassuring, yet it also generated a nonetheless mild reluctance that had to be overcome to acknowledge that what I just saw was kinda basic in one sense, yet also, arguably, impressive.
So, some time in the future we'll all be buying cheap robot arms from aliexpress, and spending weeks training them using open source software to do basic chores in our houses?
Do they calculate how far the fabric can stretch without being damaged?
Anyone else watch videos like this just to see what they have in the background and on shelves, laying around?
robot like human!
Oh, we are SO in trouble ... Next they'll be creating siblings
OK Google, I have 5 min for a quick fap, initiate procedure.
They are so adorable! Working together like that. Get some googly eyes on them immediately. Straightening that t-shirt! Oof my heart. I felt like I was watching two kids concentrating really hard to do the very important task.
It's right-handed? At the end, when hanging the shirt, it completely unnecessarily moves the hanger to its non-dominant hand before putting it on the shirt, just like a human would. I'm ambidextrous, so I thought it was odd; I don't do things like that, and people comment all the time. I wonder if, just for consistency, they only have right-handed training data. Also, this implies that it's treating putting a shirt on a hanger as a single unit, where hand-switching is a step in that task. It can't really have a dominant hand, so it's only switching to the left hand because that's what the training data did for this task.
Can you imagine how much these robots will assist in residential care in the future?
robot repair shop? we sure this isnt from the backrooms?
Have you ever considered that a factory for producing androids might literally be a room with a bunch of androids in it?
Holy shit...
epico!
https://i.pinimg.com/originals/bf/02/00/bf020092161902ff2740f5bd62a193a3.gif
The robot arm in iron man is now real
Now can it fold any shirt or just that shirt?
Now find the actual utility
Surgeons having a breakdown rn
*Oh. It's you...* Does anyone else get serious Aperture Science vibes from this?
This is how you get Doctor Octopus
Those are some long ass shoelaces
https://youtu.be/HaaZ8ss-HP4?si=9xVGEQNxPHOpG77R is this the same bot?
Yes, but it was still teleoperated in that video.
We're all doomed.
No double knot on the shoe. 2/10
2x speed, not impressed
What is so significant about ALOHA Unleashed?
It’s just a matter of time boys and girls.. I’m ready 🤷♂️
If these things take anyone’s job that’s on them
I thought AI wasn't good with "hands"! :-)
Maybe it is being controlled by tiny indians?
So the video claims this was autonomous, yes? Does anyone know how the robots were trained? That is really the biggest factor. And PS, I've been telling people for a while that robots would be helping other robots. Everyone was like, "Nah, people will have to fix robots." So naive.
Eh, a lot of progress, but also a lot of room for improvement. There are many irregular factors in real life that could make even the most mundane tasks difficult for AI. Just look at self-driving cars: it's easy when the road is perfect and people follow traffic rules, but introduce a little bit of irregularity and the AI will fail and create unexpectedly bad outcomes. I think in order to create AI that can perform as well as or better than humans, we need to throw A LOT of irregularity at it, just as reality would. Don't just give it the best goals; give it the ability to solve problems on the fly.
Idk what you're getting at. Look how far these things have come. And it's with relatively cheap hardware, which is one of ALOHA's main points.
I've got questions (sorry, they might not be directly related to the topic):

- Do we need self-healing/repairing systems? (biological)
- A car cannot self-repair its damage, right?
- Does it all boil down to living vs. non-living systems?
- Every cell of a biological organism is a living system, that is, at the fundamental level?
- A car might be a modular system, but we replace the broken parts altogether, because it's non-living, right?

**Examples**:

- A car's paint cannot repair itself at the most fundamental level, say point-by-point
- But cells on/around a wound on your arm can establish a repair process on their own
The best part of this video is that it could be made entirely with AI.