Tesla’s AI Day 2022 is scheduled for September 30th, just a week from today (October 1st for Australians). As the second part of their Artificial Intelligence event grows closer, it’s worth considering what we’re about to see and outlining some expectations for the biggest mystery of all: the working prototype of TeslaBot.
While AI at Tesla encompasses everything from their FSD software to Autobidder on their Megapacks, the single biggest unknown is their humanoid robot, first announced over a year ago, on August 20, 2021.
Companies have been working on humanoid robots for decades, so to imagine what’s possible in just a year is difficult, but this is Tesla, so expectations are high.
Here’s what Tesla needs to do to impress me at AI Day 2022 with TeslaBot.
Design and meeting the spec sheet
When TeslaBot was announced, we saw a stationary mannequin that was designed to do one thing – let us know Tesla’s intention for a form factor. The design is sleek, more human-like than we’d seen before. Sure, humanoid robots already exist, but they are typically large, cumbersome and noisy.
One of Tesla’s first tests will be to shrink the electronics in the TeslaBot to fit inside the envelope provided by the humanoid form. It’s easy to imagine the first working prototype being slightly larger, with the size reduced over time, before Tesla commercialises it for other businesses or even private citizens to purchase.
During the announcement, Elon Musk revealed that the TeslaBot would have the following specs:
- Height: 5 foot 8 inches, or 172 cm
- Weight: 125 pounds, or 56 kilograms
- Speed: 5 miles per hour, or 8 kilometres per hour
- Carrying capacity: 45 pounds, or 20.4 kilograms
- Deadlift: 150 pounds, or 68 kilograms
- Arm extend lift: 10 pounds, or 4.5 kilograms
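The imperial and metric figures above are easy to sanity-check. A quick sketch (the conversion factors are standard; the slide's rounding is Tesla's):

```python
# Sanity-check the announced TeslaBot specs by converting
# the imperial figures to metric with standard factors.

LB_TO_KG = 0.453592   # pounds to kilograms
MI_TO_KM = 1.609344   # miles to kilometres
IN_TO_CM = 2.54       # inches to centimetres

height_cm = (5 * 12 + 8) * IN_TO_CM   # 172.7 cm (the slide rounds to 172)
weight_kg = 125 * LB_TO_KG            # 56.7 kg
speed_kmh = 5 * MI_TO_KM              # 8.0 km/h
carry_kg = 45 * LB_TO_KG              # 20.4 kg
deadlift_kg = 150 * LB_TO_KG          # 68.0 kg
arm_lift_kg = 10 * LB_TO_KG           # 4.5 kg

print(f"{height_cm:.1f} cm, {weight_kg:.1f} kg, {speed_kmh:.1f} km/h")
```

The numbers line up, give or take the rounding on the announcement slide.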
Most companies would have a working prototype before making claims about capabilities, but I believe Tesla are so confident in their computer simulations that they likely didn’t have a TeslaBot working to the point where these capabilities were actually tested. Now, a year on, they should have, so it’ll be very interesting to see if they meet or exceed these values.
A face of data or personality
I think Tesla needs to show the display on the head working, as this will give us further insight into how Tesla thinks about Humanoids.
When you interact with the TeslaBot, will it have eyes, will it blink, will it smile, or will it use a more generic, non-threatening light sequence, similar to when you speak to voice assistants like Amazon’s Alexa?
There will be times when having a display will be useful, like showing remaining battery life, or the progress on tasks – pallets unloaded today: 3/15. Tesla says it will first use the TeslaBot in its own Gigafactories, so applications like this make the most sense in that context, but the longer-term vision for the bot is much broader.
If TeslaBot is to accommodate human/robot interactions, say in education, aged care or customer service, then having the ability to visually express itself with digital eye movements, or moving lips (particularly for deaf people who lip-read), will be important. While it’s possible Tesla will anthropomorphise their robot and create something like Sonny from the movie I, Robot, that’s likely much further away, with the immediate need to make the robot walk before it can run.
Seeing a working display will be important in helping us understand how Tesla thinks humans and robots should and could interact.
Walking human-like and not falling over
The reason TeslaBot will exist in the world, and millions of dollars are being spent on its development, is so it can be deployed to tasks that are boring, repetitive and/or unsafe for humans. The humanoid form is what enables the robot to replace humans in specific roles.
One of the most basic tasks the TeslaBot is likely to be deployed to is moving items (likely car parts) around the factory, so walking between inbound deliveries and the production line is critical.
If the bot walks like a robot, fine, maybe we can get past that, but given Tesla’s focus on AI, I would expect that the TeslaBot has learned how to walk naturally, very human-like, by leveraging reinforcement learning.
There’s a great video by Hybrid Robotics out of UC Berkeley that shows a bipedal robot learning to walk, but then being able to accommodate different floor surfaces and different payloads. These start with computer simulations and training and are then applied to the real world. If this is what’s happening at universities a year ago, I’d imagine, well-resourced, incredibly talented engineers at Tesla can meet and beat this capability.
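The core idea in these reinforcement-learning walkers is a reward function that pays the robot for making progress and punishes it for wasting energy or falling. Tesla hasn't published its training setup, so the weights and structure below are purely illustrative assumptions:

```python
# Toy sketch of the kind of per-timestep reward a simulated
# bipedal walker might be trained against: reward forward
# progress, penalise energy use, heavily penalise falling.
# Weights are illustrative assumptions, not Tesla's.

def walking_reward(forward_velocity, torque_cost, fell_over,
                   w_velocity=1.0, w_energy=0.05, fall_penalty=100.0):
    """Return a scalar reward for one simulation timestep."""
    reward = w_velocity * forward_velocity - w_energy * torque_cost
    if fell_over:
        reward -= fall_penalty
    return reward

# Steady walking at 1.2 m/s with modest joint torques scores well...
print(walking_reward(1.2, 4.0, False))   # ≈ 1.0
# ...while a fall wipes out many timesteps of progress.
print(walking_reward(1.2, 4.0, True))    # ≈ -99.0
```

Training millions of simulated steps against a reward like this, across varied floor surfaces and payloads, is what lets the learned gait transfer to the real world.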
It’d be really nice to avoid another Cybertruck window smashing incident, so I really hope the robot does not fall down, at least not by accident. It is possible that Tesla shows an ability for the TeslaBot to recover in the unlikely event that it does fall over.
If the TeslaBot can do this, it would provide confidence the robot is fairly self-sufficient and unlikely to require human intervention, even in the worst-case scenario – important if you’re trying to replace humans, just like understanding it’s running out of battery and going to find the nearest charger.
Battery Life and Charging
There’s been no detail shared on the battery life expectations for TeslaBot, but that’s an incredibly important figure to understand just how useful the Bot can be. If the TeslaBot is designed to replace the work done by a human on an 8 hour shift, 8 hours would be a decent objective to have. Eventually, it’d be nice to think the Bot could far exceed what a human could do, remembering that humans typically take breaks for lunch etc inside that envelope.
The length of time TeslaBot can operate depends on two key things: the battery capacity available, and how efficient the electronics are.
It is not known which cell technology Tesla will use in the TeslaBot. It’s possible they use the most energy-dense cells they have, the 4680s, which would mean they could use the fewest cells for great battery life. There are definitely challenges with using 4680s: firstly, they’re in short supply, going into Texas-made Model Ys, soon the Tesla Semi, and next year the Cybertruck and potentially even the Roadster (although more likely 2024).
This means adding the TeslaBot to the list of demands for 4680s is unlikely, paired with the fact that the larger diameter of the 4680 would make it really difficult to fit inside the thinner parts of the humanoid design. As in Tesla’s vehicles, you’d want the majority of the weight low to the ground, but there’s not a lot of room for large cylindrical cells in the legs.
It’s possible that as Tesla transitions more of its vehicles to the new 4680s, the freed-up supply of 2170 cells, manufactured by partners, could make them a great candidate for the TeslaBot. The question is, do they hold enough energy to power the TeslaBot for more than a couple of hours at a time?
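A back-of-envelope calculation suggests 2170s could plausibly cover a full shift. Tesla has published no power-draw or pack figures for the bot, so every input here is an assumption: an average working draw of 300 W, and typical 2170 cell figures of roughly 17 Wh and 68 g per cell.

```python
# Back-of-envelope: how many 2170 cells would an 8-hour shift need?
# All inputs are assumptions - Tesla has published no power-draw
# or cell figures for the TeslaBot.

CELL_WH = 17.0        # ~typical 2170 cell (about 4.75 Ah at 3.6 V nominal)
CELL_MASS_KG = 0.068  # ~68 g per 2170 cell

avg_power_w = 300     # assumed average draw while working
shift_hours = 8

energy_needed_wh = avg_power_w * shift_hours   # 2400 Wh
cells_needed = energy_needed_wh / CELL_WH      # ~141 cells
pack_mass_kg = cells_needed * CELL_MASS_KG     # ~9.6 kg of cells

print(f"{energy_needed_wh} Wh -> ~{cells_needed:.0f} cells, ~{pack_mass_kg:.1f} kg")
```

Roughly 10 kg of cells (before pack hardware) against a 56 kg weight budget is tight but not absurd, which is why the 2170 looks like a reasonable fit if the average draw really is in the hundreds of watts rather than kilowatts.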
In the factory setting, Tesla could have hundreds of these bots working in shifts, alternating between charging and working. With enough bots in the workforce, battery life may not matter as much as we think it does, but more bots mean more cost, remembering the whole goal is to deliver lower cost and higher productivity than a human.
There’s an outside chance that Tesla uses a completely different battery form factor for the TeslaBot to facilitate great energy density while accommodating the tight form factor they set themselves with the humanoid robot, but the effort required makes this seem incredibly unlikely.
Tesla has to talk about how the bot recharges itself. Self-charging is an expectation in itself: if recharging requires a human, the value proposition kind of breaks down. In theory, there are many options here, but the most likely seems to be that the feet of the bot would stand on a dock and recharge, similar to how our robot vacuums do it.
I expect Tesla’s work on Superchargers to pay dividends when it comes to the charging infrastructure; however, when we’re talking about charging infrastructure inside buildings, this needs to be a simple solution that doesn’t require large capital works.
A recharge time of 30-60 minutes would be really inviting; a recharge window overnight is certainly less exciting.
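The arithmetic behind that target is simple. Assuming a pack of around 2.4 kWh (an assumed figure, roughly an 8-hour working shift at a few hundred watts), the charger power needed for a given recharge window is:

```python
# Rough charging-power arithmetic: what does a 30-60 minute
# recharge imply? The pack size is an assumption - Tesla has
# published no official battery figure for the bot.

pack_kwh = 2.4   # assumed pack, roughly an 8-hour working shift

for minutes in (30, 60):
    power_kw = pack_kwh / (minutes / 60)
    print(f"{minutes} min recharge -> ~{power_kw:.1f} kW dock")
```

Even the 30-minute case lands around 5 kW, comparable to a home AC wall charger rather than a Supercharger, which is exactly the kind of low-capital-works solution an indoor dock would need.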
Navigating a dynamic environment with Computer Vision
When Tesla announced they were making a robot, most people kind of accepted they’d be able to do it, despite having no experience in building robots (except the robot on wheels known as their cars).
The reason I, and many others, believe Tesla has a reasonable chance at creating something really capable here is their work on AI as part of developing the autonomy stack for their cars. What we see from the FSD Beta videos is Tesla’s ability to combine multiple camera feeds, send them to the FSD computer for processing and infer a lot about the world around the vehicle.
If Tesla takes this same principle and much of the same hardware stack, and applies it to a robot, it’s conceivable that the robot will be able to understand the world around it in a way we’ve never seen from a robot before. Most competitors use an instruction set that is hand-crafted by programmers, which is very fragile, because the second you have an unexpected variation, the outcome quickly becomes unpredictable.
TeslaBot will use a very different approach, using the cameras to create a view of the environment around it, monitoring for change and adapting accordingly. In the car context, Tesla refers to something called driveable space, or in Ashok’s latest video, they progressed to using an occupancy network. In the robot context, this would effectively be walkable space, and it isn’t done in two dimensions, but three.
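The walkable-space idea can be sketched with a coarse occupancy grid: cells the cameras see as obstacles get marked occupied, and the robot plans only over what remains free. Tesla's actual occupancy network is a learned 3D model; this toy 2D version just shows the concept.

```python
# Minimal sketch of "walkable space": an occupancy grid where
# obstacle cells are marked True and the robot may only step
# into free (False) cells. Tesla's real occupancy network is a
# learned 3D model; this is just the underlying idea.

import numpy as np

grid = np.zeros((6, 8), dtype=bool)   # False = free, True = occupied
grid[2, 2:6] = True                    # e.g. a pallet dropped in the aisle

def walkable_neighbours(cell, grid):
    """Free 4-connected cells the robot could step into next."""
    r, c = cell
    out = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] and not grid[nr, nc]:
            out.append((nr, nc))
    return out

print(walkable_neighbours((1, 3), grid))  # [(0, 3), (1, 2), (1, 4)] - the aisle cell below is blocked
```

When the environment changes – the pallet is picked up, or a person walks through – the grid is re-estimated from the cameras and the set of walkable cells updates with it.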
What I want to see is Tesla demonstrate their ability to adapt to a changing environment. Throw a broom down in the path the robot was walking on and have it step over the broom, or better yet, pick it up, so others don’t have to.
I’d love to see a demonstration of it looking at a series of objects, selecting the right one, based on the task list provided, and then delivering that to a location, perhaps through a series of doors. These doors could be open, closed, with different handles etc, to show the diversity of applications the TeslaBot could deal with.
The real key here is to demonstrate Tesla’s strength in computer vision, that is, understanding objects in 3D space. Then the challenge is finding what’s possible in terms of tasks, and how difficult it is to train TeslaBot in new environments and on new tasks.
Learning, Training and the Marketplace of ideas
I expect Tesla will have to go into detail about how TeslaBot learns. What does it take to train TeslaBot for new skills, and is it exclusively Tesla, or is there a platform where others can help?
TeslaBot could have a learning mode, where it observes you perform a task, let’s say get a drink out of the fridge and place it on the coffee table. It may ask you to perform the task a number of times until it’s able to perform the task for you.
Another scenario is that this learning mode requires you to pose the robot and the software can animate and automate the movements between the poses to achieve the task.
Another possibility is that Tesla releases a TeslaBot Motion Cap suit, where businesses that need to train the bot in regular tasks wear the suit, perform the task, then feed the data back to Tesla to have the bot learn how to perform it. While this certainly sounds like a lot of work – let’s say it takes 10 hours – it would then enable the robot to replace thousands of hours per year, accommodate a variety of differences in the environment and work 24/7.
I would love to see Tesla allow 3rd parties to develop skills for the bot and sell them on a marketplace. Others looking to have their bot perform the same task could skip the training time and simply purchase the skill through their Tesla app, while Tesla takes a cut along the way.
The Next Date
At AI Day 2021, we got the TeslaBot announcement. At AI Day 2022, we’re going to find out a lot more information about the bot and see a working prototype. What we don’t know is the next date in the timeline: will Tesla build a hundred prototypes and trial them in the factory during 2023, before selling them commercially in 2024?
Will they ever sell directly to consumers and if they do, what’s an expected price point? It’s worth remembering the Boston Dynamics Spot costs US$75,000 and won’t have nearly the functionality of a humanoid robot. If you can replace a worker in a factory and it’ll work around the clock without breaks, holidays, sick pay, superannuation, overtime etc then it’s likely going to be worth a lot more to a business.
I love the idea of consumers buying TeslaBot for $10-$50k, but in reality, I think we’re many, many years away before that happens.
To impress me at AI Day 2022, Tesla needs to show a TeslaBot prototype that:
- Is the form factor shown a year ago
- Walks like a human
- Has a digital face (shows data as well as eyes etc)
- Can understand some level of voice command (even if it’s just “stop”)
- Leverages Computer Vision to select one object from many
- Adapts to a dynamic environment
- Avoids Humans
- Opens doors
- Walks up/down stairs
- Can deal with different floor surfaces
- Can get up if it falls over
- Can charge itself
- Lasts longer than 2 hours on a charge
If TeslaBot can identify that an object needs to be moved, but is too heavy for a single bot and can team up with other bots to lift it (as humans would do), that will be amazing.