When Rodney Brooks talks about robotics and artificial intelligence, you should listen. He currently serves as the Panasonic Professor Emeritus of Robotics at MIT and has co-founded three key companies: Rethink Robotics, iRobot, and his current venture, Robust.ai. Brooks also directed MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997.
In fact, he likes to make predictions about the future of artificial intelligence and keeps a scorecard on his blog of how well he’s doing.
He knows what he’s talking about, and he thinks it may be time to rein in the hype surrounding generative AI. Brooks thinks it’s an impressive technology, but perhaps not as capable as many suggest. “I’m not saying LLMs aren’t important, but we have to be careful with how we evaluate them,” he told TechCrunch.
The problem with generative AI, he says, is that while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can, and humans tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “They’re usually very over-optimistic, because they use a model of a person’s performance on a task.”
The problem, he added, is that generative AI is not human, or even human-like, and it’s a mistake to try to assign human capabilities to it. He says people consider it so capable that they want to use it in applications where it doesn’t make sense.
Brooks offers his latest company, Robust.ai, which builds robotic systems for warehouses, as an example. Someone recently suggested to him that it would be cool and efficient to tell his warehouse robots where to go by building an LLM into the system. In his estimation, however, that’s not a reasonable use case for generative AI and would actually slow things down. It’s much easier to simply connect the robots to a stream of data coming from the warehouse management software.
“When you have 10,000 orders that just came in and you have to ship them in two hours, you have to optimize that. Language is not going to help. It’s just going to slow things down,” he said. “We have massive data processing and massive AI optimization and planning. That’s how we get orders done quickly.”
Another lesson Brooks has learned when it comes to robotics and artificial intelligence is that you can’t try to do too much. You must solve a solvable problem where robots can be easily integrated.
“We need automation in places where things have already been cleaned up. So my company’s example is we do a good job in warehouses, and warehouses are already constrained. The lighting doesn’t change in those big buildings. There’s no stuff lying around that people pushing carts are going to bump into. There are no plastic bags floating around. And it’s largely not in the best interest of the people who work there to be malicious toward the robot,” he said.
It’s also about robots and humans working together, Brooks explains, so his company designed these robots for practical purposes related to warehouse operations, rather than building a robot that looks like a human. In this case, it looks like a shopping cart with a handle.
“So the form factor we’re using is not humanoid robots moving around — although I’ve built and delivered more humanoid robots than anyone else. These robots look like shopping carts,” he said. “It has a handlebar, so if there’s a problem with the robot, anyone can grab the handlebar and do what they wish with it.”
After all these years, Brooks knows it’s about making technology accessible and purpose-built. “I always try to make the technology easy for people to understand, so we can deploy it at scale, and we always look at the business case; ROI is also very important,” he said.
Even so, Brooks says we have to accept that there will always be hard-to-fix edge cases when it comes to AI, ones that can take decades to resolve. “If you’re not careful about how an AI system is deployed, there will always be a long tail of special cases that take decades to discover and fix. Paradoxically, all of those fixes are AI-complete themselves.”
There is a misconception, Brooks adds, rooted mostly in Moore’s Law, that there will always be exponential growth when it comes to technology: the idea that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6, and 7 will be like. He sees a flaw in that logic: technology doesn’t always grow exponentially, despite Moore’s Law.
He uses the iPod as an example. Over successive iterations, its storage doubled, from 10GB all the way up to 160GB. If that trajectory had continued, he figured we would have an iPod with 160TB of storage by 2017, but of course we didn’t. The models sold in 2017 actually came with 256GB or 160GB because, as he pointed out, no one needed more than that.
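Brooks’s back-of-the-envelope extrapolation is just repeated doubling. A minimal sketch of that arithmetic (the helper function is ours, not his):

```python
def doubled(start_gb: int, doublings: int) -> int:
    """Capacity in GB after doubling `doublings` times from `start_gb`."""
    return start_gb * 2 ** doublings

# 10GB -> 160GB is four doublings (10, 20, 40, 80, 160).
print(doubled(10, 4))    # 160

# Continuing that trend for ten more doublings (roughly one per year,
# 2007 to 2017) lands at 163,840GB, i.e. the 160TB Brooks mentions.
print(doubled(160, 10))  # 163840
```

The point is that each step on the curve is plausible in isolation, yet the compounded result wildly overshoots what anyone actually needed.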
Brooks admits that LLMs could help at some point with domestic robots, which could perform specific tasks, especially with an aging population and not enough people to care for them. But even that, he says, may come with its own set of unique challenges.
“People say, ‘Oh, large language models are going to make robots able to do things they couldn’t do.’ That’s not where the problem is,” he said. “The problem with being able to do things is about control theory and all sorts of other hardcore math optimization.”
This could eventually lead to robots with language interfaces that are useful for people in care situations, Brooks explains. “It’s not useful in a warehouse to ask an individual robot to go out and get one thing for one order, but it might be useful for home nursing care for people to be able to say things to the robots,” he said.