MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating generative AI | TechCrunch

When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics (emeritus) at MIT, he also co-founded three key companies: Rethink Robotics, iRobot and his current venture, Robust.ai. Brooks also directed MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade, beginning in 1997.

In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he’s doing.

He knows what he’s talking about, and he thinks it might be time to put the brakes on the screaming hype that is generative AI. Brooks thinks it’s impressive technology, but perhaps not as capable as many assume. “I’m not saying LLMs aren’t important, but we have to be careful [with] how we rate them,” he told TechCrunch.

He says the problem with generative AI is that while it’s perfectly capable of a certain set of tasks, it can’t do everything a human can, and people tend to overestimate its capabilities. “When a person sees an AI system performing a task, they immediately generalize it to things that are similar and make an assessment of the AI system’s competence; not just its performance on that task, but its competence around it,” Brooks said. “And they tend to be overly optimistic, and that’s because they’re using a model of a person’s performance on a task.”

He added that the problem is that generative AI is not human or even human-like, and it is wrong to try to ascribe human abilities to it. He says people see it as so capable that they even want to use it for applications that don’t make sense.

Brooks offers his latest company, Robust.ai, which builds warehouse robotics systems, as an example of this. Someone recently suggested to him that it would be great and efficient to tell his warehouse robots where to go by building an LLM into his system. In his view, however, this is not a reasonable use case for generative AI and would actually slow things down. It is much easier to simply connect the robots to a data stream coming from the warehouse management software.

“When you have 10,000 orders that just came in that you need to ship in two hours, you have to optimize for that. Language will not help; it will just slow things down,” he said. “We have massive data processing and massive AI optimization and scheduling techniques. And that’s how we fulfill orders quickly.”
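To make the contrast concrete, here is a minimal sketch (in Python, with entirely hypothetical class and field names, not Robust.ai’s actual software) of the kind of pipeline Brooks is describing: structured orders flow from the warehouse management system straight into a scheduler that assigns work to robots, with no natural-language step in the loop.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: structured orders flow straight from the warehouse
# management software (WMS) into a scheduler. No LLM or natural-language
# command sits in the critical path.

@dataclass
class Order:
    order_id: str
    shelf_location: str      # where the item sits in the warehouse
    deadline_minutes: int    # time remaining before it must ship

@dataclass
class Robot:
    robot_id: str
    queued_tasks: List[str]

def assign_orders(orders: List[Order], robots: List[Robot]) -> None:
    """Greedy toy scheduler: most urgent orders first, least-loaded robot next."""
    for order in sorted(orders, key=lambda o: o.deadline_minutes):
        robot = min(robots, key=lambda r: len(r.queued_tasks))
        robot.queued_tasks.append(f"pick {order.order_id} @ {order.shelf_location}")

# Usage: a burst of orders from the WMS feed gets dispatched in one pass.
incoming = [Order("A1", "aisle-3", 45), Order("B2", "aisle-7", 120), Order("C3", "aisle-1", 30)]
fleet = [Robot("cart-01", []), Robot("cart-02", [])]
assign_orders(incoming, fleet)
for r in fleet:
    print(r.robot_id, r.queued_tasks)
```

The point of the sketch is architectural: when the input is already structured data, routing it through language only adds latency before the optimization step that actually fulfills the orders.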

Another lesson Brooks learned when it comes to robots and AI is that you can’t try to do too much. You need to solve a solvable problem where robots can be integrated easily.

“We need to automate in places where things have already been cleaned up. So the example with my company is that we do quite well in warehouses, and warehouses are actually quite constrained. The lighting doesn’t change in these large buildings. There’s nothing lying on the floor, because people pushing carts would bump into it. There are no floating plastic bags. And it’s largely not in the interest of the people who work there to be malicious toward the robot,” he said.

Brooks explains that it’s also about robots and humans working together, so his company designed its robots for practical warehouse work rather than building a humanoid. In this case, the robot looks like a shopping cart with a handle.

“So the form factor we’re using isn’t humanoids walking around – even though I’ve built and delivered more humanoids than anyone else. They look like shopping carts,” he said. “There’s a handlebar, so if there’s a problem with the robot, a person can grab the handlebar and do whatever they want with it.”

After all these years, Brooks has learned that it’s about making the technology accessible and purpose-built. “I always try to make the technology easy for people to understand, so that we can deploy it at scale, and I always look at the business case; return on investment is also very important.”

Even so, Brooks says we have to accept that there will always be hard-to-solve outliers when it comes to AI, ones that could take decades to resolve. “Without careful consideration of how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix. Paradoxically, all of those fixes are themselves AI-complete.”

Brooks adds that there’s a misguided belief, largely thanks to Moore’s Law, that technology always improves exponentially, the idea being that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6 and 7 will be like. The flaw in that logic, he says, is that technology does not always grow exponentially, despite Moore’s Law.

He uses the iPod as an example. Over several generations, its storage did in fact double with each iteration, from 10GB up to 160GB. If it had continued on that trajectory, he figured we’d have had an iPod with 160TB of storage by 2017, but of course we didn’t. The models sold in 2017 actually came with 256GB or 160GB because, as he pointed out, nobody really needed more than that.
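The extrapolation is just repeated doubling. Here is a back-of-the-envelope sketch, assuming (my assumption, for illustration) roughly one doubling per year, starting from the 160GB model and taking it as the roughly 2007 generation:

```python
# Back-of-the-envelope version of the doubling extrapolation Brooks describes.
# Assumption (for illustration only): one capacity doubling per year,
# starting from the 160GB iPod, treated here as the ~2007 model.

capacity_gb = 160
year = 2007

while year < 2017:
    capacity_gb *= 2   # "if the trend had continued..."
    year += 1

print(f"{year}: {capacity_gb} GB (~{capacity_gb / 1024:.0f} TB)")
# -> 2017: 163840 GB (~160 TB), the absurd endpoint that shows why
#    blind exponential extrapolation breaks down
```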

Brooks acknowledges that LLMs could help at some point with home robots where they could perform specific tasks, especially with an aging population and not enough people to care for them. But even that, he says, can come with its own set of unique challenges.

“People say, ‘Oh, large language models are going to make robots able to do things they couldn’t do.’ That’s not the problem. The problem with being able to do things is related to control theory and all sorts of other hardcore mathematical optimization,” he said.

Brooks explains that this could eventually lead to robots with useful language interfaces for people in caregiving situations. “It’s not useful in a warehouse to tell an individual robot to go out and pick up one thing for one order, but it might be useful in home care for people to say things to the robots,” he said.
