
Humanoid robots have promised a silent revolution in homes for years: human-shaped machines capable of performing everyday tasks, interacting with their environment, and coexisting with people like any other advanced appliance. The race to build "universal assistants" is already well underway, and one of the companies that stands out the most is Figure AI.
The California-based company, founded by Brett Adcock, has become one of the most visible faces of the new generation of mechanical assistants.
Its roadmap spans three generations of robots: Figure 01, its first prototype; Figure 02, a more advanced model; and Figure 03, its latest design, intended for large-scale production and, eventually, the home.
The company's goal is for these robots to integrate into everyday life without friction. Their danger, as has just come to light, is the power they hide beneath their metal frames.
At the height of interest in humanoids, driven also by advances in generative artificial intelligence and improvements in autonomous navigation, a new scandal is once again putting the spotlight on the safety of these machines. This time, it comes from within the company itself.
A former safety official alleges that Figure 02 could fracture a human skull
On November 21, 2025, Robert Gruendel, the company's former head of product safety, filed a federal lawsuit in California accusing Figure AI of ignoring serious warnings about the real power of its robots. According to his complaint, the model at the center of the problem is Figure 02, a humanoid capable of generating enough force to fracture an adult's skull in the event of a direct impact.
Gruendel recounts that during internal tests, one of the robots accidentally struck a refrigerator with enough force to damage it, an incident that, had it involved a person, would have had dire consequences. He also claims that both CEO Brett Adcock and chief engineer Kyle Edelberg downplayed these risks even after a multi-million-dollar funding round, and that his proposed safety roadmap was promptly discarded.
Shortly after raising his warnings, Gruendel was fired. He asserts that the dismissal was retaliation, while the company maintains it was due to poor performance. Beyond the labor dispute, the complaint points to a fundamental problem: the possibility that a supposedly harmless domestic robot could execute movements with lethal force.
What does the complaint say?
The complaint states that, from the beginning, there were no formal safety measures, incident reports, or documented risk assessments. The only external safety officer was a contractor with no robotics experience.
According to Gruendel, managers tolerated failures, rejected key controls such as an emergency stop button at an early stage because they "didn't like it aesthetically," and modified the safety plan before presenting it to investors, which he interprets as a potentially fraudulent practice.
What Figure AI claims
The company completely rejects the former engineer's version. In its official response, Figure AI asserts that the accusations are "false" and that it plans to defend itself "vigorously" in court. According to the company, Robert Gruendel's dismissal had nothing to do with safety warnings, but with alleged poor performance in his role.
The company insists that its team works to strict standards and that safety is part of the design of every one of its models. It also argues that the complainant paints a distorted picture of the internal process, one that, in its view, does not reflect the reality of the development of Figure 02, the humanoid at the center of the complaint.
Figure AI acknowledges that it is immersed in an accelerated process of innovation, but it categorically denies cutting basic controls or intentionally ignoring risks in its robots. For the company, the lawsuit is an attempt to discredit its business at a moment of intense media attention.
The arrival of humanoids in the home opens an urgent debate about safety
Figure AI aims to deploy tens of thousands of its models, including the more advanced Figure 03, in real-world environments in the coming years. But this case reveals the risks of accelerating robot adoption without a clear external oversight framework. To move weight, manipulate objects, and operate with precision, these robots need powerful motors, and that same power can pose a danger if something goes wrong or if the design doesn't include redundant brakes and controls.
The case of Figure 02 illustrates this tension between innovation and safety. If a robot designed to share a living room can, under certain conditions, generate enough force to cause serious injury, then talk of regulations, audits, and liability becomes inevitable. The industry is moving quickly, but legislation and safety protocols still lag behind.