Figure AI Faces Whistleblower Suit Over ‘Skull‑Fracturing’ Robots

A former safety engineer has sued Figure AI, alleging he was fired after warning executives that the startup’s humanoid robots were powerful enough to fracture a human skull and that safety plans were weakened after a large funding round.

Ex‑safety engineer sues Figure AI, raising stark safety concerns

Figure AI, a high‑profile developer of humanoid robots, has been hit with a federal whistleblower lawsuit in which a former principal safety engineer says he was dismissed after warning company leaders that the machines could cause lethal harm — including force sufficient to fracture a human skull. The complaint, filed in the Northern District of California, says the plaintiff raised documented safety objections internally and alleges the company later altered the safety roadmap shown to investors.

What the suit alleges

The plaintiff, identified in media reports as a lead product‑safety engineer, says he repeatedly warned senior managers that the company's humanoid prototypes could exert dangerously large forces. Among the examples cited in filings is an incident in which a malfunctioning robot reportedly carved a quarter‑inch gash into a steel refrigerator door, an episode the suit uses to argue the risk was real and foreseeable. The complaint says the engineer brought his concerns to executives, including the CEO and the chief engineer, and that he was terminated shortly after sending what he described as a comprehensive, final safety complaint.

Claims about investor presentations and a ‘gutted’ safety plan

Separately, the suit alleges that a detailed safety plan the engineer prepared for prospective investors was later weakened or changed before the funding round closed — a round that valuation reports say valued the company at roughly $39 billion. The plaintiff’s lawyers say that diluting a safety roadmap shown to investors could amount to misleading disclosures; the company denies the allegations and says the engineer was fired for performance reasons. The dispute raises questions about how engineering risk disclosures are handled during rapid fundraising cycles.

Why this matters for robotics and public safety

Humanoid robots are physically powerful devices. When actuators, leverage and control software combine, a limb moving at speed can deliver large transient forces. Biomechanical studies and forensic data show that skull‑fracture thresholds vary, but controlled laboratory testing and impact studies put typical fracture forces in the low thousands of newtons for adult skulls under many conditions — numbers that are achievable with unconstrained industrial actuators if no power‑and‑force limits, guards or validated safety modes are in place. Those technical realities are why safety engineering and independent validation are central to turning humanoid robots from demos into machines that can work around people.
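
To see why forces of that magnitude are plausible, a rough work‑energy estimate helps: if a moving limb's kinetic energy is absorbed over a very short stopping distance against a stiff surface, the average contact force can reach several kilonewtons. The short Python sketch below illustrates the arithmetic with purely hypothetical figures for effective arm mass, speed and stopping distance; it is an order‑of‑magnitude illustration, not a model of any particular robot or of Figure's hardware.

```python
# Rough, order-of-magnitude estimate of average contact force for a rigid impact.
# All numbers are illustrative assumptions, not measurements of any specific robot.

def impact_force(effective_mass_kg: float, speed_m_s: float, stop_distance_m: float) -> float:
    """Average contact force if the limb's kinetic energy is absorbed over a short
    stopping distance, using the work-energy relation F_avg * d = 0.5 * m * v**2."""
    kinetic_energy_j = 0.5 * effective_mass_kg * speed_m_s ** 2
    return kinetic_energy_j / stop_distance_m

# Hypothetical values: an arm segment with ~8 kg effective mass moving at 2 m/s,
# stopped over 5 mm by a stiff surface (little padding or compliance).
force_n = impact_force(effective_mass_kg=8.0, speed_m_s=2.0, stop_distance_m=0.005)
print(f"Estimated average contact force: {force_n:.0f} N")  # ~3200 N
```

With these illustrative assumptions the estimate lands around 3 kN, within the fracture range cited above; padding, compliant joints or power‑and‑force limiting would reduce it substantially, which is precisely what the engineering controls discussed below are meant to guarantee.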

Standards and accepted engineering controls

Industry practice for safe human–robot interaction relies on layered risk reduction: careful system‑level risk assessments, mechanical design choices that limit impact energy, control‑system power‑and‑force limiting, sensors and reliable separation or stop functions, and documented validation against international standards. The ISO family of robot‑safety standards, historically ISO 10218 together with the technical guidance in ISO/TS 15066, codifies how to assess risks for different body regions and defines accepted approaches such as power & force limiting and speed & separation monitoring. Recent revisions have continued to fold collaborative‑robot guidance into the core industrial‑robot framework, reflecting growing real‑world use cases in which robots and people share space.
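
As a conceptual illustration of the control‑side layer, the sketch below shows the kind of runtime check that power‑and‑force limiting and speed‑and‑separation monitoring imply: a motion command is permitted only while both a contact‑force limit and a protective separation distance are respected. The limit values, sensor readings and names here are hypothetical placeholders chosen for illustration, not figures taken from the standards or from any vendor's controller.

```python
# Minimal sketch of a supervisory safety check combining two ISO/TS 15066-style
# strategies: power-and-force limiting (PFL) and speed-and-separation monitoring (SSM).
# Limits and sensor values below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class SafetyLimits:
    max_contact_force_n: float   # body-region-specific force limit from a risk assessment
    min_separation_m: float      # protective separation distance for the current speed

def command_is_safe(measured_force_n: float,
                    human_distance_m: float,
                    limits: SafetyLimits) -> bool:
    """Allow motion only if both the force limit and the separation distance are respected."""
    pfl_ok = measured_force_n <= limits.max_contact_force_n
    ssm_ok = human_distance_m >= limits.min_separation_m
    return pfl_ok and ssm_ok

# Example: conservative placeholder limits; a controller would slow or stop on False.
limits = SafetyLimits(max_contact_force_n=140.0, min_separation_m=0.5)
print(command_is_safe(measured_force_n=90.0, human_distance_m=0.8, limits=limits))   # True
print(command_is_safe(measured_force_n=200.0, human_distance_m=0.8, limits=limits))  # False -> protective stop
```

In a real system such checks live inside certified safety functions with validated sensing and defined stop behaviours; the point of the sketch is only the layered structure the standards describe.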

Legal and regulatory contours

On the legal side, the complaint combines wrongful‑termination and whistleblower themes with a possible investor‑disclosure angle; if a safety plan shown to investors was materially altered, it could draw scrutiny from regulators or give rise to civil claims. The case also arrives at a moment when legislators and regulators are sharpening their focus on AI and robotics. Lawmakers have recently proposed or advanced measures aimed at protecting employees who report AI‑related safety risks, and debate continues over how much transparency and external auditing should be required for high‑risk systems. Those policy efforts matter because internal channels alone can leave systemic risks unaddressed when commercial pressures are high.

Company response and immediate fallout

Figure AI has publicly disputed the former employee’s account, saying the termination was for performance reasons and characterising the allegations as false. The claimant’s lawyer has argued that California law protects workers who report unsafe practices and that the courts will need to weigh both the factual record of the lab incidents and whether any retaliatory motive existed. For investors and customers, the suit increases headline risk for a company that has drawn major funding interest in recent rounds. Media outlets reporting the filing have noted that the lawsuit may be among the first whistleblower cases tied explicitly to humanoid‑robot safety.

What this means for the industry

  • Operational caution: Startups pushing hard on hardware and autonomy will need to double down on robust, independently audited safety validation to reassure investors and regulators.
  • Disclosure expectations: Fundraising processes that include technical briefings must guard against selective presentation of mitigations; auditors and counsel increasingly scrutinize claims about operational readiness.
  • Policy momentum: The complaint reinforces policy debates about whistleblower protections for AI and robotics workers and whether new regimes are needed to surface latent safety risks.

How the story could develop

Watch for several signals in the coming weeks: whether Figure responds with detailed engineering rebuttals or third‑party test data; whether the plaintiff files additional exhibits or introduces internal emails and reports; and whether regulators or standards bodies comment or open inquiries. The legal process could also surface granular technical facts about how the firm tested torque, impact and fail‑safe behaviours, facts that could matter for industry norms if they become public through litigation. For researchers, policymakers and safety engineers, the case is a reminder that the technical challenges of building physically capable robots are inseparable from organisational practices and the incentives that shape them.

Final thought

Humanoid robotics sits at the intersection of software, hardware and human vulnerability. Turning impressive demos into safe, useful machines requires not only better algorithms and actuators but also a culture of rigorous safety engineering and transparent governance. This lawsuit is likely to be an early test of how the market, the law and technical standards interact when physical risk and commercial momentum collide.

— Mattias Risberg, Dark Matter. Based in Cologne, reporting on robotics, safety and technology policy.

Mattias Risberg

Cologne-based science & technology reporter tracking semiconductors, space policy and data-driven investigations.

University of Cologne (Universität zu Köln) • Cologne, Germany