Automation Turned Deadly: MCAS and the MAX
How an automated safety feature became part of a catastrophe
When two Boeing 737 MAX airliners crashed within months of each other in 2018–2019, investigators traced a common thread to the aircraft’s flight‑control software. A system designed to help the plane handle differently shaped engines — the Maneuvering Characteristics Augmentation System, or MCAS — was reworked late in development in ways that made its behaviour more assertive and, critically, reliant on a single angle‑of‑attack sensor. Those changes removed layers of protection and left crews exposed to a fast, confusing failure mode that they had not been trained to recognise or counter.
MCAS: what it was supposed to do
MCAS was introduced to help the MAX handle like earlier 737 models after larger, more forward‑mounted engines changed the aerodynamics. Its nominal job was modest: when angle‑of‑attack data indicated the nose was pitching up too far during manual, flaps‑up flight, MCAS trimmed the horizontal stabiliser slightly nose‑down to keep handling consistent for pilots. In its original conception it was meant to be subtle and to operate rarely.
How the system changed — and why that mattered
During development Boeing expanded MCAS’s role. The implemented version could activate more frequently and apply larger stabiliser inputs than earlier drafts, and it was wired to react to data from a single angle‑of‑attack sensor rather than cross‑checking multiple sources. In both the Lion Air and Ethiopian Airlines accidents, erroneous angle‑of‑attack data triggered repeated nose‑down inputs that pilots fought but could not overcome in time. The shift from a conservative, redundant design to a more aggressive, single‑sensor implementation was a decisive factor in the failures.
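To make the sensor question concrete, here is a deliberately simplified sketch in Python of the difference between trusting one angle‑of‑attack value and cross‑checking two. It is not Boeing's code: the function names, thresholds and units are invented purely for illustration.

    # Hypothetical illustration only -- not Boeing's code; names and thresholds are invented.
    AOA_TRIGGER_DEG = 12.0      # invented threshold for "nose pitching up too far"
    DISAGREE_LIMIT_DEG = 5.5    # invented tolerance for cross-checking the two vanes

    def trigger_single_sensor(aoa_left: float) -> bool:
        """Single-sensor logic of the kind described in the accident findings:
        one angle-of-attack value is trusted without any cross-check."""
        return aoa_left > AOA_TRIGGER_DEG

    def trigger_cross_checked(aoa_left: float, aoa_right: float) -> bool:
        """A redundant alternative: stay inactive when the two vanes disagree,
        so one faulty sensor cannot command nose-down trim on its own."""
        if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
            return False   # sensors disagree: do nothing and flag the fault
        return min(aoa_left, aoa_right) > AOA_TRIGGER_DEG

    # A damaged left vane reading 22 degrees while the right reads 5:
    print(trigger_single_sensor(22.0))         # True  -> nose-down trim commanded
    print(trigger_cross_checked(22.0, 5.0))    # False -> no automatic trim

The contrast is the point: with a cross‑check, a single faulty vane produces inaction and a fault flag rather than repeated nose‑down trim.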
Why calling MCAS “AI” is misleading — and why the label still matters
In media coverage and public debate the crashes are sometimes framed as an “AI” failure. That shorthand is tempting but imprecise. MCAS was not a machine‑learning model that trained itself from data; it was deterministic flight‑control logic: rules coded to act on specific sensor inputs. The danger, however, is the same one people worry about with opaque AI systems — automated behaviour hidden from end users and regulators, interacting with messy real‑world signals in ways its designers did not fully anticipate.
Labelling MCAS as merely “automation” can underplay how design choices — especially around transparency, redundancy and human‑machine interaction — turned a protective feature into a hazard. The accidents show that even non‑learning algorithms demand rigorous safety engineering: the same traceable requirements and independent testing we now ask of AI systems in other domains.
Organisational and regulatory failures amplified technical flaws
Technical choices did not occur in a vacuum. Multiple reviews and hearings found that problems in oversight, communication and corporate culture amplified the risk. Regulators were not always presented with the full details of MCAS as it evolved; pilot manuals initially omitted the feature; and assumptions that MCAS would rarely activate reduced pilot training on how to respond when it did. These institutional breakdowns turned an engineering mistake into a public‑safety crisis.
The fixes Boeing and regulators implemented
After the grounding of the MAX fleet, Boeing and aviation authorities required software and operational changes. The revised design limits MCAS so that it only acts when both angle‑of‑attack sensors agree, restricts it to a single activation per event, and moderates the magnitude of trim inputs. Regulators also tightened requirements for documentation, pilot training and independent verification before the type was cleared to return to service. Those changes addressed the immediate failure modes but do not erase the broader governance questions exposed by the crisis.
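Those three constraints can be pictured with a similarly simplified sketch; again, every name and number is an invented stand‑in rather than a certified value or Boeing's actual implementation.

    # Hypothetical sketch of the post-grounding constraints described above.
    # All limits and names are invented stand-ins, not certified values.
    AOA_TRIGGER_DEG = 12.0        # invented high angle-of-attack threshold
    DISAGREE_LIMIT_DEG = 5.5      # invented sensor cross-check tolerance
    MAX_TRIM_PER_EVENT_DEG = 2.0  # invented cap on nose-down stabiliser movement

    class RevisedMcasSketch:
        def __init__(self) -> None:
            self.activated_this_event = False

        def reset_event(self) -> None:
            """Called once the high angle-of-attack condition clears,
            allowing at most one new activation for the next event."""
            self.activated_this_event = False

        def trim_command(self, aoa_left: float, aoa_right: float) -> float:
            """Return a nose-down trim increment in degrees, or 0.0 for no action."""
            if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
                return 0.0                     # sensors disagree: stay inactive
            if self.activated_this_event:
                return 0.0                     # only one activation per event
            if min(aoa_left, aoa_right) <= AOA_TRIGGER_DEG:
                return 0.0                     # no high angle-of-attack condition
            self.activated_this_event = True
            return MAX_TRIM_PER_EVENT_DEG      # bounded input the crew can counter

Each guard clause corresponds to one of the required changes: sensor agreement, a single activation per event, and bounded trim authority that the crew can override.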
Lessons for the wider AI and automation debate
The MAX story is a cautionary primer for anyone deploying automation at scale. Four lessons stand out:

1. Redundancy is not optional: safety‑critical behaviour should never hinge on a single sensor or an unverified input.
2. Transparency matters: automation hidden from the pilots who must manage it, and from the regulators who certify it, cannot be managed safely.
3. Expanded authority demands re‑examination: when a system’s role grows during development, the safety assumptions, documentation and training built around the original design need to be revisited rather than carried forward.
4. Independent oversight is a safeguard, not a formality: certification loses its value when manufacturers effectively review their own work.
These are familiar refrains in AI ethics and safety circles, but the 737 MAX shows they are not abstract. In safety‑critical systems the costs of getting them wrong are immediate and final.
Where the conversation should go next
Technical fixes have returned the MAX to service under stricter conditions, but the episode remains a benchmark for how not to manage automation in regulated industries. For policymakers and engineers the imperative is to translate lessons into enforceable standards: clearer certification pathways for automated decision systems, mandatory reporting of substantive design changes, and institutional structures that reduce conflicts of interest between manufacturers and certifiers.
For journalists and the public, it is also a reminder to be precise about terms. “AI” grabs headlines, but the underlying problem in the MAX case was not artificial intelligence in the machine‑learning sense — it was a combination of aggressive automation, poor transparency and weakened safety practices. Treating that combination as an engineering and governance challenge gives us a more productive path to prevent a repeat.
Conclusion
The 737 MAX disasters were not inevitable. They were the outcome of specific design decisions, unchecked assumptions and institutional failures. As automation and AI push into more domains, the MAX case should stand as a stark example: safer systems emerge not from confidence in a piece of code but from conservative design, clear communication with users, independent review, and robust regulatory oversight. Those are not technical niceties — they are preconditions of public safety.