My Testimony Today Before the House Subcommittee on Aviation

STATEMENT OF CHESLEY B. “SULLY” SULLENBERGER III

Subcommittee on Aviation
of the
United States House Committee on Transportation and Infrastructure

June 19, 2019

Thank you, Chairman Larsen, Ranking Member Graves, Chairman DeFazio, Ranking Member Graves, and other members of the committee. It is my honor to appear today before the Subcommittee on Aviation.

We are here because of the tragic crashes, within five months of each other, of Lion Air 610 and Ethiopian 302: two fatal accidents, with no survivors, involving a new aircraft type, something that is unprecedented in modern aviation history.

Like most Americans and many others around the world, I am shocked and saddened by these two awful tragedies and the terrible loss of life. Now we have an obligation to find out why these tragic crashes happened, and to keep them from ever happening again.

These crashes are demonstrable evidence that our current system of aircraft design and certification has failed us.

We do not yet know all the ways in which it has failed us. Multiple investigations are ongoing. We owe it to everyone who flies to find out where and how the failures occurred, and what changes must be made to prevent them from happening in the future.

It is obvious that grave errors were made that have had grave consequences, claiming 346 lives.

The accident investigations of these crashes will not be completed for many months, but some things are already clear.

Accidents are the end result of a causal chain of events, and in the case of the Boeing 737 MAX, the chain began with decisions that had been made years before, to update a half-century-old design.

Late in the flight testing of the 737 MAX, Boeing discovered an aircraft handling issue. Because the 737 MAX engines were larger than the engines on previous 737 models, they had to be mounted higher and farther forward for ground clearance, which reduced the aircraft’s natural aerodynamic stability in certain conditions. Boeing decided to address the handling issue by adding a software feature, the Maneuvering Characteristics Augmentation System (MCAS), to the 737 MAX. MCAS was made autonomous, able in certain conditions to move a secondary flight control by itself to push the nose down without pilot input.

In adding MCAS, Boeing introduced a computer-controlled feature to a human-controlled airplane without also giving it the integrity, reliability and redundancy that a computer-controlled system requires.

Boeing also designed MCAS to rely on data from only one Angle of Attack (AOA) sensor, not two. This decision allowed false data from a single sensor to wrongly trigger the activation of MCAS, creating a single point of failure, which violates widely held aircraft design principles.
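
To make the single-point-of-failure concern concrete, the following deliberately simplified sketch (my illustration, with invented names and thresholds, not Boeing’s actual logic) contrasts a trigger that trusts one AOA vane with one that cross-checks two:

```python
# Illustrative sketch only, not flight-control code: it contrasts a trigger
# that trusts a single Angle of Attack (AOA) vane with one that cross-checks
# two, the common defense against a single point of failure.

AOA_THRESHOLD_DEG = 15.0   # hypothetical activation threshold
MAX_DISAGREE_DEG = 5.5     # hypothetical cross-check tolerance

def trigger_single_sensor(aoa_left: float) -> bool:
    """Single-sensor logic: one faulty vane can command nose-down trim."""
    return aoa_left > AOA_THRESHOLD_DEG

def trigger_cross_checked(aoa_left: float, aoa_right: float) -> bool:
    """Dual-sensor logic: disagreement inhibits activation, so false data
    from either vane cannot drive the stabilizer on its own."""
    if abs(aoa_left - aoa_right) > MAX_DISAGREE_DEG:
        return False   # sensors disagree: annunciate the fault, do not act
    return min(aoa_left, aoa_right) > AOA_THRESHOLD_DEG

# A failed vane reading an invented 25 degrees against a healthy one
# reading 5 degrees triggers the first function but not the second.
print(trigger_single_sensor(25.0))        # True  -> activation
print(trigger_cross_checked(25.0, 5.0))   # False -> inhibited
```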

On both accident flights, the triggering event was the failure of an AOA sensor. We do not yet know why the AOA sensors on these flights generated erroneous information and triggered MCAS, whether because they were damaged, sheared off after being struck, improperly maintained or repaired, or for some other reason.

Boeing designers also gave MCAS too much authority, meaning that they allowed it to autonomously move the horizontal stabilizer to the full nose-down limit.

And MCAS was allowed to move the stabilizer in large increments, rapidly and repeatedly, until the limit was reached. Because it moved stabilizer trim intermittently, it was more difficult for pilots to recognize as a runaway trim situation (an uncommanded and uncontrolled trim movement emergency), as appears to have happened in the first crash.
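
The ratcheting pattern described above, intermittent bursts of nose-down trim rather than one continuous movement, can be illustrated with a toy sketch using invented numbers:

```python
# Toy sketch with invented numbers, not real trim values. Each cycle applies
# a burst of nose-down trim; the pilot's counter-trim only partially offsets
# it, so net trim walks toward the nose-down limit without ever looking like
# one continuous runaway.

NOSE_DOWN_LIMIT = 4.7   # hypothetical stabilizer travel limit
MCAS_BURST = 2.5        # hypothetical trim added per activation
PILOT_COUNTER = 1.0     # hypothetical partial counter-trim per cycle

position = 0.0
for cycle in range(1, 5):
    position = min(position + MCAS_BURST, NOSE_DOWN_LIMIT)   # MCAS burst
    position = max(position - PILOT_COUNTER, 0.0)            # pilot response
    print(f"after cycle {cycle}: {position:.1f} units nose-down")
```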

Though MCAS was intended to enhance aircraft handling, it had the potential to do the opposite: by moving the stabilizer to its limit, it could overpower the pilots’ ability to raise the nose and stop a dive toward the ground. Thus a trap was inadvertently set during the aircraft design phase, one that would turn out to have deadly consequences.

Obviously Boeing did not intend for this to happen. But to make matters worse, even the existence of MCAS, much less its operation, was not communicated to the pilots who were responsible for safely operating the aircraft until after the first crash.

Also with the MAX, Boeing changed the way pilots can stop stabilizer trim from running when it shouldn’t. In every previous version of the 737, pilots could simply move the control wheel to stop the trim from moving, but in the MAX, with MCAS activated, that method of stopping trim no longer worked. The logic was that if MCAS activated, it had to be because it was needed, and pulling back on the control wheel shouldn’t stop it.
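
A deliberately simplified sketch of that logic change (the function names and structure are my illustration; the actual flight-control implementation is not public) might look like this:

```python
# Simplified sketch, invented names: how the control-column cutout described
# above could differ between earlier 737s and the MAX with MCAS active.

def nose_down_trim_cutout_classic(column_pulled_aft: bool) -> bool:
    """Earlier 737s: pulling back on the control wheel stopped
    automatic nose-down trim."""
    return column_pulled_aft

def nose_down_trim_cutout_max(column_pulled_aft: bool,
                              mcas_active: bool) -> bool:
    """737 MAX: with MCAS active the cutout no longer applies, on the
    logic that if MCAS fired it must be needed."""
    if mcas_active:
        return False   # cutout bypassed; MCAS keeps trimming nose-down
    return column_pulled_aft
```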

It is clear that the original version of MCAS was fatally flawed and should never have been approved.

It has been suggested that even if the MCAS software had flaws, the pilots on these flights should have performed better and been able to solve the sudden unanticipated crises they faced. Boeing has even said that in designing MCAS they did not categorize a failure of MCAS as critical because they assumed that pilot action would be the ultimate safeguard.

We owe it to everyone who flies, passengers and crews alike, to do much better than to design aircraft with inherent flaws that we expect pilots to compensate for and overcome.

Pilots must be able to handle an unexpected emergency and still keep their passengers and crew safe, but we should first design aircraft for them to fly that do not have inadvertent traps set for them.

We must also consider the human factors of these accidents.

From my 52 years of flying experience and my many decades of safety work, I know that nothing happens in a vacuum. We must find out how design issues, training, policies, procedures, safety culture, pilot experience and other factors affected the pilots’ ability to handle these sudden emergencies, especially in this global aviation industry.

Dr. Nancy Leveson, of the Massachusetts Institute of Technology, has a quote that succinctly encapsulates much of what I have learned over many years: “Human error is a symptom of a system that needs to be redesigned.”

These two recent crashes happened in foreign countries, but if we do not address all the important issues and factors, they can and will happen here. To suggest otherwise is not only wrong, it’s hubris.

As one of our preeminent human factors scientists, Dr. Key Dismukes, now retired as Chief Scientist for Human Factors at the NASA Ames Research Center, has said, “Human performance is variable and it is situation-dependent.”

I’m one of the relatively small group of people who have experienced such a sudden crisis – and lived to share what we learned about it. I can tell you firsthand that the startle factor is real and it is huge – it interferes with one’s ability to quickly analyze the crisis and take effective action.

Within seconds, these crews would have been fighting for their lives in the fight of their lives.

These two accidents, as well as Air France 447, which crashed in the Atlantic in June 2009, are also vivid illustrations of the growing interconnectedness of devices in aircraft. In older aircraft designs, devices were mostly stand-alone: a fault or failure was limited to a single device, could quickly be traced to it, and remained isolated. But with integrated cockpits, and with data being shared and used by many devices, a single fault or failure can now have rapidly cascading effects through multiple systems, setting off multiple cockpit alarms, cautions and warnings. The resulting distraction and added workload can quickly make the situation ambiguous, confusing and overwhelming, and much harder to analyze and solve.

In both 737 MAX accidents, the failure of an AOA sensor quickly caused multiple instrument indication anomalies and cockpit warnings. And because on this airplane type the AOA sensors feed the airspeed and altitude displays, the failure triggered simultaneous false warnings that the aircraft was flying both too slowly and too fast. The too-slow warning was a ‘stick shaker’ rapidly and loudly shaking the pilot’s control wheel; the too-fast warning was a ‘clacker’, another loud repetitive noise signaling overspeed. These sudden, loud false warnings would have created major distractions and made it even harder to quickly analyze the situation and take effective corrective action.
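
To see how one false input can produce contradictory warnings, consider a toy sketch (the thresholds and the correction formula are invented, not the actual air data computation):

```python
# Toy sketch, invented thresholds and a made-up correction formula: a single
# false AOA value that feeds both stall detection and the airspeed
# computation can raise "too slow" and "too fast" warnings simultaneously.

def warnings_from_bad_aoa(aoa_deg: float, measured_speed_kts: float) -> list:
    active = []
    if aoa_deg > 15.0:                        # stall logic trusts the vane
        active.append("STICK SHAKER (approach to stall)")
    corrected = measured_speed_kts * (1 + 0.02 * aoa_deg)  # toy correction
    if corrected > 340.0:                     # hypothetical overspeed limit
        active.append("OVERSPEED CLACKER")
    return active

# An invented false reading of 40 degrees fires both warnings at once.
print(warnings_from_bad_aoa(40.0, 280.0))
```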

I recently experienced all these warnings in a 737 MAX flight simulator during recreations of the accident flights. Even knowing what was going to happen, I could see how crews could have run out of time and altitude before they could have solved the problems.

Prior to these accidents, I doubt that any U.S. airline pilots had been confronted with this scenario in simulator training.

We must make sure that everyone who occupies a pilot seat is fully armed with the information, knowledge, training, skill, experience and judgment they need to be able to be the absolute master of the aircraft and all its component systems, and of the situation, simultaneously and continuously throughout a flight.

As aviation has become safer, it has become harder to avoid complacency. We have made air travel so safe and routine that some have assumed that, because we have not had many accidents in recent years, we must be doing everything right.

But we can no longer define safety solely as the absence of accidents. We must do much more than that; we must be much more proactive than that.

We need to proactively find flaws and risks and mitigate them before they lead to harm.

We must investigate accidents before they happen.

Each aircraft manufacturer must have a comprehensive safety risk assessment system that can review an entire aircraft design holistically, looking for risks, not only singly, but in combination.

We must also look at the human factors and assumptions made about human performance in aircraft design and certification, and pilot procedure design.

In addition to fixing MCAS in a way that resolves all the many issues with it, including making the AOA Disagree light operative on all MAX aircraft, we must greatly improve the procedures for dealing with uncommanded trim movement, provide pilots with more complete and detailed system information, and give pilots who fly the 737 MAX additional Level D full flight simulator training so that they will see, hear, feel, experience and understand the challenges associated with MCAS, such as Unreliable Airspeed, AOA Disagree, Runaway Stabilizer and Manual Trim. They must have the training opportunity to understand how higher airspeeds greatly increase the airloads on the stabilizer, making it much more difficult to move manually, often requiring a pilot to use two hands, or even the efforts of both pilots, to move it. And they must learn that in some cases it cannot be moved at all unless the pilot flying temporarily stops trying to raise the nose and relieves some of the airloads by moving the control wheel forward.

Pilots must develop the muscle memory to be able to quickly and effectively respond to a sudden emergency. Reading about it on an iPad is not even close to sufficient; pilots must experience it physically, firsthand.

We should all want pilots to experience these challenging situations for the first time in a simulator, not in flight with passengers and crew on board.

We must look closely at the certification process. There have been concerns about the aircraft certification process for decades. Just a brief search revealed 18 reports produced by GAO, DOT OIG, and Congressional committees since 1992.

Many questions remain, and they must be answered:

Has the Federal Aviation Administration (FAA) outsourced too much certification work?

Should the FAA select the manufacturer employees who do certification work on its behalf, rather than leaving that choice to the employer, as is currently the case?

Did oversight fail to result in accountability?

Do the FAA employees and Boeing employees doing certification work have the independence they need to ensure safe designs?

Was there a failure to identify risks and their implications?

Was the analysis of failure modes and effects inadequate?

How was it that critically important information was not effectively communicated and shared with airlines and pilots?

Many other questions must be asked about the role Boeing played in these accidents:

Was there a leadership failure?
A governance failure?
An engineering failure?
A risk analysis failure?
A safety culture failure?

Whistle-blower protection must be strong and effective, and if it is not strong enough, we must strengthen it.

Key leaders and members of each safety-critical aviation organization must have subject matter expertise; in other words, they must be pilots who understand the science of safety. There should be at least one person so qualified on each corporate board of directors of each aviation company. Top project engineers of aircraft manufacturers must also be pilots.

Airlines worldwide must adhere to the highest standards of aircraft maintenance and crew training.

All the layers of safety must be in place. They are the safety net that helps keep air travelers and crews from harm.

Only by investigating, discovering, and correcting the ways in which our design, certification, training and other systems have failed us and led to these tragedies can we begin to regain the trust of our passengers, flight attendants, pilots and the American people. And, of course, in order for passengers to trust that the 737 MAX is safe to fly, pilots will have to trust that it is.

We have a moral obligation to do this.

If we don’t, if we just file the findings away on a shelf to gather dust, we will compound these tragedies. What would make the loss of lives in these accidents even more tragic is if we were to call them black swan events, unlikely to happen again, and decide not to act on what we learn from them, simply to protect the status quo.

The best way to honor the lives tragically lost is to make sure that nothing like this ever happens again.