CRM at its best: Qantas flight 32, learning from the recent past

“Let’s look at what’s working. If all we could do is build ourselves a Cessna aircraft out of the rubble that remained, we would be happy.”

Photo: ATSB Aviation Occurrence Investigation AO-2010-089 Final Report

In homage to the outstanding performance of an A380 flight crew facing the catastrophic, uncontained failure of an inboard engine on the morning of November 4th, 2010.

THE OCCURRENCE FLIGHT (1)

The aircraft departed Changi Airport, Singapore on a scheduled passenger flight to Sydney, Australia. About 4 minutes after take-off, while the aircraft was climbing through about 7,000 ft, the flight crew heard two ‘bangs’ and a number of warnings and cautions were displayed on the electronic centralised aircraft monitor (ECAM).

Initially, the ECAM displayed a message warning of turbine overheat in the No. 2 (inner left) engine. That warning was followed soon after by a multitude of other messages relating to a number of aircraft system problems. After assessing the situation and completing a number of initial response actions, the flight crew was cleared by ATC to conduct a holding pattern to the east of Changi Airport. While in the holding pattern, the flight crew worked through the procedures relevant to the messages displayed by the ECAM. During that time the flight crew was assisted by additional crew members who were on the flight deck as part of a check and training exercise. The aircraft had sustained significant impact damage from a large number of disc fragments and associated debris. The damage affected the aircraft’s structure and a number of its systems.

A large fragment of the turbine disc penetrated the left wing leading edge before passing through the front spar into the left inner fuel tank and exiting through the top skin of the wing. The fragment initiated a short duration low-intensity flash fire inside the wing fuel tank.

A separate fire occurred within the lower cowl of the No. 2 engine as a result of oil leaking into the cowl from the damaged oil supply pipe. The fire lasted for a short time and self-extinguished.

The large fragment of the turbine disc also severed wiring looms inside the wing leading edge that connected to a number of systems.

A separate disc fragment severed a wiring loom located between the lower centre fuselage and body fairing. That loom included wires that provided redundancy (back-up) for some of the systems already affected by the severing of wires in the wing leading edge. This additional damage rendered some of those systems inoperative.

The aircraft’s hydraulic and electrical distribution systems were also damaged, which affected other systems not directly impacted by the engine failure. (Excerpted from In-flight Uncontained Engine Failure, Airbus A380-842, VH-OQA. Australian Transport Safety Bureau (ATSB) Transport Safety Report, Executive Summary)

We, aviation professionals and aviation safety scholars, sometimes tend to focus, perhaps too much, on aviation accidents and incidents. They are a great source of knowledge and learning that we later try to convert into accident prevention strategies. Human limitations, and error with its multiple causes and sometimes terrible consequences, hold a peculiar fascination for us.

This time I want to do something different. This time I want to highlight the positive aspects of human performance: those unique things human beings do well, presented to us as outstanding airmanship, leadership and teamwork. All my respect to those five pilots, impeccably led by Captain Richard de Crespigny.

Next, I share the article published in AeroSafety World, with Captain de Crespigny’s first-hand testimony. All credit to the author and the publisher. The photos were added by me:

A BLACK SWAN EVENT. SAVING A CRIPPLED A380 (2)

Singapore — First came the matter of determining how much of the Airbus A380 was still functioning. Then the issue was maintaining control of the crippled aircraft flying on the edge of a stall during approach with marginal aileron control effectiveness. Finally, there was the problem of sitting over a rapidly spreading pool of jet fuel in an aircraft with white-hot brakes and an engine that refused to shut down.

The uncontained engine failure on a Qantas A380 on Nov. 4, 2010, did not precipitate a catastrophic accident, and 469 people returned safely to the ground at Singapore, said the Qantas Flight 32 captain, Richard de Crespigny, because five experienced pilots in the cockpit — three in the regular crew and two check captains — worked as a unified team with cool heads and a singleness of purpose.

In his keynote speech opening Flight Safety Foundation’s 64th International Air Safety Seminar in Singapore in November 2011, and in an extensive interview with AeroSafety World, de Crespigny detailed the accident. What follows are just a few of the significant details of this incredibly complicated situation.

The triggering failure that launched the drama was the uncontained failure, while climbing through 7,000 ft, of the airplane’s no. 2 Rolls-Royce Trent 972 three-spool turbofan, perceived in the cockpit as “two bangs, not terribly loud,” de Crespigny said. The aircraft damage caused by the heavy, high-speed engine parts leaving the nacelle created what he called “a black swan event, unforeseen, with massive consequences.”

Photos:  ATSB Aviation Occurrence Investigation AO-2010-089 Final Report

“What did we know? We knew that engine no. 2 had failed, there was a hole in the wing, fuel was leaking from the wing and we had unending checklists. What we didn’t know is that no. 2 had had a failure of the intermediate pressure turbine, engine no. 1 had also been damaged, we had 100 impacts on the leading edge, 200 impacts on the fuselage, impacts up to the tail and seven penetrations of the wing, going right through the wing and up through the top. We had lost 750 wires…. We lost 70 systems, spoilers, brakes, flight controls. … Every system in the aircraft was affected.

“Flight controls were also severely damaged. It wasn’t just the slats; we [lost] a lot of our ailerons … lost 65 percent of our roll control,” de Crespigny said. The situation was made worse, he said, because, with fuel flowing out of the left wing, the aircraft was laterally unbalanced.

“We were getting pretty close to a [cockpit work] overload situation,” working through the checklists, cancelling the alarms. “It was hard to work out a list of what had failed. It was getting [to be] too much to follow. So we inverted our logic. Like Apollo 13, instead of worrying about what failed, I said, ‘Let’s look at what’s working.’ If all we could do is build ourselves a Cessna aircraft out of the rubble that remained, we would be happy.”

Wanting to be well prepared and drop as much fuel as possible before making what would still be an overweight landing, de Crespigny entered a holding pattern. “We had seven fuel leaks coming out of multiple parts of the wing. At 50 tonnes overweight, and no [working] fuel jettisoning system, this was our jettisoning system.”

Fortunate to have the longest runway in Southeast Asia available to them, the crew still had slim margins. Taking into account the known problems — including no slats and no drooping ailerons on final — the crew computed that the aircraft could be stopped 100 m (328 ft) before the runway end.

“We briefed the approach, and then — one of the more emotional events of the crisis — we did … three control checks. We proved the aircraft safe for landing in a landing configuration. We did a rehearsal for the landing with the gear down,” using gravity to drop the gear, he said, “flaps out and at approach speed, and the aircraft proved out.”

Knowing that the fly-by-wire stick would mask the aileron movement needed to maintain attitude, de Crespigny “went to the control page to look at the percentage of effort of the flight controls we had remaining. We had normal flight controls except for the ailerons, and there we’d lost 65 percent of our roll control, lost both outer ailerons, lost one of the mids, and we were left with … one mid and the high-speed ailerons, small and inboard.

“But we also had imbalances” due to fuel issues, he said. “I was very concerned about controllability. So we did the control check, and as I rolled the aircraft up to about 10 degrees of bank, we looked at the flight controls [ECAM page] and it looked like we were using like 60 to 70 percent of the remaining ailerons just to do a very gentle turn.

“I could easily reach maximum deflection of the ailerons, and when you reach that point, the spoilers come up next. You keep getting roll control by dumping more lift, increasing your stall speed. I was really worried, [knowing I had] to be so careful to not get the spoilers coming up. I had to keep the heading and yaw as accurate as possible, so I decided to use the automatic pilot for the approach — its accelerometers sense small changes and put in tiny corrections earlier than I will.”

Photos: ATSB Aviation Occurrence Investigation AO-2010-089 Final Report

Manual thrust control can allow for unbalanced thrust, which would induce destabilizing yaw. “We had a long approach, so to get stable thrust I exactly matched [engines] one and four and locked them down, and used engine three to adjust the approach speed, using that [engine] because it is inboard and produces less yaw. So I had accurate heading control, controls were not used very much, and with only one engine used to fine tune the speed, [we maintained] minus 2 kt to plus 3 kt for the whole approach.”

Another pilot in the cockpit warned, “‘Richard, you can’t be fast.’ During approach, our air speed margin was very small. Put in 3 kt, we run off the end of the runway.”

As it turned out, he couldn’t be slow, either. “I slowed down 1 kt and we got a speed warning,” he said. “That was unexpected, absolutely. We clearly didn’t have a 17 to 18 percent stall margin. We had two speed warnings” during the approach, and “in the flare, we got a stall warning.”

“We landed 40 tonnes overweight, a relatively good landing. When we stopped, the brakes said 900 degrees C (1,650 degrees F), but it takes five minutes for heat to get to the sensor, so 900 degrees on stopping meant that those brakes were going to go well beyond 2,000 degrees C.”

However, on landing “fuel sloshed to the front” and began gushing out of the holes in the wing leading edge. “The auto-ignition point of kerosene is 220 degrees C, so we were concerned.” Happily, the Singapore crash rescue crew’s response was superb, de Crespigny said. “Firemen came in and put foam down over the fuel, over the brakes, and the temps started going down.”

Finally, though, engine no. 1 refused to shut down, further delaying evacuation. But with the threat of fire mitigated, the aircraft was evacuated before the engine was finally killed with massive amounts of fire-fighting foam.


Photo: ATSB Aviation Occurrence Investigation AO-2010-089 Final Report

REFERENCES

  1. In-flight Uncontained Engine Failure, Airbus A380-842, VH-OQA. Australian Transport Safety Bureau (ATSB) Transport Safety Report, Aviation Occurrence Investigation AO-2010-089, Final Report.
  2. Donoghue, J.A. A Black Swan Event: Saving a Crippled A380. AeroSafety World, December 2011/January 2012.
  3. Interview with Capt. Richard de Crespigny – Part 1
  4. Interview with Capt. Richard de Crespigny – Part 2

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on Facebook (Living Safely with Human Error) and Twitter. Human Factors information almost every day.

_______________________

LaMía CP2933 accident in Colombia, preliminary report


SYNOPSIS

Aircraft: AVRO 146-RJ85

Date and time: 29 November 2016, 02:58 UTC (All times in this report are UTC. Five (5) hours should be subtracted to obtain the legal time in Colombia)

Location: “Cerro Gordo”, Municipality of La Unión Antioquia – Colombia.

Coordinates: N05°58’43.56″ – W075°25’7.86″

Type of operation: Commercial Air Transport (Passengers) Charter Flight

Operator: LAMIA CORPORATION S.R.L

Persons on board: 04 Crew, 73 Passengers

FACTUAL INFORMATION

Background to the flight

The operator had been chartered to fly the Chapecoense football team and associated personnel from São Paulo, Brazil (Guarulhos Airport, SBGR) to Rionegro, Colombia (José María Córdova Airport, near Medellín). Under Brazilian regulations, charter flights may normally only be operated by an operator based in either the country of departure or arrival. The operator was based in Bolivia and was unable to get the necessary permission to operate the flight as planned. Arrangements were made instead for the passengers to be flown from São Paulo, Brazil (ICAO: SBGR) on a scheduled flight to Santa Cruz, Bolivia (ICAO: SLVR), where they boarded the charter flight to Rionegro, Colombia (ICAO: SKRG).

History of flight

The history of the flight has been compiled from a number of sources, including preliminary information from the flight data recorder (FDR), cockpit voice recorder (CVR) and recorded ATC and radar data. The ATC transcript is a translation of the original transmissions which were in Spanish.

On 28 November 2016, the aircraft departed the operator’s maintenance base at Cochabamba (ICAO: SLCB), Bolivia, at 17:19hrs and positioned to Viru Viru International Airport (ICAO: SLVR), Santa Cruz, Bolivia, landing at 17:58hrs.

After the arrival of the aircraft at Santa Cruz, it was refueled, with witness information indicating that the commander had instructed the maximum fuel load of 9,300 kg to be used.

It was reported that some of the crew had thought the aircraft would be refueling en route at Cobija Airport (ICAO: SLCO). Cobija Airport is located close to the border between Bolivia and Brazil and normally only operates during daylight hours. On 28 November 2016, it closed at 22:43hrs.

After the passengers had all arrived at Santa Cruz they boarded the aircraft, and at 22:08hrs engine start commenced. On board were the operating crew, comprising a commander, a co-pilot and two cabin crew members, and 73 passengers, including an engineer and a dispatcher from the operator, and a private pilot who occupied the flight deck jump seat.

The aircraft took off at 22:18hrs and climbed to an initial cruising flight level of FL260, levelling at 22:41hrs. It then climbed again at 22:49hrs to FL280, levelling at 22:58hrs. It then started climbing to its final cruising level of FL300 at 23:54hrs, levelling at 00:14hrs. The cruising speed was recorded as 220 kt CAS. The route flown is shown in Figure 1.

Figure 1. The route flown

During the cruise, the CVR recorded various crew conversations about the fuel state of the aircraft and they could be heard carrying out fuel calculations. At 00:42:18hrs one of the pilots could be heard to say that they would divert to Bogota (ICAO: SKBO) to refuel but at 00:52:24hrs a further conversation took place, shortly after the aircraft was transferred to Colombian ATC, with the crew deciding to continue to Rionegro (ICAO: SKRG). At 01:03:01hrs the crew began their brief for the approach to Jose Maria Cordoba Airport, Rionegro.

At 01:15:03hrs the CVR ceased recording.

The aircraft commenced descent at 02:30:30hrs, at which time it was about 75NM to the south of Rionegro. It levelled at FL250 at 02:36:40hrs, and at 02:40hrs ATC transferred the crew to Medellin Approach, who instructed them to descend to FL230 and to hold at the Rio Negro VOR (RNG).

At 02:42:12hrs the crew was instructed to continue descent to FL210. At 02:43:09hrs the crew asked to hold at the GEMLI RNAV point (Figure 2) instead, which was approved. The aircraft overflew GEMLI at 02:43:39hrs and entered the hold (Figure 3 – each complete hold has a ground track of approximately 24 nm).

At this time there were three other aircraft holding at the Rio Negro VOR, at FL190, 18,000ft and 17,000ft. There was also an aircraft diverting into Rionegro with a reported fuel leak, about to commence its final approach to Runway 01 at Rionegro (SKRG).

Figure 2. GEMLI RNAV point

Figure 3. The holding pattern at GEMLI

At 02:43:52hrs the aircraft was levelled off at FL210, the flaps lowered to FLAP18 and the speed reduced to 180 kt CAS. At 02:45:03hrs the crew informed ATC that they had entered the hold at GEMLI at FL210.

The subsequent radio communications between the crew (callsign LMI 2933) and ATC have been tabulated below.

[Radio communications transcript excerpt]

ATC then cleared LAN3020, the aircraft holding at 17,000 ft, for the approach.

[Radio communications transcript excerpt]

ATC then cancelled the approach clearance for LAN 3020.

[Radio communications transcript excerpt]

At 02:53:07 hrs the thrust levers were reduced and the aircraft commenced a descent.

At 02:53:09 hrs the airbrake was extended.

[Radio communications transcript excerpt]

At 02:53:24 hrs the gear was selected down.

[Radio communications transcript excerpt]

At 02:53:36 hrs FLAPS24 were selected and the aircraft speed began to reduce and continued to do so until the end of the FDR recording.

At 02:53:45hrs the Number 3 engine speed no longer matched the thrust lever demand and began to run down. Thirteen seconds later the same occurred on the Number 4 engine.

[Radio communications transcript excerpt]

At 02:54:36hrs the FDR recorded FLAPS33 selected.

At 02:54:47 hrs the Number 3 and Number 4 engine low oil pressure states were recorded on the FDR together with a MASTER WARNING. At the same time, over a 12 second period, the Number 1 engine N1 reduced from 39.5% to 29.0% and recovered (N1 indicates the rotation speed of the engine’s fan/low-pressure spool).

At 02:55:04hrs the Number 2 engine speed no longer matched the thrust lever demand and began to run down.

[Radio communications transcript excerpt]

At 02:55:19hrs, over a period of 10 seconds, the Number 1 engine N1 reduced again, from 38.1% to 29.9%, and recovered.

At 02:55:27hrs the Number 2 engine low oil pressure state and a MASTER WARNING were recorded on the FDR.

[Radio communications transcript excerpt]

At 02:55:41hrs the Number 1 engine began to run down. Following the loss of all engines, at 02:55:48hrs the FDR ceased recording. At this time the FDR data showed that the aircraft was at a CAS of 115 kt, a ground speed of 142 kt and a pressure altitude of 15,934 ft msl.

The aircraft was 15.5 nm to the south of the threshold of Rionegro Runway 01 and 5.4 nm south of the accident site (which was at an elevation of about 8,500 ft amsl).

The radar recording indicates that Mode C was lost at 02:55:52hrs, at which time there was only a primary radar contact for the aircraft.

[Radio communications transcript excerpt]

No further response was received from LMI 2933 despite repeated calls by ATC.

Organization of Investigation

At 03:10hrs, the Grupo de Investigación de Accidentes Aéreos (GRIAA) of the Civil Aviation Authority of Colombia was alerted to the disappearance, and subsequent location, of the AVRO RJ85 involved in the accident at “Cerro Gordo”, Municipality of La Unión – Antioquia.

In accordance with the provisions of the Colombian Aeronautical Regulations (RAC 8), a safety investigation was immediately initiated by GRIAA.

A team of 8 investigators traveled to the accident site on the morning of 29 November 2016, arriving at 11:30hrs. Access to the crash site was gained by land and by air.

The flight recorders (FDR, CVR) were found on 29 November 2016 at 17:09hrs and placed in the custody of GRIAA investigators for further preparation for readout.

Following the International Civil Aviation Organization (ICAO) Annex 13 provisions, the GRIAA made the formal Notification of the Accident to:

– The International Civil Aviation Organization – ICAO

– The Dirección General de Aeronáutica Civil – AIG (Bolivia), as State of Registration of the aircraft.

– The Air Accident Investigation Branch – AAIB (United Kingdom), as State of Manufacture of the aircraft. This allowed the assistance of technical advisors of the aircraft manufacturer.

– The National Transportation Safety Board – NTSB (United States), as State of Engine Manufacturer. This allowed the assistance of technical advisors of the engine manufacturer.

– The Centro de Investigação e Prevenção de Acidentes Aeronáuticos – CENIPA (Brazil), as the State of the nationals involved in the accident.

The Investigation was organized in different working groups in the areas of Airworthiness, Power Plants, Flight Operations, Human Factors, Survival and Air Traffic. The Accredited Representatives and Technical Advisors were divided into the formed working groups.

This preliminary report contains facts which have been determined up to the time of issue.

This information is published to inform the aviation industry and the public of the general circumstances of the accident and should be regarded as tentative and subject to alteration if additional evidence becomes available.

Injuries to persons

[Table: Injuries to persons]

 Damage to the aircraft

Destroyed.

Other Damage

Damage to surrounding vegetation.

Personnel Information

  1. Captain

Age: 36

Licence: Airline Transport Pilot – ATP

Nationality: Bolivian

Medical Certificate: 1st. Class

Last proficiency check: 03 July 2016

Total flying hours: 6,692.51 (Ref: LAMIA status documents 20 Nov 2016)

Total RJ85 flying hours: 3,417.41 (Ref: LAMIA status documents 20 Nov 2016)

  2. Co-pilot

Age: 47

Licence: Airline Transport Pilot – ATP

Nationality: Bolivian

Medical Certificate: 1st. Class

Last proficiency check: 03 July 2016

Total flying hours: 6,923.32 (Ref: LAMIA status documents 20 Nov 2016)

Total RJ85 flying hours: 1,474.29 (Ref: LAMIA status documents 20 Nov 2016)

Flight Recorders

The aircraft was fitted with a CVR and FDR, both of which were powered by the aircraft’s AC Essential electrical bus which required one or more of the engines or the APU to be running. Both recorders were recovered to the Air Accidents Investigation Branch (AAIB) in the UK for download.

1 Flight data recorder

The FDR download revealed approximately 54 hours of operation which included the accident flight. A number of flight parameters were recorded including flight control positions, autopilot and autothrust modes, aircraft position, engine fan speed (N1) and thrust lever position. Fuel flow was recorded for each engine every 64 seconds. APU operation, fuel quantity, fuel and master cautions were not recorded.

2 Cockpit voice recorder

The CVR was successfully downloaded and recorded just over two hours of operation.

Using the recorded UTC timings from radio transmissions and the FDR, the CVR was time-aligned and revealed a recording start time of 23:08:33hrs on 28 November 2016. The following two hours of recording were of the accident flight. The recording then ceased at 01:15:03hrs, at which time the aircraft was about 550 nm from Rionegro; this was one hour, 40 minutes and 45 seconds prior to the end of the FDR recording.

There was no recorded discussion about the CVR and the reason for the CVR recording ceasing early is unknown at this stage in the investigation.

Wreckage and impact information

The crash site, known as “Cerro Gordo”, belongs to the Municipality of La Unión in the Department of Antioquia, Colombia. The wreckage was disturbed during search and rescue operations following the accident. Access to the crash site was limited for several days and lifting equipment was not available.

1 Initial impact site

The initial point of impact (IPI) was identified on the south face of the hill, just below the ridge, on an approximately 310° compass heading. According to the GPS readings, an energy path extended from the IPI approximately 140 meters down the north face of the hill to the main wreckage location, on an approximate 290° compass heading.

The approximate GPS position of the initial impact site was N05°58’43.56″ – W075°25’7.86″. The largest item was the tail unit, complete with rudder and both elevators (Figures 4 and 5). The tail unit had detached from the main fuselage at the pressure bulkhead frame. The leading edges of the horizontal stabilizers and fin were in good condition with little evidence of damage. The airbrakes were close to the tail unit and remained attached to it by electrical wiring.

Figure 4. The tail unit at the initial impact site

Figure 5. The tail unit at the initial impact site

Components from the hydraulic bay and Environmental Control System (ECS) bay were identified at the initial impact site. These included hydraulic reservoirs and a heat exchanger from the air conditioning packs.

The stick push reservoir was identified at the site. The reservoir is installed in the upper section of the avionics bay, beneath the cockpit floor.

Other noteworthy items identified at the initial site included a main landing gear door, a section of inboard engine accessory gearbox (complete with starter motor), an engine hydro-mechanical unit, the rear section of the outboard fairing from the right wing and a passenger seat cover.

2 Engines

The Number 1 and Number 4 engines were found in proximity to the initial impact point, with the Number 1 engine to the left of the impact site and the Number 4 to the right.

Number 2 and Number 3 engines were found in the area of the main wreckage with Number 2 engine to the left of the area and Number 3 to the right (Figure 6). The Number 3 engine was found lying in a large uprooted tree on a slope which was considered unstable and a thorough examination of the engine was not possible.

Figure 6. Number 2 and Number 3 engines in the main wreckage area

Examination of the Number 1, 2 and 4 engines revealed no evidence of fire, uncontainment, or internal engine failure. There were varying amounts of damage to the engines and each had soil and tree debris packed between the fan blades. None of these engines showed any circumferential or spiral scoring to the fan spinner. The state of the engines examined was consistent with these engines not being under power at the time of impact.

3 Main wreckage site

The approximate GPS position of the main wreckage site was N05°58.725 – W075°25.138.

The main site was approximately 140 m from the initial impact site. Major assemblies identified included the cockpit, forward fuselage, wings, rear fuselage and the Number 2 engine. The wreckage had travelled downhill, passing through and disrupting trees (Figure 7).

Figure 7. The main wreckage site

The wings remained attached to the centre box (which forms the centre fuel tank) and were in the direction of travel but inverted. The orientation of the wings indicates that the centre fuselage rolled through 180° after the tail unit separated. The rear fuselage was upright but was facing opposite to the direction of travel. The majority of the rear pressure bulkhead remained attached to the rear fuselage. The left main landing gear was identified in close proximity to the rear fuselage. The side stay was locked, indicating that the landing gear was DOWN at the time of the accident.

The cockpit was disrupted and had been disturbed during search and rescue operations. The position of switches and levers could not, therefore, be relied upon as being in the same position as at the time of the accident.

The centre console and throttle quadrant were identified.

The airbrake lever was slightly aft of IN.

All four throttle levers and the flap selection lever had been broken off in the accident.

The remains of the throttle levers were staggered and the remaining section of the flap lever was in the 30 DEG position.

The cockpit overhead panel was identified and its orientation was indicative of the cockpit section coming to rest inverted.

Both starboard wing flap screwjacks were fully extended, indicating that the flaps were in the 33 DEG, fully extended position at the time of impact. The port wing inboard screwjack was in the fully extended position. The port wing outboard screwjack was not accessible.

The starboard aileron, complete with servo and trim tabs, remained partially attached to the wing. It was not possible to identify the position of the aileron when the accident occurred.

The left wing was very badly disrupted and it was not possible to examine the port aileron.

The rudder and both elevators were still attached to the tail unit. It was not possible to identify the position of the control surfaces when the accident occurred.

The airbrakes were found slightly open.

The nose landing gear shock absorber piston, lower torque link and both wheels were identified approximately 15 m from the initial impact point, in the direction of travel.

4 Fuel

The starboard wing fuel tank had been split open during the accident sequence. With the exception of slight fuel odour in the immediate vicinity of the fuel tanks, there was no apparent evidence of fuel anywhere on the crash site.

The refuel panel (Figure 8) had a fuel upload of 9,300 kg selected. All three fuel contents gauges within the panel indicated below zero, commensurate with the readings when electrical power is removed. The three fuel valve selection switches were in the PRE-SELECT position.

Figure 8. The refuel panel

Fire

There was no evidence of fire.

Fuel Planning Information

The operator had submitted flight information to a commercial flight planning company at 13:25hrs on 28 November 2016 in order to create a flight plan for the flight from Santa Cruz to Rionegro.

The route used to create the plan was the same as that used on the ATC flight plan submitted prior to departure. The plan gave a ground distance for the flight of 1,611 nm and a trip fuel requirement of 8,658 kg.

The only other fuel requirement submitted to create the plan was for a taxi fuel requirement of 200 kg. This gave a total fuel requirement for the flight of 8,858 kg, but with no allowance for diversion, holding or contingency fuel requirements.

The plan was created using a cruising flight level of FL300 and an aircraft takeoff weight of 32,991 kg. The plan recorded an increased trip fuel requirement of 64 kg for every additional 1,000 kg above this weight.

Other flight plans were found in the aircraft after the accident covering different routes.

These included a series of three plans created on 26 November 2016 covering separate flights from São Paulo to Santa Cruz, Santa Cruz to Cobija and Cobija to Rionegro. The Cobija to Rionegro plan had used Bogota as a diversion and included a diversion fuel requirement of 837 kg and a 30-minute holding fuel requirement of 800 kg.
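
To put the planning figures above in perspective, the arithmetic they imply can be laid out explicitly. The short sketch below uses only the quantities quoted in this preliminary report; the variable names and the comparison are mine and are not part of the report itself.

```python
# Fuel quantities quoted in the preliminary report (all values in kg)
trip_fuel = 8_658        # computed trip fuel, Santa Cruz to Rionegro
taxi_fuel = 200          # the only other requirement submitted for the plan
total_required = trip_fuel + taxi_fuel   # 8,858 kg, with no diversion, holding or contingency fuel

fuel_uploaded = 9_300    # maximum fuel load reportedly ordered by the commander
margin = fuel_uploaded - total_required  # fuel above the zero-reserve requirement

# Allowances included in the earlier (26 November) Cobija-to-Rionegro plan, for comparison
diversion_fuel = 837     # Bogota as the diversion airport
holding_fuel = 800       # 30 minutes of holding

print(f"Margin over the zero-reserve requirement: {margin} kg")                        # 442 kg
print(f"Diversion plus holding allowances alone: {diversion_fuel + holding_fuel} kg")  # 1,637 kg
```

On these figures, the fuel loaded exceeded the zero-reserve requirement by only 442 kg, well below the 1,637 kg that the earlier plan had set aside for diversion and holding alone.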

Estimated Load-sheet

The actual load-sheet was not located at the accident site, nor has a copy been located elsewhere. In order to estimate the take-off weight for the flight, the following information was used.

[Estimated load-sheet table]

It is considered likely that the actual weight of baggage onboard the aircraft at the time of the accident was higher than the weight of the baggage recovered from the accident site. Baggage weight information obtained from the flight transferring passengers from São Paulo to Santa Cruz indicates that the baggage weight of those passengers transferring onto the accident flight was 1,026 kg. This would suggest a minimum estimated takeoff weight of 42,148 kg.

The maximum allowed takeoff weight for the aircraft, recorded in the aircraft Flight Manual, is 41,800 kg.
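
A similarly simple calculation, again only an illustrative sketch built from figures quoted in the report rather than part of the report itself, shows both the excess over the maximum takeoff weight and the effect of the 64 kg per 1,000 kg trip-fuel correction stated in the flight plan:

```python
# Weight figures quoted in the preliminary report (all values in kg)
plan_takeoff_weight = 32_991       # weight used to create the computer flight plan
estimated_takeoff_weight = 42_148  # minimum estimate including transferred baggage
max_takeoff_weight = 41_800        # limit recorded in the aircraft Flight Manual

overweight = estimated_takeoff_weight - max_takeoff_weight   # 348 kg above the limit

# The plan stated +64 kg of trip fuel for every 1,000 kg above the planned weight
extra_trip_fuel = (estimated_takeoff_weight - plan_takeoff_weight) / 1_000 * 64

print(f"Estimated weight above the maximum allowed: {overweight} kg")                   # 348 kg
print(f"Additional trip fuel implied by the heavier weight: {extra_trip_fuel:.0f} kg")  # ~586 kg
```

At the estimated takeoff weight, the aircraft would have been about 348 kg above the Flight Manual limit, and the planned trip fuel would have understated the real requirement by roughly a further 586 kg.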

ATC flight plan

The dispatcher accompanying the flight submitted a flight plan on 28 November 2016 at about 20:10hrs at the flight plan office at Santa Cruz Airport. The submitted flight plan gave a departure time of 22:00hrs and a cruising flight level of FL280. The flight time and endurance were both recorded on the plan as 4 hrs 22 minutes.

The flight plan office requested that the flight plan was changed and re-submitted due to the following issues with the plan:

  • The route did not include a standard instrument departure (SID) from Santa Cruz
  • There was no second alternate airport included in the plan
  • The estimated enroute time (EET) was the same as the endurance
  • The dispatcher had only signed the plan but had not printed his name

The dispatcher apparently refused to change any of the details and explained that, regarding the EET and endurance being the same, the actual flight time would be less than that shown on the plan. The flight plan office filed the flight plan at about 20:30hrs but sent a report to the DGAC regional office giving details of the incident, stating that under the regulations the office was not empowered to reject the submission.

Further actions

A team of investigators carried out a visit to the DGAC facilities in Bolivia in order to gather information about the operator and the applicable regulations. The DGAC of Bolivia and the Prosecutors in La Paz, Cochabamba and Santa Cruz contributed and provided full support to verify the operator’s documents; however, the AASANA institution did not provide any information related to the air navigation services or any interviews.

As a result of the information regarding the accident, the DGAC (Bolivia) suspended the operator’s Air Operator Certificate (AOC).

The evidence available to the investigation at the time of issue of this preliminary report has not identified a technical failure that may have caused or contributed to the accident.

The available evidence is, however, consistent with the aircraft having suffered fuel exhaustion.

The investigation into the accident continues and will concentrate on issues related to fuel planning, decision making, operational oversight, survival and organizational oversight.

GRIAA will publish a final report once the full investigation is completed.

Information updated on 22 December 2016, 20:26hrs.

GRUPO DE INVESTIGACIÓN DE ACCIDENTES – GRIAA

Unidad Administrativa Especial de Aeronáutica Civil de Colombia

REFERENCES

Excerpted from

  1. PRELIMINARY REPORT, Investigation COL-16-37-GIA. Fuel Exhaustion. Accident on 29 November 2016. Aircraft AVRO 146-RJ85, Reg. CP2933. La Unión, Antioquia – Colombia

FURTHER READING

  1. Normalization of Deviance: when non-compliance becomes the “new normal”
  2. The Organizational Influences behind the aviation accidents & incidents

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on Facebook (Living Safely with Human Error) and Twitter. Human Factors information almost every day.

_______________________

Pilot performance in emergencies: why it can be so easy, even for experts, to fail

On February 4, 2015, about 1054 Taipei Local Time, TransAsia Airways (TNA) flight GE 235, an ATR72-600 aircraft, registered B-22816, experienced a loss of control during initial climb and impacted Keelung River, three nautical miles from its departing runway 10 of Taipei’s Songshan Airport. Forty-three occupants were fatally injured, including three flight crew, one cabin crew, and 39 passengers. The remaining 13 passengers and one cabin crew sustained serious injuries. One passenger received minor injuries. The aircraft was destroyed by impact forces. The aircraft’s left wing tip collided with a taxi on an overpass before the aircraft entered the river. The taxi driver sustained serious injuries and the only taxi passenger sustained minor injuries.


Photo: Frame of dashboard camera video excerpted from the Final Report. Use of the video was authorized by TVBS

The accident was the result of many contributing factors which culminated in a stall-induced loss of control. During the initial climb after takeoff, an intermittent discontinuity in engine number 2’s auto feather unit (AFU) may have triggered the automatic take off power control system (ATPCS) sequence, which resulted in the uncommanded autofeather of the engine number 2 propeller.

Following the uncommanded autofeather of the engine number 2 propeller, the flight crew did not perform the documented abnormal and emergency procedures to identify the failure and implement the required corrective actions. This led the pilot flying (PF) to retard the power of the operative engine number 1 and ultimately shut it down.

The loss of thrust during the initial climb and inappropriate flight control inputs by the PF generated a series of stall warnings, including activation of the stick shaker and pusher. After engine number 1 was shut down, the loss of power from both engines was not detected and corrected by the crew in time to restart engine number 1. The crew did not respond to the stall warnings in a timely and effective manner. The aircraft stalled and continued to descend during the attempted engine restart. The remaining altitude and time to impact were not enough to successfully restart the engine and recover the aircraft. (See: TransAsia Airways Flight GE235 accident Final Report)

******

Being the continuation of  When the error comes from an expert: The Limits of Expertise

EFFECTS OF ACUTE STRESS ON AIRCREW PERFORMANCE

“Emergencies and other threatening situations require pilots to execute infrequently practiced procedures correctly and to use their skills and judgment to select an appropriate course of action, often under high workload, time pressure, and ambiguous indications.

The performance of even the most skilled experts can be impaired by situational stress; this applies to the skilled performance of almost all experts, from surgical teams to firefighters.

The term stress refers to the effects, and the term stressful situations to the causes, of a well-defined physiological picture: two neural/hormonal systems respond to threat with characteristic changes that prepare the body for “fight or flight,” e.g. increased heart rate and faster breathing. However, the psychological mechanisms associated with stress are less clear.

The effects of stress can be explained using the cognitive appraisal model (Lazarus and Folkman, 1984): when individuals encounter challenging situations, they orient both their cognitive and physiological resources to deal with the situation.

Physiological responses, such as increased heart rate and force, faster breathing, and restriction of peripheral blood flow, prepare the body for ‘fight or flight.’ Cognitively, the individual focuses attention on the challenging situation, mentally preparing for whatever tasks may be required. Up to this point, with the individual’s resources mobilized to deal with the challenge, the individual can manage the situation effectively and performance may actually improve.

However, if the situation becomes threatening—physically or socially—and the individual is uncertain of his or her ability to manage the threat, anxiety arises. This anxiety plays a central role in altering the individual’s cognitive processes and overall performance, and is maladaptive because it disrupts the individual’s ability to manage the threatening situation, particularly by degrading attention and working memory, both of which are crucial for managing challenging situations effectively (Eysenck, Derakshan, Santos, and Calvo, 2007).”

Stress did not necessarily directly cause accident pilots’ errors, but stressful conditions made these errors more likely to occur.

Attention and Working Memory

“Attention is the focus of one’s mind on one task or thought or stream of sensory input from a myriad of other possibilities. Basically, we can only fully attend to one stream of information at a given moment. If we must deal with multiple tasks, we are forced to switch attention back and forth among them, somewhat like a spotlight.

Working memory is a very small subset of the vast store of an individual’s long-term memory, momentarily activated so that it can be quickly accessed and manipulated. It consists of two components: the information stored and the control processes used to manipulate the information. These control processes are known as executive processes and are also involved in directing attention.

Attention and working memory are known as limited cognitive resources; their capacity for processing information is quite small compared to the vast store of information in long-term memory.

The content of working memory (that is, short-term memory store) is generated by the interaction of perceptual input with activation of a very small portion of long-term memory. Central executive processes and some involuntary processes, such as the startle reflex, control movement of the spotlight of attention over this limited store and update its content, holding task-relevant information readily available and updating that information.

The effects of anxiety on attention and working memory are consistent with the attention control theory (Eysenck, Derakshan, Santos, and Calvo, 2007).

Attention is known to be controlled by two different brain systems: a top-down system that directs attention to support the individual’s currently active goals, and a bottom-up system that draws attention to environmental stimuli, especially stimuli that are salient, abrupt, or threatening. Attention control theory posits that anxiety disrupts the balance between the two attentional systems, giving the bottom-up system more weight. Consequently, attention is less under the control of task goals and is more easily pulled away by salient or threatening stimuli. Thus, the individual is more easily distracted from task goals. However, if the threatening stimuli are central to the task’s goals, focusing might actually be improved.

Individuals under stress are less able to manage their attention effectively. They are more likely to be distracted from a crucial task by highly salient stimuli, such as an alarm, or by threatening aspects of a situation. They may process information less fully and may have difficulty switching attention among multiple tasks in a controlled fashion, and consequently, their management of the overall situation may become disjointed and chaotic.

Because anxious thoughts tend to preempt working memory’s limited storage capacity, the individual may have difficulty performing computations that would normally be easy, and may have difficulty making sense of the overall situation and updating the mental model of the situation (i.e. situation awareness). In studies of accidents, by far the most common category of errors (23%) involved inadequate comprehension, interpretation, or assessment of the ongoing situation.

To understand how stress affects the skilled performance of pilots, especially in emergencies (which by their nature involve novelty, uncertainty, and threat), one must understand the distinction between the automated performance of highly practiced tasks and the effortful performance of less familiar tasks that draws heavily on attention and working memory. If the threat produces anxiety, pilots’ performance is likely to be undermined in specific ways.

Attention and working memory are essential for tasks involving novelty, complexity, or danger. Performing tasks requiring these two limited cognitive resources is typically slow and effortful. Highly practiced skills, such as manual operation of flight controls, are less vulnerable to stress because they are largely automated and are less dependent on attention and working memory. Studies show that inadequate execution of a physical action occurred only in <5% of accidents.

However, emergencies almost always require interweaving highly practiced tasks with less familiar tasks, novel situational aspects, and uncertainty. Thus, in an emergency situation, overall demands on attention and working memory are very high at a time when these limited cognitive resources may be disrupted by anxiety; consequently, tasks such as decision-making, team performance, and communication that depend heavily on attention and working memory are likely to be impaired.”


Photo:  GE235 loss of control and initial impact sequence, excerpted from the Final Report

Decision-Making

“Decision-making under stress becomes less systematic and more hurried, and fewer alternative choices are considered when making decisions. However, in highly practiced situations experts make decisions largely by automatic recognition of the situation and retrieval of the appropriate response from the long-term memory of previous experiences. This is why pilots are required to practice responding to some emergency situations. Thus, experts such as pilots are protected, at least to some extent, from impairment by stress in very familiar situations.

For example, airline pilots are often given an engine failure during recurrent simulator training, and so pilots are typically fairly reliable in executing the appropriate response when experiencing an actual engine failure emergency in flight, even though the situation is somewhat stressful.

Unfortunately, most emergency situations are not rehearsed. Even in cases where the emergency procedures are practiced, the decisions that the pilot needs to make to respond appropriately in a particular emergency may be unique, and thus the required decision-making is not rehearsed. For example, the immediate responses to an engine fire in flight are practiced in recurrent training and are likely to be fairly reliable. But, the decisions about the next steps to take depend on where the aircraft is, fuel remaining, weather, and many other variables. Consequently, deliberate thought is required about these aspects, and such necessary deliberation may be impaired by the stress that is induced during the emergency.

The decisions made by pilots involved in accidents are often criticized. Indeed, it is easy to identify, after the fact, what the pilots could have done to avert the accidents. But, as has previously been argued, that kind of assessment suffers from hindsight bias (Dismukes, Berman, and Loukopoulos, 2007). In current studies of accident errors, relatively few examples of poor decision-making or poor choice of action have been found. Therefore, it is suspected that, at least in the case of experienced airline pilots, “poor decision-making” may be used as a catch-all category, and it is suggested that investigations would be better served by a deeper analysis of underlying cognitive factors.”

Team Performance and Communication

“Under acute stress team members search for and share less information, tend to neglect social and interpersonal cues, and often confuse their roles and responsibilities. Stress hinders team performance, including decision-making, primarily by disrupting communication and coordination. Coordination, of course, lies at the heart of effective team performance. Stress significantly reduces both the number of communication channels used and the likelihood that teammates will be provided needed information.

Poor communication and coordination can lead to downstream errors by team members. Studies found that 14% of errors in accidents involved inadequate or improper communication, 17% involved poor management of the competing task demands, and another 17% involved inadvertent omission of required actions. It is suspected that most of the errors of all types may have resulted from an underlying cause already mentioned: disruption of pilots’ executive control of attention and working memory.”

Ways to Reduce Error Vulnerability

“The design of airline operating procedures, training, and cockpit interfaces has evolved and improved steadily over decades of operational experience. However, there could be a hidden vulnerability in the design of these three crucial aspects of safety (operating procedures, training, and interfaces) when non-normal situations are encountered. There seems to be an implicit assumption by designers that experienced pilots in emergency situations will be able to perform “normally”: that is to say, pilots are assumed to process information, communicate, analyze situations, and make decisions as well as if they were sitting safely on the ground. That assumption is wrong.

Therefore, the vulnerability to error in stressful situations could be reduced by developing tools to help flight crews:

  1. Recognize, interpret, assess and comprehend the full implications of a challenging situation that may change dynamically.
  2. Keep track of where they are in a procedure or checklist.
  3. Shift attention among competing tasks without becoming locked into just one task.
  4. Identify and analyze decision options.
  5. Step back mentally from the moment-to-moment demands of the flight situation to establish a high-level (meta-cognitive) mental model that guides action.
  6. Continuously update that mental model as the situation unfolds.
  7. Maintain the cognitive flexibility to abandon a previously selected procedure or course of action that has become inappropriate for the situation.

To a large degree, these seven objectives could be supported by revising existing flight deck operating procedures, checklists, and training to reflect diminished attention control and working memory function in threatening situations. This would best be accomplished by collaboration between human factors experts and the operational community. In addition, a longer-range approach would be to support these objectives in the design of future flight deck displays and automation interfaces.

Pilots’ resilience to stressful situations could also be improved by stress exposure training. In its simplest form this training would explain the physiological and cognitive changes that occur in stressful situations, which might help pilots be less disconcerted when they experience the physiological effects and be on guard for the cognitive effects. More advanced training could be incorporated into existing Line Oriented Flight Training (LOFT), allowing pilots to examine their own performance in stressful scenarios.”

Implications for NextGen

“The NextGen environment will present flight crews with operating procedures and demands that could increase stress and the consequences of stress, especially in non-normal situations. Complexity and traffic density will increase in this environment, and thus margins for error and time to respond may decrease. Therefore, it is crucial to identify human factors challenges that may arise during implementation and to develop appropriate countermeasures.

The increased navigational precision and reduced aircraft spacing required for NextGen may sometimes reduce the time flight crews have to interpret emergency situations and to select appropriate courses of action. The complexity of choosing an appropriate course of action may also increase for crews encountering emergencies because options may be constrained while conducting NextGen operations, such as closely spaced parallel operations.

New technologies will generate new failure modes that may increase stress and cognitive demands on flight crews. Research would allow these failure modes to be characterized, well-anticipated, and thoroughly covered in training that is designed to mitigate stress effects on flight crew performance in the NextGen context. Existing alerting features on flight decks may not be adequate for NextGen procedures and failures.

As the airspace system evolves and grows more complex and crowded, the need for ways to help flight crews deal with the heavy cognitive demands of non-normal situations becomes even more important. Transition to complex new technologies poses human factors challenges, and those in NextGen are particularly critical to its successful implementation. Difficulties will be worked out as they appear, but the transition period, including learning new procedures to proficiency, is likely to be especially cognitively demanding on flight crews; thus realistic simulation research to characterize the human factors challenges and develop mitigations should be conducted before NextGen systems are fielded. After NextGen technologies are in operation, it will be important to carefully monitor operations for indicators of latent human factors problems, particularly related to the effects of stress in normal and non-normal operations.”

REFERENCES

  1. Excerpted from Dismukes, R.K., Goldsmith, T.E., & Kochan, J.A. (2015).  Effects of Acute Stress on Aircrew Performance: Literature Review and Analysis of Operational Aspects.  NASA Technical Memorandum TM-2015-218930.  Moffett Field, CA: NASA Ames Research Center.
  2. Dismukes, R.K., Kochan, J.A., & Goldsmith, T.E. (2014, July). Stress and Flightcrew Performance: Types of Errors Occurring in Airline Accidents.
  3. Aviation Safety Council, Taipei, Taiwan. Aviation Occurrence Report, 4 February 2015, TransAsia Airways Flight GE235, ATR72-212A, Loss of Control and Crashed into Keelung River Three Nautical Miles East of Songshan Airport. Report Number: ASC-AOR-16-06-001. Date: June 2016. English report released on July 1st, 2016.

FURTHER READING

  1. When the error comes from an expert: The Limits of Expertise
  2. Multitasking in Complex Operations, a real danger
  3. Shutting down the wrong engine
  4. TransAsia Airways Flight GE235 accident Final Report

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on Facebook (Living Safely with Human Error) and Twitter. Human Factors information almost every day.

_______________________

When the error comes from an expert: The Limits of Expertise

“On Aug 3rd, 2016, an Emirates Airlines Boeing 773 was performing flight EK-521 from Thiruvananthapuram (India) to Dubai (United Arab Emirates) with 282 passengers and 18 crew. As the flight neared Dubai, the crew received the automatic terminal information service (ATIS) Information Zulu, which included a windshear warning for all runways.

The aircraft was configured for landing with the flaps set to 30 and a selected approach speed of 152 knots (VREF + 5) indicated airspeed (IAS). The Aircraft was vectored for an area navigation (RNAV/GNSS) approach to runway 12L. Air traffic control cleared the flight to land, with the wind reported to be from 340 degrees at 11 knots, and to vacate the runway via taxiway Mike 9.


Emirates B773 crashed at Dubai on Aug 3rd, 2016. Photo from Malaysian Wings Forum page

During the approach, at 0836:00, with the autothrottle system in SPEED mode, as the Aircraft descended through a radio altitude (RA) of 1,100 feet, at 152 knots IAS, the wind direction started to change from a headwind component of 8 knots to a tailwind component. The autopilot was disengaged at approximately 920 feet RA and the approach continued with the autothrottle connected. As the Aircraft descended through 700 feet RA at 0836:22, and at 154 knots IAS, it was subjected to a tailwind component which gradually increased to a maximum of 16 knots.

At 0837:07, 159 knots IAS, 35 feet RA, the PF started to flare the Aircraft. The autothrottle mode transitioned to IDLE and both thrust levers were moving towards the idle position. At 0837:12, 160 knots IAS, and 5 feet RA, five seconds before touchdown, the wind direction again started to change to a headwind.

As recorded by the Aircraft flight data recorder, the weight-on-wheels sensors indicated that the right main landing gear touched down at 0837:17, approximately 1,100 meters from the runway 12L threshold at 162 knots IAS, followed three seconds later by the left main landing gear. The nose landing gear remained in the air.

At 0837:19, the Aircraft runway awareness advisory system (RAAS) aural message “LONG LANDING, LONG LANDING” was annunciated.

At 0837:23, the Aircraft became airborne in an attempt to go-around and was subjected to a headwind component until impact. At 0837:27, the flap lever was moved to the 20 position. Two seconds later the landing gear lever was selected to the UP position. Subsequently, the landing gear unlocked and began to retract.

At 0837:28, the air traffic control tower issued a clearance to continue straight ahead and climb to 4,000 feet. The clearance was read back correctly.

The Aircraft reached a maximum height of approximately 85 feet RA at 134 knots IAS, with the landing gear in transit to the retracted position. The Aircraft then began to sink back onto the runway. Both crewmembers recalled seeing the IAS decreasing and the Copilot called out “Check speed.” At 0837:35, three seconds before impact with the runway, both thrust levers were moved from the idle position to full forward. The autothrottle transitioned from IDLE to THRUST mode. Approximately one second later, a ground proximity warning system (GPWS) aural warning of “DON’T SINK, DON’T SINK” was annunciated.

One second before impact, both engines started to respond to the thrust lever movement showing an increase in related parameters.

At 0837:38, the Aircraft aft fuselage impacted the runway abeam the November 7 intersection at 125 knots, with a nose-up pitch angle of 9.5 degrees, and at a rate of descent of 900 feet per minute. This was followed by the impact of the engines on the runway. The three landing gears were still in transit to the retracted position.” (See: Going around with no thrust. Emirates B773 accident at Dubai on August 3rd, 2016, interim report)


Emirates B773 crashed at Dubai on Aug 3rd, 2016. Photo from Bureau of Aircraft Accidents Archives B3A

***********************

When the error comes from an expert: The Limits of Expertise

Being the continuation of  Multitasking in Complex Operations, a real danger

RETHINKING CREW ERROR (1)

“The vast majority of airline accidents are attributed to flight crew error. However, the great majority of commercial pilots have received strict training, are checked with regularity, operate advanced safety technology and are highly experienced. They do their job according to a flight operations manual and checklists that prescribe carefully planned procedures for almost every conceivable situation, normal or abnormal, they will encounter. How can all this expertise co-exist with the pilot error that we are told is a factor in more than half of airline accidents?”  (Darby, Rick & Setze, Patricia. Factors in Vulnerability. Aviation Safety World, May 2007, 53-54)

Why do very experienced professional pilots make errors?

“This well-known fact is widely misinterpreted, even by experts in aviation safety. Certainly, if pilots never made mistakes the accident rate would go down dramatically, but is it reasonable to expect pilots not to make mistakes? For both scientific and practical reasons, this expectation is not reasonable.”

“The accident rate for major airline operations in industrialized nations is already very low. This impressive record has been accomplished by developing very reliable systems, by thorough training, by requiring high levels of experience for captains, and by emphasizing safety. However, this accident rate can be further reduced substantially through the understanding of the underlying causes of human error and better ways of managing human error and changing how we think about the causes of error.”

“It is all too easy to say, because crew errors led to an accident, that the crew was the problem: they should have been more careful or more skilful. This “blame and punish” mentality or even the more benign “blame and train” mentality does not support safety—in fact, it undermines safety by diverting attention from the underlying causes.”

“Admittedly, in general aviation many accidents do show evidence of poor judgment or of marginal skill. This is much less common in airline operations because of the high standards that are set for this type of operation. Nonetheless, much of this discussion of airline operations also has implications for general aviation.”

“There are two common fallacies about pilot error:

  1. Fallacy 1: Error can be eliminated if pilots are sufficiently vigilant, conscientious, and proficient.

The truth is that vigilant, conscientious pilots routinely make mistakes, even in tasks at which they are highly skilled. Helmreich and his colleagues have found that on average airline crews make about two errors per flight leg and even more on challenging flights (Helmreich, Klinect, & Wilhelm, 1999; Klinect, Wilhelm, & Helmreich, 1999). And this is, if anything, an undercount because of the difficulty in observing all errors.

  2. Fallacy 2: If an accident crew made errors in tasks that pilots routinely handle without difficulty, that accident crew was in some way deficient—either they lacked skill, or had a bad attitude, or just did not try hard enough.

But the truth is that the most skilful, conscientious expert in the world can perform a procedure perfectly a hundred times in a row and then do something wrong on the 101st trial. This is true in every field of expertise—medicine, music, and mountain climbing just as much as aviation (Reason, 1990).”

“Something called “hindsight bias” must also be highlighted. After an accident, everyone knows the outcome of the flight. The thorough investigation by the authorities reveals many details about what happened leading up to the accident. Armed with this information, it is easy for everybody to say the crew should have handled things differently. But the crew in that airplane did not know the outcome. They may not have known all of the details later revealed, and they certainly did not realize how the factors were combining to create the conditions for an accident.”

“Experts do what seems reasonable, given what they know at the moment and the limits of human information processing. Errors are not de facto evidence of lack of skill or lack of conscientiousness.

In some accidents, crews may not have had access to adequate information to assess the situation and make prudent decisions on how to continue. Many bits and pieces of information may be available to the crew, who weigh the information as well as they can. But the question remains whether crews always have enough information, in time, to decide and to be absolutely certain that the decision is correct.”

“It is ironic that in some wind shear accidents the crew was faulted for continuing an approach even though an aircraft landed without mishap one minute ahead of the accident aircraft. Both crews had the same information, both made the same decision, but for one crew luck ran the wrong way. We do not like to admit that any element of luck still pertains to airline safety—and in fact, the element of chance in airline operations has been reduced enormously since the 1930s, as described by Ernest Gann in Fate is the Hunter (1984). But there are still a few accidents in which we should admit that the crew made decisions consistent with typical airline practice and still met disaster because risk cannot be completely eliminated.”

“Tension and tradeoffs between safety and mission completion are inherent in any type of real-world operation. Modern airlines have done an extraordinary job of reducing risk while maintaining a high level of performance. Nevertheless, some small degree of risk will always exist. The degree of risk that is acceptable should be a matter of explicit public discussion, which should guide policy. What we must not do is tell the public they can have zero risk and perfect performance—and then say when a rare accident occurs: “it was the crew’s fault”, neglecting to mention that the accident crew did what many other crews had done before.”

“If the investigation of an accident or incident reveals explicit evidence of deliberate misconduct the pilot obviously should be held accountable. If the investigation reveals a lack of competence the pilot obviously should not fly again unless retrained to competency. But with these rare exceptions, identifying “pilot error” as the probable cause of accidents is dangerous because it encourages the aviation community and the public to think something was wrong with the crew and that the problem is solved because the crew is dead or can be fired (or retrained in less serious cases).”

“Rather than labeling probable cause, it is more useful to identify the contributing factors including the inherent human vulnerability to characteristic forms of error, to characterize the interplay of those factors, and to suggest ways errors can be prevented from escalating into accidents. If probable cause must be retained, it would in most cases be better to blame the inherent vulnerability of conscientious experts to make errors occasionally rather than to blame crews for making errors.”

“To improve aviation safety we must stop thinking of pilot errors as the prime cause of accidents, but rather think of errors as the consequence of many factors that combine to create the conditions for accidents. It is easy in hindsight to identify ways any given accident could have been prevented, but that is of limited value because the combination of conditions leading to accidents has a large random component. The best way to reduce the accident rate is to develop ways to reduce vulnerability to error and to manage errors when they do occur.”


Emirates B773 crashed at Dubai on Aug 3rd, 2016. Aerial overview of the accident site. Photo from The Aviation Herald

ERROR SITUATIONS (2)

“The naïve view is that pilots who make an error are somehow less expert than others. That view is wrong. The pilot who makes an error – as seen in hindsight- typically does not lack skill, vigilance or conscientiousness. He or she is behaving expertly, in a situation that may involve misinformation, lack of information, ambiguity, rare weather phenomena or a range of other stressors, in a possibly unique combination.”

“No one thing “causes” accidents. Accidents are produced by the confluence of multiple events, task demands, actions taken or not taken, and environmental factors. Each accident has unique surface features and combinations of factors.”

Human cognitive processes are by their nature subject to failures of attention, memory and decision-making. At the same time, human cognition, despite all its potential vulnerability to error, is essential for safe operations.

“Computers have extremely limited capability for dealing with unexpected and novel situations, for interpreting ambiguous and sometimes conflicting information, and for making value judgments in the face of competing goals. Technology helps make up for the limitations of human brainpower, but by the same token, humans are needed to counteract the limitations of aviation technology.”

“Airline crews routinely deal with equipment displays imperfectly matched to human information-processing characteristics, respond to system failures and decide how to deal with threats ranging from unexpected weather conditions to passenger medical emergencies. Crews are able to manage the vast majority of these occasions so skillfully that what could have become a disaster is no more than a minor perturbation in the flow of high-volume operations.”

“But on the rare occasions when crews fail to manage these situations, it is detrimental to the cause of aviation safety to assume that the failure stems from a deficiency of the crews. Rather, these failures occur because crews are expected to perform tasks at which perfect reliability is not possible for either humans or machines. If we insist on thinking of accidents in terms of deficiency, that deficiency must be attributed to the overall system in which crews operate.”

“Six overlapping clusters of error situations have been described:

  • Inadvertent slips and oversights while performing highly practiced tasks under normal conditions
  • Inadvertent slips and oversights while performing highly practiced tasks under challenging conditions
  • Inadequate execution of non-normal procedures under challenging conditions
  • Inadequate response to rare situations for which pilots are not trained
  • Judgment in ambiguous situations
  • Deviation from explicit guidance or SOP

However, error is NOT just part of doing business; it must still be reduced, and to reduce it, the factors associated with it must be understood as well as possible.”

“Uncovering the causes of flight crew error is one of investigators’ biggest challenges, because human performance, including that of expert pilots, is driven by the confluence of many factors, not all of which are observable in the aftermath of an accident. Although it is often impossible to determine with certainty why accident crewmembers did what they did, it is possible to understand the types of error to which pilots are vulnerable and to identify the cognitive, task and organizational factors that shape that vulnerability”. (Carl W. Vogt, 2007, in his Foreword to the book The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. Burlington, VT: Ashgate.)

“Studies have shown the most common cross-cutting factors contributing to crew errors (3):

  • Situations requiring rapid response
  • Challenges of managing concurrent tasks
  • Equipment failure and design flaws
  • Misleading or missing cues normally present
  • Plan continuation bias
  • Stress
  • Shortcomings in training and/or guidance
  • Social/organizational issues”


Emirates B773 crashed at Dubai on Aug 3rd, 2016. Photo from Bureau of Aircraft Accidents Archives B3A

EXPERIENCED PILOTS’ ERRORS (4)

“Studies show that almost all experienced pilots, operating in the same environment in which the accident crews were operating and knowing only what the crews knew at each moment of the flight, would be vulnerable to making similar decisions and actions.”

“The skilled performance of experts is driven by the interaction of moment-to-moment task demands, availability of information and social/organizational factors with the inherent characteristics and limitations of human cognitive processes. Whether a particular crew in a given situation makes errors depends as much, or more, on this somewhat random interaction of factors as it does on the individual characteristics of the pilots.”

“The two most common themes seen in aviation accidents are Continuation Bias – a deep-rooted tendency to continue the original plan of action even when changing circumstances require a new plan – and situations that lead to Snowballing Workload – a workload that builds on itself and increases at an accelerating rate.”

Continuation bias

“Too often crew errors are attributed to complacency or intentional deviations from standard procedures, but these are labels, not explanations. To understand why experienced pilots sometimes continue ill-advised actions, it is important to understand the insidious nature of plan continuation bias, which appears to underlie what pilots call “press-on-itis”. This bias results from the interaction of three major components: social/organizational influences, the inherent characteristics and limitations of human cognitive processes, and incomplete or ambiguous information.”

“Safety is the highest priority in all commercial flight operations, but there is an inevitable trade-off between safety and the competing goals of schedule reliability and cost-effectiveness. To ensure conservative margins of safety, airlines establish written guidelines and standard procedures for most aspects of operations.”

“Yet considerable evidence exists that the norms for actual flight operations often deviate considerably from these ideals. When standard operating procedures are phrased not as requirements but as strong suggestions, which may appear to tacitly approve of bending the rules, pilots may – perhaps without realizing it – place too much importance on costs and scheduling.”

“Also, pilots may not understand why guidance should be conservative; that is, they may not recognize that the cognitive demands of recovering a plane from an unstabilized approach severely impair their ability to assess whether the approach will work out. For all these reasons many pilots, not only the few who have accidents, may deviate from procedures that the industry has set up to build extra safety into flight operations. Most of the time, the result of these deviations is a successful landing, which further reinforces the deviant norms.”

“As pilots amass experience in successfully deviating from procedures they unconsciously recalibrate their assessment of risk toward taking greater chances.”

“Another inherent and powerful cognitive bias in judgment and decision making is expectation bias- when someone expects one situation, she or he is less likely to notice cues indicating that the situation is not quite what it seems. Human beings become less sensitive to cues that reality is deviating from the mental model of the situation.”

“Expectation bias is worsened when crews are required to integrate new information that arrives piecemeal over time in incomplete, sometimes ambiguous, fragments. Human working memory has an extremely limited capacity to hold individual chunks of information, and each piece of information decays rapidly from working memory. Further, the cognitive effort required to interpret and integrate this information can reach the limits of human capacity to process information under the competing workload of flying an approach.”

Snowballing Workload

“Errors that are inconsequential in themselves have a way of increasing crews’ vulnerability to further errors and combining with happenstance events – with fatal results. Abnormal situations can produce acute stress, and acute stress narrows the field of attention (tunnel vision) and reduces working memory capacity. The combination of a high workload with many other factors, such as stress and/or fatigue, can severely undermine cognitive performance.”

“A particularly insidious manifestation of snowballing workload is that it pushes crews into a reactive, rather than proactive, stance. Overloaded crews often abandon efforts to think ahead of the situation strategically, instead simply responding to events as they occur without considering whether those responses are going to work out.”

Implications and countermeasures

“Simply labelling crew errors as “failure to follow procedures” misses the essence of the problem. All experts, no matter how conscientious and skilled, are vulnerable to inadvertent errors. The basis of this vulnerability is in the interaction of task demands, limited availability of information, sometimes conflicting organizational goals and random events with the inherent characteristics and limitations of human cognitive processes. Even actions that are not inadvertent are the consequences of the same interaction.”

“Almost all airline accidents are system accidents. Human reliability in the system can be improved if pilots, instructors, check pilots, managers and the designers of aircraft equipment and procedures understand the nature of vulnerability to error.”

“For example, monitoring and checklists are essential defenses but in snowballing workload situations, when these defenses are most needed they are most likely to be shed in favor of flying the airplane, managing systems and communicating.”

“Monitoring can be made more reliable by designing procedures that accommodate the workload and by training and checking monitoring as an essential task rather than a secondary one.”

“Checklist use can be improved by explaining the cognitive reasons that effectiveness declines with extensive repetition and showing how this can be countered by slowing the pace of execution to be more deliberate, and by pointing to or touching items being checked.”

“Inevitable variability in skilled performance must be accepted. The fact that skilled pilots normally perform a task without difficulty does not mean they should be able to perform that task without error 100 percent of the time.”
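
(Note from the blogger: a toy calculation may make this point concrete. The per-execution error probability below, one slip in a thousand executions of a highly practiced task, is purely an assumption of mine for illustration, not a figure from the studies quoted in this article.)

# Assumed per-execution error probability for a highly practiced task.
# Illustrative only; not a figure from the studies quoted in this article.
P_ERROR = 0.001  # one slip per 1,000 executions
for repetitions in (100, 500, 1000, 5000):
    # Probability of at least one error = 1 - probability of zero errors.
    p_at_least_one = 1 - (1 - P_ERROR) ** repetitions
    print(f"{repetitions:>5} repetitions -> {p_at_least_one:.0%} chance of at least one error")

Even with that very low assumed rate, at least one error over a few thousand repetitions becomes close to certain, which is why the authors put their emphasis on monitoring, checklists and error management rather than on exhortations to be more careful.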

“Plan continuation bias is powerful, although it can be countered once acknowledged. One countermeasure is to analyze situations explicitly, stating the nature of the threat, the observable indications of the threat and the initial plan for dealing with it.”

“Questions such as “What if our assumptions are wrong? How will we know? Will we know in time?” are the basis for forming realistic backup plans and implementing them in time, before snowballing workload limits the pilots’ ability to think ahead.”

“Airlines should periodically review normal and non-normal procedures looking for design features that could induce error. Examples of correctable design flaws are checklists conducted during periods of frequent interruption, critical items that are permitted to “float” in time, and actions that require the monitoring pilot to be head-down during critical periods, such as taxiing near runway intersections.”

“Operators should carefully examine whether they are unintentionally giving pilots mixed messages about competing goals, such as SOP adherence versus on-time performance and fuel costs. If a company is serious about SOP adherence, it should publish, train and check those criteria as hard-and-fast rules rather than as guidelines. Further, it is crucial to collect data about deviations from those criteria (LOSA & FOQA) and to look for organizational factors that tolerate or even encourage those deviations.”

“These are some of the ways to increase human reliability on the flight deck, making errors less likely and helping the system recover from the errors that inevitably occur. This is hard work, but it is the way to prevent accidents. In comparison, blaming flight crews for making errors is easy but ultimately ineffective.”

To be continued on Pilot performance in emergencies: why can be so easy, even for experts, to fail

REFERENCES

The previous paragraphs were excerpted from:

  1. Dismukes, R. K. (2001). Rethinking crew error: Overview of a panel discussion. In R. Jensen (Ed.), Proceedings of the 11th International Symposium on Aviation Psychology. Columbus, OH: Ohio State University.
  2. Darby, Rick & Setze, Patricia. Factors in Vulnerability, a book review of The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. Dismukes, R. K., Berman, B. A., & Loukopoulos, L. D. (2007). Burlington, VT: Ashgate. Aviation Safety World, May 2007, 53-54
  3. Dismukes, R.K., Berman, B., & Loukopoulos, L. D. (2006, April). The limits of expertise: rethinking pilot error and the causes of airline accidents. Presented at the 2006 Crew Resource Management Human Factors Conference, Denver, Colorado.
  4. Berman, B. A. & Dismukes, R. K. (2006) Pressing the approach: A NASA study of 19 recent accidents yields a new perspective on pilot error, Aviation Safety World, December 2006, 28-33.
  5. United Arab Emirates, General Civil Aviation Authority, Air Accident Investigation Sector. Accident Preliminary Report: Runway Impact During Attempted Go-Around. Dubai International Airport. 3 August 2016. Boeing 777-300 operator: Emirates. AAIS Case No: AIFN/0008/2016

FURTHER READING

  1. Pilot performance in emergencies: why can be so easy, even for experts, to fail
  2. Multitasking in Complex Operations, a real danger
  3. Speaking of going around
  4. Going around with all engines operating
  5. Normalization of Deviance: when non-compliance becomes the “new normal”
  6. The Organizational Influences behind the aviation accidents & incidents

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on facebook Living Safely with Human Error and twitter. Human Factors information almost every day

_______________________

 

Normalization of Deviance: when non-compliance becomes the “new normal”

On November 28th, 2016, a LaMía Bolivia Avro RJ-85, registration CP-2933, performing charter flight LMI-2933 from Santa Cruz (Bolivia) to Medellin (Colombia) with 68 passengers and 9 crew, crashed, killing 71 people. The flight was chartered to carry the Brazilian football team Chapecoense to play the final of the 2016 Copa Sudamericana (South American Cup). Three players, one flight attendant, one technician and one journalist survived.

It quickly became evident that the plane had suffered fuel exhaustion. The flight plan was unofficially leaked to news channels, evidencing several no-go issues, the most relevant being that the Total EET (04 HR 22 MIN) had the same value as the ENDURANCE (04 HR 22 MIN). When a Bolivian Aviation Authority officer questioned the flight’s dispatcher about the issues, the dispatcher asked her to let it pass, arguing that they would complete the flight in less time, as they had done before; eventually, the flight was authorized.


Main wreckage LaMía Bolivia Avro RJ-85, CP-2933 crashed on approach to SKRG-MDE (Photo: AP/Luis Benavides)

Being the continuation of  The Organizational Influences behind the aviation accidents & incidents

Normalization of Deviance

“Social normalization of deviance means people within the organization become so much accustomed to the deviation that they don’t consider it deviant, despite the fact that they far exceed their own rules of elementary safety” Diane Vaughan, 1996 – Challenger Accident Investigation (The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago, IL: University of Chicago Press, 1996)

Normalization of deviance is the gradual process by which, in the absence of immediate adverse consequences, the unacceptable becomes acceptable. It refers to unnoticed failures that do not cause immediate harm and that permeate everyday work, becoming routine behaviour. In other words, “The shortcut slowly but surely over time becomes the norm” Chris Sharber, 2015 (first officer and flight simulator instructor, Boeing 777 fleet, at the United Airlines Training Center in Denver)

The term can be applied as legitimately to the human factors risks in airline operations as was applied to the Challenger accident, where it was first used.

“Normalization of deviance breaks the safety culture, substituting a slippery slope of tolerating more and more errors and accepting more and more risks, always in the interest of efficiency and on-time schedules. This toxic thinking often ends with a mindset that demands evidence that these errors would destroy the vehicle, instead of demanding proof that the shuttle is safe and not being harmed. The boundaries are soon pushed to extremes without understanding where and why the original limits were established.” (Westgard JO, Westgard S. It’s Not Rocket Science: Lessons from the Columbia and Challenger Disasters. Guest Essays. Available at: http://www.westgard.com)

It is invisible and insidious, common and pernicious. People tend to ignore or misinterpret the deviations as an innocuous part of the daily job. If the deviations also save time and resources and reduce costs, they can even be encouraged by managers and supervisors. However, the more times deviations occur without apparent consequences, the more complacent the system becomes.

Normalization of deviance can lead to Groupthink (1), which can be defined as “… a mode of thinking that persons engage in when they are deeply involved in a cohesive in-group when concurrence-seeking becomes so dominant that it tends to override critical thinking or realistic appraisal of alternative courses of action.” —Irving l. Janis, 1982

“There are eight symptoms of groupthink. All of them need not be present for the process to influence decisions:

1. Illusion of Invulnerability
2. Belief in Inherent Morality of the Group
3. Collective Rationalization
4. Out-Group Stereotypes
5. Self-Censorship: for example, a no-go item became a “recommendation”
6. Illusion of Unanimity: silence is interpreted as agreement
7. Direct Pressure on Dissenters
8. Self-Appointed Mindguards: subject matter experts excluded from decision briefs and meetings

There is a natural tendency to rationalize shortcuts under pressure. The lack of bad outcomes reinforces the rightness of trusting past success instead of objectively assessing risk.”

Moreover, when the outcomes are successful, this reinforces the natural tendency people have to focus on the results and to assume that the process that led to them was correct, even when there is evidence that it wasn’t.

Near-misses (2)

With time, deviations lead to near-misses. But instead of seeing them as a cause for alarm, people tend to ignore or misinterpret them; therefore they often are not evaluated or, worst of all, are seen as a symptom of resilience. The big problem is that “if conditions change, even slightly, and luck does not intervene, the near-miss becomes an accident.”

“Accidents are initiated by the unexpected interaction of multiple small, often seemingly unimportant, human errors, deviations or violations, technological failures or bad business decisions, and are culminated by these latent conditions combining with enabling conditions.  Near misses arise from the same preconditions but in the absence of enabling conditions they produce only small failures and thus go undetected or are ignored. Multiple near misses precede (and foreshadow) every disaster and business crisis, and most of the misses are ignored or misread.”

In the LaMía accident discussed above, an enabling condition could have been an unexpected landing delay: an abnormal indication in the cockpit of another aircraft caused that aircraft to receive priority to land, while LMI-2933 was sent into a holding pattern.

“Whether an enabling condition transforms a near-miss into disaster generally depends on chance, thus it makes little sense to try to predict or control all the possible enabling conditions. Instead, companies should focus on identifying and fixing latent conditions before circumstances allow them to produce an accident.”
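
(Note from the blogger: the role of chance described above can be illustrated with a toy model. All the numbers in this Python sketch are my own assumptions, chosen only to show the mechanism: if an enabling condition is present on, say, 1 percent of occasions, a shortcut will appear to work dozens of times before it combines with one.)

import random
# Toy model, all numbers assumed: a shortcut "works" unless it happens to
# coincide with an enabling condition.
P_ENABLING = 0.01  # assumed probability that an enabling condition is present
TRIALS = 10_000    # simulated "careers" of the shortcut
random.seed(1)
runs = []
for _ in range(TRIALS):
    count = 0
    while random.random() > P_ENABLING:
        count += 1  # another apparently harmless deviation
    runs.append(count)
print(f"Average apparently harmless deviations before one combines with an "
      f"enabling condition: {sum(runs) / TRIALS:.0f}")
# With these assumptions the average is about 99: plenty of repetitions for
# the shortcut to become "the new normal" before it hurts.

The point is not the particular numbers but the mechanism: the longer the run of consequence-free deviations, the stronger the false evidence that the shortcut is safe.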

To recognize and learn from normalization of deviance and from near-misses just paying attention is not enough. “It actually runs contrary to human nature.”

“Research suggests seven strategies that can help organizations recognize near-misses and root-out the latent error behind them:

  1. Heed high pressure
  2. Learn from deviations
  3. Uncover root causes
  4. Demand accountability
  5. Consider worst-case scenarios
  6. Evaluate projects at every step
  7. Reward owning up

Two forces conspire to make learning from near misses difficult: Cognitive biases make them hard to see, and, even when they are visible, leaders tend not to grasp their significance. Thus, organizations often fail to expose and correct latent errors even when the cost of doing so is small—and so they miss opportunities for organizational improvement before disaster strikes. This tendency is itself a type of organizational failure—a failure to learn from “cheap” data. Surfacing near misses and correcting root causes is one of the soundest investments an organization can make.”

Intentional Noncompliance (3)

“Flight crews engage in intentional noncompliance — and sometimes self-justify this behaviour — out of a variety of motivations. “Maybe it’s a bad SOP. Maybe there are competing priorities. Maybe it just doesn’t work. It’s not functional. … It’s not that important. It doesn’t really matter. I might [take a] shortcut just because I’m trying to save time,” he said. “[Or pilots rationalize], ‘I just don’t like it. I like the way we did it before. I’ve got a better way of doing things. I think this is a bad idea. I’m just not going to do it.” These occur with a perceived lack of consequences. The LOSA Collaborative’s latest data analysis suggests that acts of intentional noncompliance occur on between 40 and 60 percent of flights, or about half, on average.”

The categories and subcategories for both types of noncompliance with SOPs can be summarised as follows:

Image from United Airlines SOP Noncompliance concept
  1. Compliant: Unintentional errors
    • Slips
    • Mistakes
  2. Risky: Intentional act. Risk is underestimated or believed justified
    • Omission
    • Violation
  3. Reckless: Intentional disregard of significant risk
    • Gross negligence
    • Criminal act

Procedural Intentional Non-Compliance-PINC (4)

“Procedural Intentional Non-Compliance-PINC is one of the most frequent contributors to aviation accidents.

“PINCs are often the result of well-meaning pilots trying to do their job but willfully taking risks to achieve what should be the secondary goal, “completing the mission”… However, when your efforts to get there include fudging the rules, you do raise risk.

PINCs raise risks, and there are a lot of PINCs happening every day. But if you are in a position to do so, you can take a straightforward series of steps that are critical to preventing PINCs in your organization: (1) gain commitment, (2) budget and develop the resources, and (3) ensure performance management.

Gain Commitment

…Everyone learns early in life about the two sets of rules to live by: the formal rules, written or stated, and the real rules – those the game is actually played by. When there is a significant difference between the two, the real rules become the standard. The solution is to establish and maintain a universal commitment to the formal rules – that is, flight operations manuals, procedures, etc. That emphasis must start at the very top of the organization.

If the Chief Executive Officer (CEO) of an organization is truly committed to safety, the safety program is set up to succeed. A safety-committed CEO is the chief enforcement officer. Anything less leaves the door open for informal rules and resultant PINCs.

Commitment from top management makes it possible to expect appropriate behaviour from all those involved in the operation.

No PINCs are permitted. Period. With that understanding as a start point, it becomes the manager’s responsibility to get the necessary resources into play.

Budget and develop resources

Aviation professionals tend to be highly service-oriented. They naturally push themselves and their equipment to get the job done, so it is critically important that their leaders and managers give them the appropriate resources. If they don’t have the appropriate resources, they will stretch the ones they have. The results of these heroic efforts populate accident investigation files.

The most important resources are enough people, time and equipment. So are the guidelines for using them: effective policies, standards and procedures. Those are critical in ensuring the quality and continuity of organizational and individual performance and the avoidance of PINCs.

Some aviation managers say vague policies and procedures create the flexibility they need to get the job done. Wrong! That approach sends a loud and clear message: safety is a variable, service is an absolute. That sets the stage for people to push. Lives are lost and hills are littered with aircraft wreckage as a result of crews pushing. Weak policies and procedures send the wrong message.

On the other hand, Standard Operating Procedures (SOPs) also must establish clear guidelines for the use of judgment in a way that continues to assure safety while being flexible enough to adjust to unique service needs. Some aviation managers make a case for absolute SOPs that have no wiggle room for judgment. They are the enforcers, unwilling to take responsibility for using common sense. Overly rigid guidelines prevent the use of judgment and decision making to get the job done safely.

If people are expected to make informed and collaborative decisions that are biased to the safe side, it is critical to have a comprehensive set of operational policies, standards, and procedures. Once those are in place, it’s up to the team to perform… top to bottom.

Performance Management

Since safety starts at the top, operational managers must not only be the champions of proper performance, they must be the models. “Do as I say, not as I do” is not an option.

They must always catch people doing things right, praise routinely, and publicly praise people for taking the time and care to follow and implement proper procedures. By doing this they create a culture of co-responsibility. Co-responsibility is basic to effective crew resource management. Each member is co-responsible for the rest of the team’s performance. This applies to ground and scheduling operations too.

From a managerial perspective, each PINC deserves unique attention and action. There are a few things to consider:

  • A PINC is a deliberate violation of an established policy, standard or practice
  • A PINC often raises risk
  • A PINC perpetrator is likely to commit future PINCs
  • If other members of the organization are aware of a PINC event and they see no negative consequences, they may correctly assume management doesn’t take SOPs seriously

Therefore, contrary to the old axiom “praise publicly and punish privately”, the consequences of PINCs should be emphasized and the floggings should be public. Not only does this approach provide positive public reinforcement of proper behaviors, it also motivates people to avoid such public embarrassment. (From the blogger: please note and remember, the author is talking about violations and intentional noncompliance, not talking about error)

The University of Texas found that crews who intentionally deviate from standard operating procedures are almost twice as likely to commit additional errors with consequential results. PINCs are a disease. Unchecked they will infect an entire operation. That infection can have extreme consequences. Sadly, the price of PINCs is paid by innocent people. The antidote to PINCs is discipline.”

REFERENCES

  1. The Cost of Silence: Normalization of Deviance and Groupthink. Terry Wilcutt, Hal Bell. National Aeronautics and Space Administration (NASA). Senior Management ViTS Meeting, November 3, 2014.
  2. How to Avoid Catastrophe. Catherine H. Tinsley, Robin L. Dillon, Peter M. Madsen. Harvard Business Review, April 2011.
  3. Normalization of Deviance. Wayne Rosenkrans. Flight Safety Foundation’s AeroSafety World, June 2015.
  4. Discipline as Antidote. Peter V. Agur Jr. Flight Safety Foundation’s AeroSafety World, February 2007.

FURTHER READING

  1. The Organizational Influences behind the aviation accidents & incidents
  2. LaMía CP2933 accident in Colombia, preliminary report
  3. The numerous safety deficiencies behind Helios Airways HCY 522 accident

********************

Recognizing and learning from normalization of deviance and from near-misses runs contrary to human nature. Therefore, it requires constant effort, reinforcement and supervision, a zero tolerance policy and well-defined consequences. The managerial commitment is indispensable.

But, what happens when the pilot in command is also a manager and co-owner of the airline?

In that case, the Aviation Authority MUST prove more than ever that it deserves such a title.

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on facebook Living Safely with Human Error and twitter. Human Factors information almost every day

_______________________

See and Be Seen: Your Life Depends on It. NTSB Safety Alert 045 May 2015

On November 16, 2016, the NTSB issued a Safety Alert to pilots with suggestions on what they can do to reduce their chances of being involved in a midair collision (see NTSB Issues Safety Alert to Pilots on Midair Collision Prevention, November 2016), the same subject addressed in a Safety Alert dated May 2015. Let’s review Safety Alert 045.


Photo excerpted from the Fox News video about the January 2015 midair collision of two Piper PA-18s

NTSB Safety Alert 045 May 2015

See and Be Seen: Your Life Depends on It – Maintaining Separation from Other Aircraft

The problem

  • Accidents have occurred in which pilots operating near one another did not maintain adequate visual lookout and failed to see and avoid the other aircraft.
  • While some accidents occurred in high-traffic areas (near airports), some accidents occurred in cruise flight; in the cases described below, the pilots were flying in daytime visual meteorological conditions.
  • All pilots can be vulnerable to distractions in the cockpit, and the presence of technology has introduced challenges to the see-and-avoid concept. Aviation applications on portable electronic devices (PEDs) such as cell phones, tablets, and handheld GPS units, while useful, can lead to more head-down time, limiting a pilot’s ability to see other aircraft.

Related accidents

  • In January 2015, two Piper PA-18s collided near Wasilla, Alaska, while conducting cross-country flights. The commercial pilots of each airplane sustained serious injuries. A ground witness indicated that the airplanes were converging at a 90˚ right angle and that neither airplane changed altitude or direction as they approached one another. (ANC15FA009)
  • In September 2014, a Cessna 172 and an amateur-built Searey collided near Buffalo-Lancaster Regional Airport, Lancaster, New York, while participating in a fly-in event. The commercial pilot and passenger of the Cessna 172 died, and the private pilot and passenger of the Searey were not injured. Both airplanes were traveling westbound with the Cessna behind the Searey. The Cessna was traveling about 90 knots and was gradually descending, and the Searey was traveling about 70 knots and was gradually climbing when the Cessna overtook it. (ERA14FA459)
  • In March 2012, a Cessna 172 and a Cessna 180 collided near Longmont, Colorado, about 7,200 ft mean sea level. The private pilot and instructor in the Cessna 172 died, and the pilot of the Cessna 180 sustained minor injuries. The Cessna 172 was in level flight on a north-northeast course, and the Cessna 180 was in a gradual climb on a northerly course. The pilots were not in contact with air traffic control at the time of the accident, and neither pilot maintained adequate visual lookout for the other airplane. (CEN12FA199)
  • In July 2011, a Cessna 180 and a Cessna 206 collided about 900 ft above ground level near Talkeetna, Alaska. The airline transport-rated pilot of the Cessna 206 was not injured, and the private pilot and three passengers of the Cessna 180 died. The pilots were monitoring different radio frequencies and failed to see and avoid the other airplane as each was approaching Amber Lake on the left downwind. (ANC11FA071)

What can pilots do?

  • Be vigilant and use proper techniques to methodically scan for traffic throughout your flight, not only in high-volume traffic areas.
  • Divide your attention inside and outside the aircraft and minimize distractions (including nonessential conversations, photography or sightseeing activities, and PED use) that may degrade your ability to maintain awareness of other aircraft.
  • Make your aircraft as visible as possible to other aircraft by turning on available lights, including anticollision lights, and consider using high-intensity discharge or LED lighting.
  • Clearly communicate your intentions and use standard phraseology, known distances, and obvious ground references to alert other pilots of your location.
  • Recognize that some conditions make it harder to see other aircraft, such as operating in areas where aircraft could be masked by surrounding terrain or buildings and when sun glare is present.
  • Encourage passengers to help look for traffic and, during instructional flights, ensure that one pilot is always responsible for scanning for traffic.
  • Effectively use on-board traffic advisory systems, when available, to help visually acquire and avoid other aircraft and not as a substitute for an outside visual scan.

Need more information?

The following Federal Aviation Administration (FAA) advisory circulars (ACs) can be accessed from http://www.faa.gov:

  • AC 90-48C, “Pilots’ Role in Collision Avoidance”
  • AC 90-66A, “Recommended Standard Traffic Patterns for Aeronautical Operations at Airports without Operating Control Towers”
  • AC 90-42F, “Traffic Advisory Practices at Airports without Operating Control Towers”

The website http://www.seeandavoid.org, which is funded by the FAA and the Air National Guard, provides pilots with information and education on airspace, visual identification, aircraft performance, and mutual hazards to safe flight to help eliminate midair collisions.

The FAA Aviation Safety Program publication “How to Avoid a Mid Air Collision” (P-8740-51), which describes pilot scanning techniques and offers a useful collision avoidance checklist, can be accessed from the FAA Safety Team’s web page at http://www.faasafety.gov.

This National Transportation Safety Board (NTSB) safety alert and others can be accessed from the NTSB’s Safety Alerts web page at http://www.ntsb.gov/safety/safety-alerts/Pages/default.aspx or searched from the NTSB home page at http://www.ntsb.gov.

Further reading

  1. Cessna 150M and a Lockheed Martin F-16CM midair collision. Final report
  2. Cessna 172M and Sabreliner midair collision on August 16, 2015, final report
  3. NTSB Issues Safety Alert to Pilots on Midair Collision Prevention. November 2016

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on facebook Living Safely with Human Error and twitter. Human Factors information almost every day

_______________________

NTSB Issues Safety Alert to Pilots on Midair Collision Prevention. November 2016

NTSB Safety Alert 058 November 2016

Prevent Midair Collisions: Don’t Depend on Vision Alone
Augment your reality to help separate safely

The problem

  • The “see-and-avoid” concept has long been the foundation of midair collision prevention. However, the inherent limitations of this concept, including human limitations, environmental conditions, aircraft blind spots, and operational distractions, leave even the most diligent pilot vulnerable to the threat of a midair collision with an unseen aircraft.
  • Technologies in the cockpit that display or alert of traffic conflicts, such as traffic advisory systems and automatic dependent surveillance–broadcast (ADS-B), can help pilots become aware of and maintain separation from nearby aircraft (1). Such systems can augment reality and help compensate for the limitations of visually searching for traffic. (1 To receive a complete traffic picture and benefit fully from this technology, aircraft must be equipped with both ADS-B In and ADS-B Out. Due to the design of the ADS-B system, aircraft equipped with only ADS-B In may be presented with incomplete traffic information. Although the information could be useful when operating near an ADS-B Out-equipped aircraft, in other situations, a pilot could potentially receive a traffic picture that omits the closest traffic, resulting in false security.)

Related accidents

The National Transportation Safety Board (NTSB) has investigated midair collisions in high-traffic areas near airports or practice areas, in cruise flight, in controlled and uncontrolled airspace, and in a variety of weather conditions.


Photo: Cessna Aft Fuselage and Empennage at Collision Site (Cessna 150M and a Lockheed Martin F-16CM midair collision, July 7, 2015)

  • A Cessna 172M and an NA265-60SC Sabreliner collided on August 16, 2015, while maneuvering for landing at Brown Field Municipal Airport, San Diego, California. The pilot (sole occupant) of the Cessna and all four crewmembers aboard the Sabreliner died. This accident occurred within controlled airspace with a relatively high traffic density. The controller failed to properly identify the aircraft in the pattern and to ensure that the control instructions were being followed before turning the Sabreliner into the Cessna’s path. A postaccident simulation showed that a cockpit display of traffic information in one or both of the airplanes could have provided a traffic picture that would likely have allowed the pilots to become aware of and look for the other airplane and may have prevented the accident. (WPR15MA243A/B)
  • A Cessna 150M and a Lockheed Martin F-16CM collided in midair near Moncks Corner, South Carolina, on July 7, 2015. The two occupants of the Cessna died, and the F-16 pilot ejected and landed safely using a parachute. The F-16 pilot was receiving air traffic control (ATC) services at the time of the collision. The controller failed to provide an appropriate resolution to the conflict between the F-16 and the Cessna. Because of the high closure rate involved, each pilot had a limited opportunity to see and avoid the other airplane. A postaccident simulation showed that technologies in the cockpit that display or alert of traffic conflicts might have provided both pilots with clear traffic depictions and/or aural alerts as the conflict developed and could have enabled them to develop a plan of action to avoid the collision. (ERA15MA259A/B) (Note from the blogger: a rough closure-time sketch of this limited opportunity follows this list.)
  • A Beechcraft V35B and a Piper PA-28-140 collided in midair near Warrenton, Virginia, on May 28, 2012. The pilot and flight instructor aboard the Beechcraft died, and the pilot of the Piper was seriously injured during his subsequent forced landing. At the time of the collision, the Piper pilot was in contact with ATC and was receiving services but had not been alerted to the presence of the Beechcraft. Even though the controller received a conflict alert on his radar system, he assessed that there was no conflict and did not issue an immediate safety alert. A postaccident simulation showed that technologies in the cockpit that display or alert of traffic conflicts might have provided the pilots with about 30 seconds of aural and visual alerts before the collision. (Transportation Safety Board of Canada Aviation Investigation Report A12H0001)
  • A Piper PA-32R-300 and a Eurocopter AS350BA collided over the Hudson River near Hoboken, New Jersey, on August 8, 2009. The pilot and two passengers aboard the Piper and the pilot and five passengers aboard the Eurocopter died. ATC services were being provided, but the controller was distracted, and the aircraft collided. Both aircraft were equipped with collision avoidance technologies, but the pilots made ineffective use of the technologies to maintain their awareness of the other aircraft. (NTSB Report AAR-10/05)
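
(Note from the blogger: the closure-time arithmetic behind the “limited opportunity to see and avoid” noted in the F-16/Cessna case above can be sketched as follows. The speeds and detection ranges are my own illustrative assumptions, not figures from the NTSB reports; the 12.5-second allowance for seeing, recognizing and reacting to traffic is the often-cited estimate from FAA collision-avoidance guidance.)

import math
KT_TO_MPS = 1852 / 3600        # knots to metres per second
NM_TO_M = 1852                 # nautical miles to metres
SEE_RECOGNIZE_REACT_S = 12.5   # often-cited FAA allowance to see, recognize and react
def seconds_left_to_manoeuvre(speed_a_kt, speed_b_kt, crossing_angle_deg, detection_range_nm):
    # Crude straight-line model: constant speeds, collision course.
    a = speed_a_kt * KT_TO_MPS
    b = speed_b_kt * KT_TO_MPS
    theta = math.radians(crossing_angle_deg)
    closure_mps = math.sqrt(a * a + b * b - 2 * a * b * math.cos(theta))
    time_to_impact_s = detection_range_nm * NM_TO_M / closure_mps
    return time_to_impact_s - SEE_RECOGNIZE_REACT_S
# Assumed numbers: a fast jet at 250 kt and a light aircraft at 100 kt,
# tracks crossing at 90 degrees, first spotted at 1 and then 2 nautical miles.
for detection_nm in (1.0, 2.0):
    margin = seconds_left_to_manoeuvre(250, 100, 90, detection_nm)
    print(f"Detected at {detection_nm:.0f} nm -> about {margin:+.0f} s left to manoeuvre")

With these assumed numbers, spotting the other aircraft a mile away leaves essentially no margin once recognition and reaction time are subtracted, which is precisely why the NTSB recommends augmenting vision with traffic displays and alerts.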

What can you do?

  • Educate yourself about the benefits of flying an aircraft equipped with technologies that aid in collision avoidance. Whether you are flying in congested airspace or a remote location, a cockpit display or alert of traffic information will increase your awareness of surrounding traffic.
  • Become familiar with the symbology, display controls, alerting criteria, and limitations of such technologies in your aircraft, whether the systems are portable or installed in the cockpit. High-density traffic around airports can make interpreting a traffic display challenging due to display clutter, false traffic alerts, and system limitations.
  • Use information provided by such technologies to separate your aircraft from traffic before aggressive, evasive maneuvering is required. Often, slight changes in rate of climb or descent, altitude, or direction can significantly reduce the risk of a midair collision long before the conflicting aircraft has been seen.
  • Remember that while such technologies can significantly enhance your awareness of traffic around you, unless your system is also capable of providing resolution advisories, visual acquisition of and separation from traffic is your primary means of collision avoidance (when weather conditions allow).

Interested in more information?

The following Federal Aviation Administration (FAA) resources can be accessed from http://www.faa.gov:

  • Advisory Circular (AC) 90-48D, “Pilots’ Role in Collision Avoidance,” alerts pilots of the potential hazards of midair collisions and emphasizes pilot education, operating practices, procedures, and improved scanning techniques. The AC also discusses technologies in the cockpit that display or alert of traffic conflicts.
  • The FAA’s NextGen program on ADS-B offers up-to-date requirements, coverage maps, and program information.

The website http://www.seeandavoid.org, which is funded by the FAA and the Air National Guard, aims to eliminate midair collisions by providing pilots with educational resources and other information about airspace, aircraft visual identification, aircraft performance, and flight hazards.

The NTSB’s Aviation Information Resources web page, http://www.ntsb.gov/air, provides convenient access to NTSB aviation safety products. The reports for the accidents referenced in this safety alert are accessible by NTSB accident number from the Aviation Accident Database link, and each accident’s public docket is accessible from the Accident Dockets link for the Docket Management System. This safety alert and others, such as SA-045, “See and Be Seen: Your Life Depends on It,” can be accessed from the Aviation Safety Alerts link.

Excerpted from  NTSB Safety Alert 058 November 2016

Further reading

  1. Cessna 150M and a Lockheed Martin F-16CM midair collision. Final report
  2. Cessna 172M and Sabreliner midair collision on August 16, 2015, final report
  3. See and Be Seen: Your Life Depends on It. NTSB Safety Alert 045 May 2015

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on facebook Living Safely with Human Error and twitter. Human Factors information almost every day

_______________________

Multitasking in Complex Operations, a real danger

The human ability to process more than one stream of information at a time and respond accordingly is severely limited. When an individual tries multitasking in a situation that involves novel tasks, complex decision making, monitoring, or overriding habits, it all falls apart… Cognitive Resource Management.


Photo: The charred wreckage of the Spanair flight after the crash in Madrid in 2008, which killed 154 people. From The Daily Mail Online

The Perils of Multitasking
Pilots overestimate their abilities, as well as the benefits of doing several things at once. By Loukia D. Loukopoulos, R. Key Dismukes, and Immanuel Barshi. AeroSafety World August 2009. (Note from the blogger: Just sharing! All credits to the authors and publisher.)

As we started the taxi, I called for the taxi checklist but became confused about the route and queried the first officer to help me clear up the discrepancy. We discussed the route and continued the taxi. … We were cleared for takeoff from Runway 1, but the flight attendant call chime wasn’t working. I had called for the ‘Before Takeoff’ checklist, but this was interrupted by the communications glitch. After affirming the flight attendants were ready, we verbally confirmed the ‘Before Takeoff’ checklist. On takeoff, rotation and liftoff were sluggish. At 100–150 ft, as I continued to rotate, we got the stick shaker. The first officer noticed the no-flap condition and placed the flaps to 5. … We wrote up the takeoff configuration warning horn but found the circuit breaker popped at the gate.(1)

Is this an example of recklessness? Complacency? Absent-mindedness? Complex operating conditions? Complicated operating procedures? Insufficient crew experience? Or something as subtle as multitasking?

During another flight, in February 2009, a crew rejected their takeoff from Birmingham (England) International Airport at 155 kt after finding it impossible to rotate the aircraft. The investigation revealed that “a number of distractions, combined with unusual demands imposed by the poor weather, led to a breakdown of normal procedures and also allowed a missed action [stabilizer trim set for takeoff] to go unchecked.”(2)

Are these incidents exceptions to usual practice or symptoms of widespread vulnerability? What do they say about the progress of an industry that has suffered at least three catastrophic accidents when a takeoff configuration warning system failed to alert the crew that they were attempting to take off without having set the flaps?(3),(4),(5)
In reviewing categories of accidents for 2008 — spurred in part by the fatal Aug. 20 crash of a Spanair McDonnell Douglas MD-82 during an attempted takeoff from Madrid, apparently with improperly set flaps, according to preliminary reports (6) — Flight Safety Foundation decried the “unwelcome return of the no-flaps takeoff” and concluded that “we are not making much progress in reducing the risk of these [types of loss of control] high-fatality accidents” (ASW, 02/09, p. 18). (Note from the blogger: At the time this article was written, the Spanair investigation was still ongoing. The final report by the Spanish investigation authority was released on July 26th, 2011. This blog has published an article regarding this accident:
Spanair DC-9-82 (MD82) accident at Madrid Barajas Airport, on 20 August 2008)

A quick search of the U.S. National Aeronautics and Space Administration (NASA) Aviation Safety Reporting System (ASRS) database reveals more than 50 reports of attempted no-flaps takeoffs in the last decade, as well as reports of incorrectly set trim, airspeed and heading bugs; cockpit windows not latched; and other omissions. In many of these events, the crew was saved by the proverbial bell — a takeoff configuration warning horn. That bell cannot be relied on to always work, however.

What leaves expert, conscientious pilots — and their passengers — hanging by the thread of a last line of defense, such as a warning horn or a checklist? Articles abound in the daily news about a multitasking society and the dangers inherent in our natural drive to have more than one thing going on at once.(7),(8),(9),(10) Most people know they should not talk on their cell phone while driving, although many do it anyway.(11),(12)
But what does multitasking have to do with pilots on an airline flight deck?

Complex Operations
In 2000, we embarked on a research project sponsored by NASA and the U.S. Federal Aviation Administration (FAA) to characterize the nature and demands of routine flight operations. Preliminary findings(13) raised red flags for an industry that, like many others, had unsuspectingly accepted multitasking as a normal state of affairs.

We argued that commercial and public pressures, organizational and social demands, and the increase in air traffic, mixed with a healthy dose of pilots’ overestimation of their own abilities, were creating situations that were considered routine, although they concealed appreciable risk.

Photographs: Bazuki Muhammad/Reuters. Rediff Business

Our research at the Flight Cognition Laboratory at NASA’s Ames Research Center in California is based on a combination of methodologies that through the years have included laboratory experiments, structured interviews and surveys, in-depth analyses of flight manuals, participation and observation of ground and flight training, incident and accident report analyses, and many hours of cockpit jump seat observations during passenger-carrying operations. Taking advantage of these sources of data, we systematically analyzed and contrasted cockpit operations in theory and in reality.(14) Take any carrier’s flight operations manual (FOM) and draw out the flow of activities required of each pilot from moment to moment while the aircraft is taxied from the gate to the runway for takeoff, and you will see the theoretical, “ideal” taxi phase of flight (Figure 1).

Figure 1: The “ideal” taxi phase of flight, as laid out in the flight operations manual

The crew’s activities can be traced from the moment the captain requests that the first officer obtain taxi clearance until the aircraft is lined up with the runway centerline, ready for takeoff. There are a number of procedures that pilots conduct individually, two checklists conducted by the pilots together, monitoring requirements, and pieces of information arriving from external sources. In the ideal world, everything occurs at specific, predictable moments as the taxi phase of flight unfolds.

This is the way activities are laid out in the manuals, the way cockpit tasks are taught in training, and the way pilots are expected to perform on the line. The activity-tracing exercise can be repeated for every phase of flight, and in each case the ideal perspective portrays crew activities as linear (following a prescribed sequence), predictable, and under the moment-to-moment control of the crew.

The real world is not as straightforward. Observation of flight crews, from the vantage point of the cockpit jump seat, helps us understand the full ramifications of that. During our observations, we recorded every event that caused some perturbation — or disruption — of the ideal sequence of activities of the two pilots. It did not take long to realize that the real operational world is more complex and more dynamic than represented in writing and in training.

Let’s look at the taxi phase of flight in more detail, as it often unfolds in the real operating environment.

Figure 2: The taxi phase of flight as it often unfolds in the real operating environment

The base layer (grayed out, in the background of Figure 2) is the ideal representation depicted in Figure 1. Another layer has been added, formed by some of the many disruptions that were observed from the jump seat during routine flights. Ovals contain some of the possible additional demands that are not explicitly expressed in the FOMs.

The disruptions listed in each oval carried additional task demands for attention and action. Ice or snow on the ground meant that the captain deferred calling for flaps prior to taxi to avoid contaminating the wing surfaces with slush, continued with other taxi activities, performed the taxi checklist calling for verification of the flaps setting, and remembered to set the flaps right before takeoff. Encountering a busy frequency meant that the first officer had to continue monitoring all radio calls in order to “jump in” when the frequency became available, all while monitoring the captain, maintaining situational awareness and carrying out other pre-taxi preparations.

Again, the exercise can be repeated for each phase of flight. The resulting “real” picture reveals activities that are much more fluid, convoluted and variable than in theory: Activities are dynamic and not so linear, are unpredictable, and are not fully under the control of the pilots. Pilots are routinely forced to deviate from their linear, well-practiced and habitual execution of procedures. Neither the nature nor the timing of tasks and events can be anticipated with certainty. Essential information and/or the individuals required to perform some activities are not always available when expected. Tasks often must be initiated earlier or later than planned. Pilots must continually find ways to fit more than the “ideal” activities into the allotted time.

One implication of the real picture is that there is considerably more work than the ideal perspective suggests. But it is not just a question of workload quantity. It is also a question of workload management. Responding to the multiple, concurrent demands of flight operations requires interweaving new activities with old ones, deferring or suspending some tasks while performing others, responding to unexpected interruptions and delays and unpredictable demands imposed by external agents — all while monitoring everything that is going on. This is multitasking in a pilot’s world.

Limitations
People often feel they are perfectly capable of performing several tasks simultaneously. There seems to be a popular myth that humans are good multitaskers. In reality, however, the human ability to process more than one stream of information at a time and respond accordingly is severely limited. Truly simultaneous performance is possible only when tasks are highly practiced and rehearsed extensively together. Performance in this situation becomes largely automatic, making few demands on the brain’s limited capacities for attention and working memory. But when an individual tries multitasking in a situation that involves novel tasks, complex decision making, monitoring, or overriding habits, it all falls apart.

In principle, pilots, like all people, have limited choices when called to multitask: They can interweave steps of one task with steps of other tasks, or defer one task until the other task is completed, or even purposefully omit one task. The choice and the degree to which any of these proves successful depend on the interaction of the characteristics of the tasks being performed, human information processing attributes, and the experience, skill, and goals of the individual — always within the context of prevailing standard operating procedures and operational restrictions. However, the approach people take to multitasking demands is not necessarily deliberate or well thought out.

During our observations, we spent many hours watching pilots handle routine multitasking situations, apparently without much effort or many errors — but we became increasingly uneasy with the risks they were unknowingly accepting each time they were called to react in ad hoc, inventive ways. Too many of these seemingly benign situations bore a striking resemblance to stories recounted by pilots in incident reports, or to those we read about in accident reports.(15)

For example, the crew cited in the first paragraph of this article received a stick shaker warning after rotation and realized they had inadvertently omitted setting the flaps to the takeoff position. This crew had been multitasking, attempting to concurrently address a discrepancy in their route and an inoperative call chime.

The crew in the Birmingham event rejected their takeoff, after finding it impossible to rotate the aircraft, because they had inadvertently omitted setting the stabilizer trim for takeoff. This crew was also multitasking: They had to deice the aircraft, were preoccupied by the weather conditions, were trying to meet a takeoff time constraint, and were focused on remembering (which they did) to set the flaps, which they had deferred earlier because of the slushy conditions.

The Madrid accident apparently resulted from the crew’s inadvertent omission of setting the flaps for takeoff, coupled with the failure of the takeoff configuration warning system. Was this crew also multitasking? There are indications that the crew was distracted by an overheating probe, and had to return to the gate for maintenance, receive additional fuel, and start the engines anew.

Our research has focused on key aspects of human cognition that lie at the heart of multitasking, namely remembering to perform tasks that must be deferred (prospective memory), automatic processing and switching attention between tasks. There is considerable scientific evidence that pilots, like all people, are highly vulnerable to inadvertent but potentially deadly omissions when a situation leads them to defer a task that normally is performed at a particular time and place. Deferring a task breaks up the normal sequence of habitual actions and removes environmental cues that help pilots remember what to do next. Interruptions create especially dangerous prospective memory situations — by requiring pilots to remember to resume the deferred, interrupted task — but are so commonplace that pilots may not recognize the threat.
Interruptions typically disrupt the chain of procedure execution so abruptly that pilots turn immediately to the source of interruption without noting the point where the procedure was suspended, without forming an explicit intention to resume the suspended procedure, or without creating salient cues to remind themselves to resume the interrupted task. Certain phases of flight such as taxi-out and approach are often so busy that it is extremely difficult for pilots to pause long enough to review whether they have completed deferred or interrupted tasks.

Pilots also are highly vulnerable to errors of omission when they must attempt to interweave two or more tasks — performing a few steps of a task such as flight management system (FMS) data entry, switching attention to another task such as monitoring taxi progress, back and forth. Much of the time pilots can interweave tasks without problems, but if one task becomes demanding — the FMS does not accept the input, for example — their attention is absorbed by these demands, and they forget to switch attention to other tasks. Monitoring, a crucial defense against threats and errors, often falls by the wayside when pilots must interweave it with demanding tasks. In fact, monitoring is far more difficult to maintain consistently than most pilots realize, as evidenced by studies of automation monitoring.(16),(17)

Dispelling the Myth
There is no single best technique to manage the challenges posed by multitasking in flight operations, but we have suggested various things that pilots and organizations can do.(18) First, we must dispel the myth that multitasking comes easily to humans, especially to pilots with “the right stuff.” We must help pilots recognize typical multitasking situations that create vulnerability to error even in the most routine aspects of operations.
Organizations must take a close look at the difference between the ideal perspective and the real nature of actual flight operations and adjust procedures, training and expectations accordingly.

Fortunately, both individual pilots and organizations can reduce the peril of multitasking. Pilots can treat interruptions, suspending tasks, deferring tasks or performing tasks out of normal sequence as red flags. When interrupted, they can reduce vulnerability by pausing momentarily to mentally note the point at which the procedure is interrupted and by reminding themselves to return to that place later, before addressing the interruption. When suspending or deferring tasks, they can identify when and where they intend to perform the task; create salient reminder cues, such as putting an empty coffee cup over the throttles when they have deferred setting the flaps to their takeoff position; and ask the other pilot to help remember. When forced to interweave tasks, such as monitoring and data entry, pilots can bolster their implicit intention to not stay head-down too long by explicitly noting to themselves the need to perform only a few steps of the one task before checking the status of the other task.

At the organizational level, we were greatly encouraged when one of the air carriers participating in our research, inspired by our preliminary findings, undertook a comprehensive review of all normal cockpit procedures. After months of analysis, that carrier’s review committee devised procedural modifications to reduce multitasking demands in daily operations and to help crew performance become resilient in the face of inevitable disruptions of the ideal flow of procedure execution. The revised procedures demonstrated a substantial decrease in error rates. Although the risks of multitasking have been widely underestimated by both individual pilots and flight organizations, we are confident that by taking decisive action, the industry can make substantial progress in protecting pilots from these risks and reducing the types of accidents that have been associated with them.

(Note from the blogger: To be continued in When the error comes from an expert: The Limits of Expertise and Pilot performance in emergencies: why can be so easy, even for experts, to fail.)

For more information and to download relevant presentations and publications, visit <humanfactors.arc.nasa.gov/flightcognition>.

Loukia D. Loukopoulos is a senior research associate at the U.S. National Aeronautics and Space Administration (NASA) Ames Research Center/ San Jose State University Research Foundation, and is involved in research and teaching activities through the Hellenic Air Accident and Aviation Safety Board, the Hellenic Air Force and the Hellenic Institute of Transport.

R. Key Dismukes is the chief scientist for aerospace human factors at the Human-System Integration Division at NASA Ames Research Center.

Immanuel Barshi is a research psychologist at the Human-System Integration Division at NASA Ames Research Center. Their book, The Multitasking Myth, was reviewed in ASW in April 2009, on p. 53.

Notes
1. NASA ASRS. Report no. 658970. May 2005.
2. U.K. Air Accidents Investigation Branch (AAIB). AAIB Bulletin 7/2009. <www.aaib.gov.uk/sites/aaib/publications/bulletins/july_2009/boeing_737_3l9__g_ogbe.cfm>.
3. U.S. National Transportation Safety Board (NTSB). Northwest Airlines, Inc., McDonnell Douglas DC-9-82, N312RC, Detroit Metropolitan Wayne County Airport, Romulus, Michigan, August 16, 1987. Report no. PB88-910406, NTSB/AAR-88-05.
4. NTSB. Delta Airlines, Inc., Boeing 727-232, N473DA, Dallas-Fort Worth International Airport, Texas, August 31, 1988. Report no. PB89-910406, NTSB/AAR-89-04.
5. Comisión de Investigación de Accidentes e Incidentes de Aviación Civil (CIAIAC). Preliminary Report A-32/2008. <www.fomento.es/NR/rdonlyres/C58972BCB96C-4E14-B047-71B89DD0173E/43303/PreliminaryReportA_032_2008.pdf>.
6. Some 154 people were killed and 18 were seriously injured in the crash, which destroyed the airplane. As a result of preliminary findings, the European Aviation Safety Agency issued an airworthiness directive calling for flight crews on DC-9/MD-80 series airplanes to check the takeoff warning system before starting engines for every flight. The system warns crews if flaps and slats are not correctly set.
7. Javid, F.; Varney, A. (2007). “The Grand Seduction of Multitasking.” ABC News, 20/20, Aug. 14, 2007. <http://abcnews.go.com/2020/story?id=3474058&page=1>.
8. Lohr, S. “Slow Down, Brave Multitasker, and Don’t Read This in Traffic.” The New York Times, Business Section, March 25, 2007. <www.nytimes.com/2007/03/25/business/25multi.html?ex=1332475200&en=f295711cb4a65d9b&ei=5088&partner=rssnyt&emc=rss>.
9. Wallis, C. “The Multitasking Generation.” Time, March 19, 2006. <www.time.com/time/magazine/article/0,9171,1174696-9,00.html>.
10. “Help! I’ve Lost My Focus.” Time, Jan. 10, 2006. <http://time.com/time/magazine/article/0,9171,1147199,00.html>.
11. Strayer, D.L.; Drews, F.A.; Johnston, W.A. (2003). “Cell Phone-Induced Failures of Visual Attention During Simulated Driving.” Journal of Experimental Psychology: Applied, Volume 9 (1): 23–32.
12. Redelmeier, D.A.; Tibshirani, R.J. (1997). “Association Between Cellular-Telephone Calls and Motor Vehicle Collisions.” The New England Journal of Medicine, Volume 336 (Feb. 13, 1997): 453–458.
13. Loukopoulos, L.D.; Dismukes, R.K.; Barshi, I. “Concurrent Task Demands in the Cockpit: Challenges and Vulnerabilities in Routine Flight Operations.” In R. Jensen (editor), Proceedings of the 12th International Symposium on Aviation Psychology (pp. 737–742). Dayton, Ohio, U.S.: The Wright State University. 2003.
14. Loukopoulos; Dismukes; Barshi. The Multitasking Myth: Handling Complexity in Real-World Operations. Burlington, Vermont, U.S.: Ashgate Publishing Co. 2009.
15. Dismukes, R.K.; Berman, B.; Loukopoulos, L.D. The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents. Burlington, Vermont, U.S.: Ashgate Publishing Co. 2007.
16. Sarter, Nadine B.; Mumaw, Randall J.; Wickens, Christopher D. (2007). “Pilots’ Monitoring Strategies and Performance on Automated Flight Decks: An Empirical Study Combining Behavioral and Eye-Tracking Data.” Human Factors: The Journal of the Human Factors and Ergonomics Society, Volume 49 (3): 347–357.
17. NTSB. A Review of Flightcrew-Involved Major Accidents of U.S. Air Carriers, 1978 Through 1990. Report no. PB94-917001, NTSB/SS-94/01. 1994. <http://libraryonline.erau.edu/online-full-text/ntsb/safetystudies/SS94-01.pdf>.
18. Loukopoulos; Dismukes; Barshi. 2009.

Excerpted from “The Perils of Multitasking,” by Loukia D. Loukopoulos, R. Key Dismukes, and Immanuel Barshi, Flight Safety Foundation’s AeroSafety World, August 2009.

Further reading

  1. Pilot performance in emergencies: why can be so easy, even for experts, to fail
  2. When the error comes from an expert: The Limits of Expertise
  3. Spanair DC-9-82 (MD82) accident at Madrid Barajas Airport, on 20 August 2008
  4. Why do pilots takeoff with no flaps/slats?
  5. Lessons learned from Northwest Airlines Flight 255, 30 years later
  6. NTSB. Delta Airlines, Inc., Boeing 727-232, N473DA, Dallas-Fort Worth International Airport, Texas, August 31, 1988. Report no. PB89-910406, NTSB/AAR-89-04.

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on Facebook (Living Safely with Human Error) and Twitter for Human Factors information almost every day.

_______________________

Shutting down the wrong engine

The TransAsia Airways Flight GE235 accident, in which the pilot flying, confused after an uncommanded auto-feather of the No. 2 engine, reduced power on the operative No. 1 engine, revives an old danger we thought had been overcome.

Photo (C) Michael Karch. ATR 72-212A (500)

During the investigation, therefore, the Aviation Safety Council of Taipei, Taiwan, reviewed the available literature and studies related to Propulsion System Malfunction and Inappropriate Crew Response (PSM+ICR) and published this review in its final report.

Propulsion System Malfunction and Inappropriate Crew Response (1)

1. Overview of PSM+ICR Study

Following an accident in the U.S. in December 1994 (Note from the blogger: American Eagle Flight 3379, uncontrolled collision with terrain, Morrisville, North Carolina, December 13th, 1994), the U.S. Federal Aviation Administration (FAA) requested the Aerospace Industries Association (AIA) to conduct a review of serious incidents and accidents that involved an engine failure or perceived engine failure and an ‘inappropriate’ crew response. The AIA conducted the review in association with the European Association of Aerospace Industries (AECMA) and produced its report in November 1998. (Note from the blogger: This report is included in a digest from the Flight Safety Foundation which contains multiple reports. The subject report concludes on page 193 of the linked document: Sallee, G. P. & Gibbons, D. M. (1999). Propulsion system malfunction plus inappropriate crew response (PSM+ICR). Flight Safety Digest, 18 (11–12), 1–193.)

The review examined all accidents and serious incidents worldwide which involved ‘Propulsion System Malfunction + Inappropriate Crew Response (PSM+ICR)’. Those events were defined as ‘where the pilot(s) did not appropriately handle a single benign engine or propulsion system malfunction’. Inappropriate responses included incorrect response, lack of response, or unexpected and unanticipated response. The review focused on events involving western-built commercial turbofan and turboprop aircraft in the transport category. The review conclusions included the following:

  • The rate of occurrences per airplane departure for PSM+ICR accidents had remained essentially constant for many years. Those types of accidents were still occurring despite the significant improvement in propulsion system reliability that had occurred over the preceding 20 years, suggesting that the rate of inappropriate crew response to propulsion system malfunctions had increased.
  • As of 1998, the number of accidents involving PSM+ICR was about three per year in revenue service flights, with an additional two per year associated with flight crew training involving simulated engine-out operations.
  • Although the vast majority of propulsion system malfunctions were recognized and handled appropriately, there was sufficient evidence to suggest that many pilots have difficulty identifying certain propulsion system malfunctions and reacting appropriately.
  • With specific reference to turboprop aircraft, pilots were failing to properly control the airplane after a propulsion system malfunction that should have been within their capabilities to handle.
  • The research team was unable to find any adequate training materials on the subject of modern propulsion system malfunction recognition.
  • There were no existing regulatory requirements to train pilots on propulsion system malfunction recognition.
  • While current training programs concentrated appropriately on pilot handling of engine failure (single engine loss of thrust and resulting thrust asymmetry) at the most critical point in flight, they did not address the malfunction characteristics (auditory and visual cues) most likely to result in inappropriate crew response.

Turboprop Aircraft

Of the 75 turboprop occurrences with sufficient data for analysis, about 80% involved revenue flights. PSM+ICR events in turboprop operations were occurring at 6 ± 3 events per year. About half of the accidents involving turboprop aircraft in the transport category occurred during the takeoff phase of flight. About 63% of the accidents involved a loss of control, with most of those occurring following the propulsion system malfunction during takeoff. Seventy percent of the ‘power plant malfunction during takeoff’ events led to a loss of control, either immediately or on the subsequent approach to land.

Propulsion system failures resulting in an uncommanded total power loss were the most common technical events. ‘Shut down by crew’ events included those where either a malfunction of the engine occurred and the crew shut down the engine, or where one engine malfunctioned and the other (wrong) engine was shut down. Fifty percent of the ‘shut down by crew’ events involved the crew shutting down the wrong engine, half of which occurred on training flights.

Failure Cues

The report’s occurrence data indicated that flight crews did not recognize the propulsion system malfunction from the symptoms, cues, and/or indications. The symptoms and cues were, on occasion, misdiagnosed, resulting in inappropriate action. In many of the events with inappropriate actions, the symptoms and cues were totally outside of the pilot’s operational and training experience base.

The report stated that to recognize power plant malfunctions, the entry condition symptoms and cues need to be presented during flight crew training as realistically as possible. When these symptoms and cues cannot be presented accurately, training via some other means should be considered. The need to accomplish failure recognition emerges from the analysis of accidents and incidents that were initiated by single power plant failures which should have been, but were not, recognized and responded to in an appropriate manner.

While training for engine failure or malfunction recognition varied, it often involved the pilot reacting to a single piece of data (one instrument or a single engine parameter), as opposed to assessing several data sources to gain information about the total propulsion system. Operators reported that there was little or no training given on how to identify a propulsion system failure or malfunction.

There was little data to identify which cues, other than system alerts and annunciators, the crews used or failed to use in identifying the propulsion system malfunctions. In addition, the report was unable to determine if the crews had been miscued by aircraft systems, displays, other indications, or each other where they did not recognize the power plant malfunction or which power plant was malfunctioning.

Effect of Auto-feather Systems

The influence of auto-feather systems on the outcome of the events was also examined. The ‘loss of control during takeoff’ events were specifically addressed, since this was the type of problem and flight phase for which auto-feather systems were designed to aid the pilot. In 15 of the events, auto-feather was fitted and armed (and was therefore assumed to have operated). In five of the events, an auto-feather system was not fitted, and for the remaining six, the auto-feather status is not known. Therefore, in at least 15 out of 26 events, the presence of auto-feather failed to prevent the loss of control. This suggests that whereas auto-feather is undoubtedly a benefit, control of the airplane is being lost for reasons other than excessive propeller drag.

Training Issues

In early generation jet and turboprop aircraft, flight engineers were assigned the duties of recognizing and handling propulsion system anomalies. Specific training was given to flight engineers on these duties under the requirements of CFR Part 63 – Certification: Flight Crew Members Other than Pilots, Volume 2, Appendix 13. To become a pilot, an individual progressed from flight engineer through co-pilot to pilot, so by this practice all pilots received power plant malfunction recognition training. The majority of pilots from earlier generations were likely to see several engine failures during their careers, and failures were sufficiently common to be a primary topic for discussion. It was not clear how current generation pilots learned to recognize and handle propulsion system malfunctions.

At the time of the report, pilot training and checking associated with propulsion system malfunctions concentrated on emergency checklist items which were typically limited, on most aircraft, to an engine fire, in-flight shutdown and re-light, and low oil pressure. In addition, the training and checking covered the handling task following engine failure at or close to V1. Pilots generally were not exposed in their training to the wide range of propulsion system malfunctions that can occur. No evidence was found of specific pilot training material on the subject of propulsion system malfunction recognition on modern engines.

There is a broad range of propulsion system malfunctions that can occur, each with its own associated symptoms. If the pilot community is, in general, only exposed to a very limited portion of that envelope, it is probable that many of the malfunctions that occur in service will be outside the experience of the flight crew. It was the view of the research group that, during basic pilot training and type conversion, a foundation in propulsion system malfunction recognition was necessary. This should be reinforced during recurrent training with exposure to the extremes of propulsion system malfunction; e.g., the loudest, most rapid, most subtle, etc. This, at least, should ensure that the malfunction was not outside the pilot’s experience, as was often the case.

The report also emphasized that “Although it is important to quickly identify and diagnose certain emergencies, the industry needs to effect cockpit/aircrew changes to decrease the likelihood of a too-eager crew member in shutting down the wrong engine”. In addition, the report noted that negative transfer has been seen to occur, since initial or ab-initio training was normally carried out in aircraft without auto-feather systems. Major attention was placed on the need for rapid feathering of the propeller(s) in the event of engine failure. On most modern turboprop commercial transport airplanes, which are fitted with auto-feather systems, this training can lead to over-concentration on the propeller condition at the expense of the most important task of flying the airplane.

Furthermore, both negative training and transfer were most likely to occur at times of high stress, fear, and surprise, such as may occur in the event of a propulsion system malfunction at or near the ground.

Loss of control may be due to a lack of piloting skills or it may be that preceding inappropriate actions had rendered the aircraft uncontrollable regardless of skill. The recommended solutions (even within training) would be quite different for these two general circumstances. In the first instance, it is a matter of instilling through practice the implementation of appropriate actions without even having to think about what to do in terms of control actions. In the second instance, there is a serious need for procedural practice. Workload, physical and mental, can be very high during an engine failure event.

Training Recommendations

The report made a number of recommendations to improve pilot training. With specific reference to turboprop pilot training, the report recommended:

  • Industry provides training guidelines on how to recognize and diagnose the engine problem by using all available data in order to provide the complete information state of the propulsion system.
  • Industry standardized training for asymmetric thrust.
  • Review stall recovery training for pilots during takeoff and go-around with a focus on preventing confusion during low-speed flight with an engine failure.

American Eagle Flight 3379, uncontrolled collision with terrain, Morrisville, North Carolina, December 13th, 1994. Photo from GenDisasters.com

Error types

Errors in integrating and interpreting the data produced by propulsion system malfunctions were the most prevalent and the most varied in substance of all error types across events. This might be expected given the task pilots have in propulsion system malfunction (PSM) events of having to integrate and interpret data both between or among engines and over time in order to arrive at the information that determines what is happening and where (i.e., to which component). The error data clearly indicated that additional training, both event-specific and on system interactions, is required.

Data integration

The same failure to integrate relevant data resulted in instances where the action was taken on the wrong engine. These failures to integrate data occurred both when engine indications were changing rapidly, that is more saliently, and when they were changing more slowly over time.

Erroneous assumptions

The second category of errors related to interpretation involved erroneous assumptions about the relationship between or among aircraft systems and/or the misidentification of specific cues during the integration/interpretation process. Errors related to erroneous assumptions should be amenable to reduction, if not elimination, through the types of training recommended by the workshop. Errors due to misidentification of cues need to be evaluated carefully for the potential for design solutions.

Misinterpretation of cues

A third significant category of errors leading to inappropriate crew responses under “interpret” was that of misinterpretation of the pattern of data (cues) available to the crew for understanding what was happening and where in order to take appropriate action. Errors of this type may be directly linked to failures to properly integrate cue data because of incomplete or inaccurate mental models at the system and aircraft levels, as well as misidentification of cues. A number of the events included in this subcategory involved the misinterpretation of the pattern of cues because of the similarity of cue patterns between malfunctions with very different sources.

Crew communication

A fourth error category involved the failure to obtain relevant data from crew members. The failure to integrate input from crew members into the pattern of cues was considered important for developing recommendations regarding crew coordination. It also highlighted the fact that inputs to the process of developing a complete picture of relevant cues for understanding what is happening and where can and often must come from other crew members as well as from an individual’s cue-seeking activity. This error type was different to “not attending to inputs from crew members”, which would be classified as a detection error.

System knowledge

Knowledge of system operation under non-normal conditions was inadequate or incomplete and produced erroneous or incomplete mental models of system performance under non-normal conditions. The inappropriate crew responses were based on errors produced by faulty mental models at either the system or aircraft level.

Improper strategy and/or procedure and execution errors

The selection of an inappropriate strategy or procedure featured prominently in the events and included deviations from best practice and choosing to reduce power on one or both engines below a safe operating altitude. Execution errors included errors made in the processing and/or interpretation of data or those made in the selection of the action to be taken.

2. U.S. Army ‘Wrong Engine’ Shutdown Study

The United States (U.S.) Army conducted a study to see if pilots’ reactions to single-engine emergencies in dual-engine helicopters were a systemic problem and whether the risks of such actions could be reduced. The goal was to examine errors that led pilots to shut down the wrong engine during such emergencies (‘The Wrong Engine Study’ – Wildzunas, R.M., Levine, R.R., Garner, W., and Braman, G.D. (1999). Error analysis of UH-60 single-engine emergency procedures (USAARL Report No. 99-05). Fort Rucker, AL: U.S. Army Aeromedical Research Laboratory).

The research involved the use of surveys and simulator testing. Over 70% of survey respondents believed there was the potential for shutting down the wrong engine, and 40% confirmed that they had, during actual or simulated emergency situations, confused the power control levers (PCLs). In addition, 50% of those who recounted confusion confirmed they had shut down the “good engine” or moved the good engine’s PCL. When asked what they felt had caused them to move the wrong PCL, 50% indicated that their action was based on an incorrect diagnosis of the problem. Other reasons included the design of the PCL, the design of the aircraft, use of night vision goggles (NVG), inadequate training, negative habit transfer, rushing the procedure and inadequate written procedures. When asked how to prevent pilots from selecting the wrong engine, 75% recommended training solutions and 25% engineering solutions.

The simulator testing (n=47) found that 15% of the participants reacted incorrectly to the selected engine emergency and that 25% of the erroneous reactions resulted in dual engine power loss and simulated fatalities. Analysis of reactions to the engine emergencies identified difficulties with the initial diagnosis of a problem (47%) and errors in action taken (32%). Other errors included the failure to detect system changes, failure to select a reasonable goal based on the emergency (get home versus land immediately), and failure to perform the designated procedure. The range of responses extended from immediately recognizing and correcting the error to shutting down the “good” engine, resulting in loss of the helicopter. Although malfunctions that require single-engine emergency procedures were relatively rare, the study indicated that there was a one in six likelihood that, in these types of emergencies, the crew will respond incorrectly.

The pattern of cognitive errors was very similar to the PSM+ICR error data. The functions contributing to the greatest number of errors were diagnostic (interpretation) and action (execution). The largest difference was in the major contribution of strategy/procedure errors in the PSM+ICR database, whereas there were comparatively few goal, strategy, and procedure errors in the U.S. Army simulator study. The survey data indicated that pilots felt that improper diagnosis and lack of training were major factors affecting their actions on the wrong engine. This supported the findings of the PSM+ICR report that included the need for enhanced training to improve crew performance in determining what is happening and  where.

Photo (C) David Lara. Colombia – Police – PNC-0600. Aircraft: Sikorsky UH-60L Black Hawk

3. Additional Human Factors considerations

3.1 Diagnostic skills

Diagnostic skills are recognized as having important implications for operators of complex socio-technical systems, such as aviation (Wiggins, M. W. (2015). Cues in diagnostic reasoning. In M. W. Wiggins and T. Loveday (Eds.), Diagnostic expertise in organizational environments (pp. 1-13). Aldershot, UK: Ashgate.).

The development of advanced technologies and their associated interfaces and displays has highlighted the importance of cue acquisition and utilization to accurately and efficiently determine the status of a system state before responding appropriately to that situation. Moreover, cue-based processing research has significant implications for designing diagnostic support systems, interfaces, and training (Wiggins, M. W. (2012). The role of cue utilization and adaptive interface design in the management of skilled performance in operations control. Theoretical Issues in Ergonomics Science, 15, 1-10).

In addition, miscuing (Miscuing refers to the activation of an inappropriate association in memory by a salient feature, thereby delaying or preventing the accurate recognition of an object or event) and/or poorly differentiated cues have been implicated in several major aircraft accidents, including Helios Airways Flight 522 and Air France Flight 447 (Loveday, T. (2015). Designing for diagnostic cues. In M.W. Wiggins and T. Loveday (Eds.), Diagnostic expertise in organizational environments (pp. 49-60). Aldershot, UK: Ashgate.)

It has also been argued that cue-based associations comprise the initial phase of situational awareness (O’Hare, D. (2015). Situational awareness and diagnosis.  In M. W.  Wiggins and T.  Loveday (Eds.), Diagnostic expertise in organizational environments (pp. 13-26). Aldershot, UK: Ashgate).

Furthermore, it has been demonstrated that individuals and teams with higher levels of cue utilization have superior diagnostic skills and are better equipped to respond to non-normal system states (Loveday, T., Wiggins, M. W., & Searle, B. J. (2013). Cue utilization and broad indicators of workplace expertise. Journal of Cognitive Engineering and Decision-Making, 8, 98-113).

The ‘PSM+ICR’ study identified recurring problems with a crew’s diagnosis of propulsion system malfunctions, in part, because the cues, indications, and/or symptoms associated with the malfunctions were outside of the pilot’s previous training and experience. Consistent with the U.S. Army study, that often led to confusion and inappropriate responses, including shutting down the operative engine.

3.2 Situational Awareness

Situational awareness (SA) is a state of knowledge which is achieved through various situation assessment processes (Endsley, M.R. (2004). Situation awareness: Progress and directions. In S. Banbury & S. Tremblay (Eds.), A cognitive approach to situation awareness: Theory and application (pp. 317–341). Aldershot, UK: Ashgate Publishing).

This internal model is believed to be the basis of decision-making, planning, and problem solving. Information in the world must be perceived, interpreted, analyzed for significance, and integrated with previous knowledge, to facilitate a predictive understanding of a system’s state. SA is having an accurate understanding of what is happening around you and what is likely to happen in the near future. Team SA is the degree to which every team member possesses the SA required for their responsibilities (Endsley, M. R. & Jones, W. M. (2001). A model of inter- and intrateam situation awareness: Implications for design, training and measurement. In M. McNeese, E. Salas & M. Endsley (Eds.), New trends in cooperative activities: Understanding system dynamics in complex environments. Santa Monica, CA: Human Factors and Ergonomics Society).

The three stages of SA formation have traditionally included:

  • Perception of environmental elements (important and relevant items in the environment must be perceived and recognized; this includes elements in the aircraft, such as system status and warning lights, and elements external to the aircraft, such as other aircraft and obstacles);
  • The comprehension of their meaning; and
  • The projection of their status following a change in a variable (with sufficient comprehension of the system and appropriate understanding of its behavior, an individual can predict, at least in the near term, how the system will behave. Such understanding is important for identifying appropriate actions and their consequences).

Dominguez et al. (1994) proposed that SA comprised the following four elements (Dominguez, C. (1994). Can SA be defined? In M. Vidulich, C. Dominguez, E. Vogel, & G. McMillan, Situation awareness: Papers and annotated bibliography (pp. 5-16). Wright-Patterson AFB, OH: Armstrong Laboratory.):

  • Extracting information from the environment;
  • Integrating this information with relevant internal knowledge to create a mental picture of the current situation;
  • Using this picture to direct further perceptual exploration in a continual perceptual cycle; and
  • Anticipating future events.

Many factors can induce a loss of situational awareness, and errors can occur at each level of the process (Flight Safety Foundation. (2009). Crew resource management. Operator’s guide to human factors in aviation. Alexandria, VA: Author. Also, see the Skybrary Situational Awareness article).

A loss of situational awareness could occur when there was a failure at any one of these stages resulting in the pilot and/or crew not having an accurate mental representation of the situation.

The following list shows a series of factors related to loss of situational awareness, and conditions contributing to those errors:

  • Data are not observed, either because they are difficult to observe or because the observer’s scanning is deficient due to:

- Attention narrowing

- Passive, complacent behavior

- High workload

- Distractions and interruptions

- Visual illusions

  • Confirmation bias:

- Information is misperceived. Expecting to observe something and focusing attention on that belief can cause people to see what they expect rather than what is actually happening.

  • Use of a poor or incomplete mental model due to:

- Deficient observations

- Poor knowledge/experience

  • Use of a wrong or inappropriate mental model, over-reliance on the mental model, and failing to recognize that the mental model needs to change.

Human operators may interpret the nature of the problem incorrectly, which leads to inappropriate decisions because they are solving the wrong problem (an SA error), or they may establish an accurate picture of the situation but choose an inappropriate course of action (an error of intention).

Endsley (1999) reported that perceptual issues accounted for around 80% of SA errors, while comprehension and projection issues accounted for 17% and 3% of SA errors, respectively. That the distribution of errors was skewed to perceptual issues likely reflected that errors at Levels 2 and 3 will lead to behaviors (e.g., misdirection of attentional resources) that produce Level 1 errors (Endsley, M. R. (1999). Situation awareness in aviation systems. In J. A. Wise, V. D. Hopkin, V. D., & D. J.  Garland, (Eds.), Handbook of aviation human factors (pp. 257-275). Mahwah, NJ: Lawrence Erlbaum.).

St. John and Smallman (2008) noted that SA is negatively affected by interruptions and multi-tasking (St. John, M. S., & Smallman, H. S. (2008). Staying up to speed: Four design principles for maintaining and recovering situation awareness. Journal of Cognitive Engineering and Decision Making, 2, 118-139).

One of the difficulties of maintaining SA is recovering from a reallocation of cognitive resources as tasks and responsibilities change in a dynamic environment. In many cases, interruptions and multi-tasking introduce conditions for change blindness or for problems with cue acquisition, understanding, and utilization. Change blindness is the striking failure to see large changes that normally would be noticed easily (see Simons, D. J., & Rensink, R. A. (2005). Change blindness: Past, present, and future. Trends in Cognitive Sciences, 9, 16-20).

For a pilot, situational awareness means having a mental picture of the existing inter-relationship of location, flight conditions, configuration and energy state of the aircraft as well as any other factors that could be about to affect its safety such as proximate terrain, obstructions, airspace, and weather systems. The potential consequences of inadequate situational awareness include CFIT, loss of control, airspace infringement, loss of separation, or an encounter with wake vortex turbulence.

There is a substantial amount of aviation-related situational awareness research, and much of it supports concepts for mitigating the loss of situational awareness. These include the need to be fully briefed in order to completely understand the particular task at hand. That briefing should also include a risk management or threat and error management assessment. Another important mitigation strategy is distraction management. It is important to minimize distraction; however, if a distraction has occurred during a particular task, it is important to ‘back up’ a few steps and check whether the intended sequence has been followed.

3.3 Stress

Stress can be defined as a process by which certain environmental demands evoke an appraisal process in which perceived demand exceeds resources and results in undesirable physiological, psychological, behavioral or social outcomes. This means if a person perceives that he or she is not able to cope with a stressor, it can lead to negative stress reactions. Stress can have many effects on a pilot’s performance. These include cognitive issues such as narrowed attention, decreased search activity, longer reaction time to peripheral cues and decreased vigilance, and increased errors performing operational procedures.

Stress management techniques include simulator training to develop proficiency in handling non-normal flight situations that are not encountered often and the anticipation and briefing of possible scenarios and threats that could arise during the flight even if they are unlikely to occur (e.g. engine failure). These techniques help prime a crew to respond effectively should an emergency arise.

Photo from AeroSafety World December 2012 cover story, Attention on Deck

Why did the pilots shut down the wrong engine? (4)

Besides the scientific literature reviewed by the Taiwanese investigation authority, in September 2011 Safety Science published a study that sought to demonstrate that Schema Theory (as incorporated in the Perceptual Cycle framework) offers a compelling causal account of human error. Schema Theory offers a systems perspective with a focus on human activity in context to explain why apparently erroneous actions occurred, even though they may have appeared to be appropriate at the time. This is exemplified in a case study of the pilots’ actions preceding the 1989 Kegworth accident (Lessons learned from British Midland Flight 92, Boeing B-737-400, January 8, 1989) and offers a very interesting approach to human error in aviation. (Katherine L. Plant, Neville A. Stanton, Why did the pilots shut down the wrong engine? Explaining errors in context using Schema Theory and the Perceptual Cycle Model. Transportation Research Group, School of Civil Engineering and the Environment, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom)

What is a Schema?

“For the purposes of this paper, a Schema will be considered as an organized mental pattern of thoughts or behaviors to help organize world knowledge (Neisser, 1976). The concept of Schemata is an attempt to explain how we represent aspects of the world in mental representations and use these representations to guide future behaviors. They provide instruction to our cognition and organize the mass of information we have to deal with (Chalmers, 2003). Our knowledge about everything can be considered as networks of information that become activated as we experience things and function according to Schematic principles (Mandler, 1984). It is not analytic knowledge that is required for effective decision making in a naturalistic setting of a complex socio-technical system, such as a flight deck, but instead intuition. This intuition can be in the form of metaphors or storytelling that allow the perceiver to draw parallels, make inferences and consolidate experiences. It is this area of intuition that Schematic processing will be influential.

When a person carries out a task, Schemata affect and direct how they perceive information in the world, how this information is stored and then activated to provide them with past experiences and the knowledge about the actions required for a specific task (Mandler, 1984).”

 The Kegworth Disaster (1989, UK)

“At 1845 on the 8th of January 1989 a Boeing 737-400 landed at London Heathrow Airport after completing its first shuttle from Belfast Aldergrove Airport. At 1952 the plane left Heathrow to return to Belfast with eight crew and 118 passengers. As the aircraft was climbing through 28,300 ft the outer panel of a blade in the fan of the No. 1 (left) engine detached. This gave rise to a series of compressor stalls in the No. 1 engine, which resulted in the airframe shuddering, smoke and fumes entering the cabin and flight deck and fluctuations of the No. 1 engine parameters. The crew made the decision to divert to East Midlands Airport. Believing that the No. 2 (right) engine had suffered damage, the crew throttled it back. The shuddering ceased as soon as the No. 2 engine was throttled back, which persuaded the crew that they had correctly dealt with the emergency, so they continued to shut it down. The No. 1 engine operated apparently normally after the initial period of vibration; during the subsequent descent, however, the No. 1 engine failed, causing the aircraft to strike the ground 2 nm from the runway. The ground impact occurred on the embankment of the M1 motorway. Forty-seven passengers died and 74 of the remaining 79 passengers and crew suffered serious injury. (The synopsis was adapted from the official Air Accident Investigation Branch report, AAIB, 1990.)”

Schematic analysis of the Kegworth crash

“As highlighted in the synopsis of the Kegworth accident many human errors contributed to the crash. This section accounts for the actions of the pilots in the Kegworth accident from a Schema perspective, using the five key contributory factors presented in the AAIB report as the structure (italics denote information paraphrased from the report).

Fundamental error: Shut down the wrong engine due to inappropriate diagnosis of smoke origin. The pilots believed that smoke entering the flight deck was coming forward from the passenger cabin. Their appreciation of the air conditioning system contributed to the pilots’ belief that this meant a fault with the right engine (instead of the damaged left one).

For the purpose of this paper, the Schematic representation of the air conditioning system will be dealt with in detail, though for completion it is worth noting that the AAIB report states that the Captain believed the First Officer had seen positive engine instrument indications and therefore accepted the First Officer’s assessment of the situation. The decision made by the pilots about which engine was damaged was partly based on their assumed knowledge of the air conditioning system (Individual). This is a classic example of how people rely on their Schemata and mental representations. The Captain’s appreciation of the air conditioning system was correct for other types of aircraft flown in which he had acquired substantial flying experience. This experience would have resulted in the Captain having and utilising a Schema based on his knowledge that a problem with the right engine would mean smoke in the passenger’s cabin which could blow forward onto the flight deck due to the configuration of the air conditioning system (world), resulting in the wrong engine being shut down (action). In a previous generation of aircraft (i.e. series 300 rather than series 400) this would be an entirely accurate mental representation; however, without the time and experience to develop an appropriate Schema, the Captain used a ‘default’ Schema based on previous similar situations. Reason (1990) states that errors are due to the human tendency towards the familiar, similar and expected, because people favor using Schemata that are routine to them.

Similarly, when conducting task analyses for Navy tactical decision makers, Morrison et al. (1997) found that 87% of information transactions associated with situation assessment involved feature matching strategies, i.e. matching the observed event to those previously experienced. The underlying principle of Schema Theory is the use of previous experience to develop a set of expectations, even if they subsequently turn out to be wrong, as was the case here; the air conditioning system did not work in the way the pilots expected in the 400 series. In Norman’s (1981) view of Schema triggering components, the smoke on the flight deck thought to have come forward from the passenger cabin was enough to trigger the Schema for this situation. This, therefore, led to an erroneous classification of the situation. Thus the action was intended and correct for the assumed situation (right engine damage) but not the actual situation (left engine damage). In the Perceptual Cycle Model, Schemata are anticipations and are the medium by which the past affects the future, i.e. information already acquired determines what will be picked up (Hanson and Hanson, 1996); this process is clearly evident in this example.

The error of shutting down the wrong engine seems so fundamental, but it is only with the benefit of hindsight that the authors know more than the pilots involved did at the time. The question that the ‘new view’ of error would want to ask is why the contradictions that are so easy to see now were not interpreted at the time, i.e. why did assessments and actions make sense to the pilots at the time? (Dekker, 2006). Operators are often victims of Schema fixation, preventing them from changing their representation and detecting the error.

It must also be noted that the emergency nature of the situation would have influenced the crew’s decision-making abilities. It is far more comfortable and reassuring to impose structure on a situation and deal with the known rather than an unknown problem often resulting in unnecessary haste and failure to question a decision (Berman and Dismukes, 2006).

The subsequent five contributory factors were identified in the AAIB report as the key reasons why the fundamental error of shutting down the wrong engine occurred.

Contributory Factor 1: The situation (a combination of engine vibration, noise, and smell of fire) was outside the crew’s training and experience.

The error literature describes inaccurate or incomplete Schemata as ‘buggy’ (Groeger, 1997). It can be argued that the crew in the Kegworth accident had ‘buggy’ Schemata for the situation they were in because they had not been in that situation before and therefore had not built up an accurate mental representation of what was going on and how to deal with it. The crew did, however, have extensive knowledge of the previous generation of the aircraft type, which in this case appeared to influence their decision making.

Operators in complex systems are in a state of both mindfulness and ambivalence to allow for awareness-based detection. In other words, operators are faced with partly novel situations (in this case a fan blade rupture in the left engine causing smoke and vibration outside the crew’s training and experience) and partly familiar ones (in this case the crew’s awareness, from their experience of other aircraft types, that smoke entering the flight deck from the cabin was likely to mean a problem with the right engine). This dual state of belief and doubt is, however, hard to achieve, especially in time-critical situations.

The Schematic influences acting on the pilots in the Kegworth accident can be further distinguished between the summary of the experience, in this case what the vibration, noise and smell meant to the pilots (left engine damage), and the actions that occur at a moment in time, in this case throttling back and shutting down the wrong engine. It is impossible to personally experience every eventuality in a complex system such as a flight deck; that is why training, for example case-based learning (O’Hare et al., 2010), is so critical, because it allows people to develop Schemata that can be drawn upon when required in a real-life situation. The fan blade rupture and its associated symptoms (fumes, vibration, etc.) was a rare event that was not included in training, nor had the crew any first-hand experience of it, and therefore they did not have a Schema available to deal with it adequately.

Contributory Factor 2: Premature reaction to the problem, contrary to training. The speed at which the pilots acted was contrary to their training and the Operator’s Procedures.

Human action takes place at different cognitive levels, distinguishing between conscious, attention-requiring actions and unconscious actions that do not require the presence of thought. Automatic activation is thus likely to lead to automatic action responses, hence the premature reaction to the problem. At the time, the Captain thought he had made the correct assessment of the situation and had no reason to suspect he had made any erroneous decisions. Problems with data interpretation are exacerbated when people either explain away a symptom or take immediate action to counteract it (as occurred here) and later forget to integrate data that may become available (see Contributory Factor 3).

Controlled processes (i.e. willed and guided attention) are only activated when either a task is too difficult (a novel situation) or errors are made. At this stage the pilots were not aware they were facing a novel situation, i.e. a fan blade rupture rather than the more routine and trained-for engine fire, nor were they aware that their decision would prove erroneous. Therefore, a Schematic perspective on the attention processes the pilots were engaged in would suggest they were automatic, and thus relatively instantaneous, based on their assessment of the situation, which can account for the premature reaction to the situation.

Contributory Factor 3: Lack of equipment monitoring and assimilation of instrument indications. The engine parameters appeared to be stable; even though the vibration continued to register on the Flight Data Recorder (FDR) and was felt by the passengers, it was not perceived by the pilots. Additionally, the crew did not get an indication of which engine was faulty from looking at the engine instruments. Furthermore, the engine vibration gauges that were part of the Engine Instrument System (EIS) were not included in the Captain’s visual scan, as these gauges were considered unreliable in previous models of the aircraft.

One of the defining features of Schemata is their emphasis on the role of past experience in guiding the way people perceive and act in the world. After the Kegworth accident, the Captain stated that he rarely included the engine vibration gauges in his visual scans as he believed them to be unreliable and prone to false readings; this belief was based on his experience of these instruments in other aircraft. The crew expected the engine vibration gauges to be unreliable based on their past experience; therefore, their ‘scanning Schema’, once activated, would not have included these instruments. Expectancy can override any external cues to the contrary. The active Schemata available to the Captain were not based on the current model of the aircraft, as he had only 23 hours of flying experience in it. The prevailing and enduring view of the unreliability of the engine vibration gauges was formed over more than 13,000 hours of flying experience with other aircraft types. As a result, the Captain was left with a faulty Schematic representation for the scanning process.

The engine instrument contributory factor highlights how systemic factors can play a vital role in triggering error. Although the new engine vibration displays (digital rather than mechanical pointers) were technically more reliable, 64% of British Midland Airways pilots thought the new display system was not effective at drawing attention to changes in engine parameters, and 74% preferred the old mechanical pointers (AAIB, 1990). It is likely that such views were discussed in crew rooms and that colleagues’ opinions would have been an influencing factor. Additionally, it would appear that training failed to demonstrate how the new engine vibration gauges were more accurate. Training on the new EIS was included in the one-day course that explained the differences between the Series 300 and 400 aircraft. There was no flight simulator equipped with the new EIS at the time; therefore, the first time a pilot was likely to see abnormal indications on the instruments, such as the engine vibration gauges, was in flight with a failing engine (AAIB, 1990). The crew therefore had no real experience of the EIS and its individual gauges and would not have developed a relevant Schema for it. In this case, these Schemata were not accurate for the situation.

Missing events, known as negative cues, are usually a stumbling block for novices because the experience possessed by experts allows them to form and use expectancies. When these expectancies are misleading, however, the expert may still fall foul of missing events, in this case not assimilating the information from the engine vibration gauges.

The AAIB report (1990) recommended that the Civil Aviation Authority (CAA) review its training procedures to ensure crews are provided with EIS display familiarisation in a simulator so that they acquire the necessary visual and interpretive skills, in other words, so that they develop Schemata for a range of failures and their representation on an EIS, such as the engine vibration gauges. These systemic influences would have played a part in creating the Schemata the pilots relied on and therefore influenced their actions. Short courses and user manuals for converting to the series 400 were unlikely to prevail over the combination of the Captain’s experience, flying hours and expectations in the immediate term. Reactions to the engine vibration gauges are modified by general experience; old views are liable to prevail unless the technical knowledge of pilots is effectively revised. Schemata reside in long-term memory, and modifying them is only likely to occur over relatively long time scales rather than the shorter time scales associated with dynamic task performance.

When feedback is discrepant with an operator’s expectations, extensive revision of the mental representation and diagnostic actions are required, which can often be ill-afforded in time-critical situations. Therefore, in the Kegworth accident, had the readings on the engine vibration gauges been noted, a laborious diagnosis would have been required to determine whether those readings were accurate. The lack of visual scanning of the vibration gauges led to a ‘description error’, meaning the relevant information needed to form the appropriate intention was not available. An appropriate intention was formed, but it was based on an insufficient description, contributing to the wrong engine being shut down.

Contributory Factor 4: Cessation of smoke, noise, and vibration when the engine was throttled back. Believing that the No. 2 (right) engine had suffered damage, the crew throttled it back and eventually shut it down. As soon as this happened the shuddering ceased and the smoke and fumes stopped, even though they would have been coming from the left engine. This ‘chance’ occurrence (that the left engine ceased to surge as the right one was throttled back) caused the crew to believe that their action had had the correct effect.

People tend to assume that their version of the world is correct whenever events happen in accordance with their expectations. This phenomenon is termed ‘confirmation bias’ and refers to people seeking information that is likely to confirm their expectations. Nearly two-thirds of driving accidents are the result of inappropriate expectations or interpretations of the environment. In the Kegworth example, the pilots believed they had a good picture of the system (i.e. their Schema was that the right engine was damaged), and the two sequential events that followed were throttling back the right engine (action) and the cessation of vibration and fumes from the left engine (world), convincing the crew that their assertion was correct. The reduction in the level of symptoms (engine vibration and fumes) lasted for 20 minutes and was compatible with the pilots’ expectations of the outcome of throttling back the No. 2 engine; it is therefore clear how this would have been taken as evidence that the correct action had been performed.

This confirmation bias was exacerbated by the lack of equipment monitoring by the pilots (see Contributory Factor 3) and the time-critical nature of the situation.

Contributory Factor 5: Lack of communication from the passengers and cabin crew. In the cabin, the passengers and the cabin attendants saw signs of fire from the left engine. The Captain broadcast to the passengers that there was trouble with the right engine, which had produced smoke and had been shut down. Many of the passengers were puzzled by the Captain’s reference to the right engine, but none brought the discrepancy to the attention of the cabin crew, even though several were aware of continuing vibration.

One of the key features of the reciprocal and cyclical nature of the Perceptual Cycle Model is that not only do Schemata affect how people act in the world (i.e. direct action), but information from the world can also modify Schemata. Therefore, had the flight deck crew been informed of the confusion the passengers were experiencing over the apparent misdiagnosis of the smoke’s origin, their Schema might have been revised accordingly. This might have resulted in them realizing they did not have an accurate internal representation of the situation; they might then have adjusted their actions, and the outcome could have been different. This contributory factor again emphasizes how the Perceptual Cycle takes the system as a whole into account. Whilst the framework models the perception and action of an individual (in this case the flight crew), the world in which the crew interact is the systemic element of the cycle. As previously mentioned, systems are not a single entity but are built up of various layers. Colleagues are part of these layers and can have an influencing effect on the perception and action of the operators at the sharp end. Operators are often unaware of the ‘bugs’ in their Schemata, and feedback (potentially from people or machines) is one way to avoid this miscalibration of knowledge. Systems in which feedback is poor are more likely to have miscalibrated operators, which appeared to be the case in the Kegworth accident.

The AAIB (1990) report suggested that the issue of ‘role’ influenced the passengers’ acceptance of the Captain’s cabin address. Lay passengers generally assume that the flight crew have all the information and knowledge available to them to have made an informed and correct decision. Similarly, the report suggests that although the cabin crew would also have been confused by the address, they had no reason to suspect the pilots had not assimilated all of the engine parameter information available to them on the flight deck. In addition, cabin crew are aware that their presence on the flight deck can be distracting, especially when the flight crew are dealing with an emergency. Whilst the flight crew can be considered to have had incorrect Schemata for the situation, the cabin crew also did not have Schemata available to them that would lead them to question the pilots. The AAIB report recommended joint training between flight and cabin crew to deal with such circumstances; after such training, both the flight and cabin crews would have revised Schemata for when it is appropriate to coordinate communication in emergency situations. This contributory factor again demonstrates how the wider system plays such an important part in the formation of error and shows how an error that initially looked to result from the actions of one person is actually a symptom of trouble deeper within the system.


Photo from FAA’s Lessons Learned from Transport Airplane Accidents. British Midland B737 Flight 92 at Kegworth 

Discussion

In summary of the Kegworth accident, the unforeseen combination of symptoms was outside the pilots’ training and experience. This led the pilots to base their actions on the experience they did have, which has previously been shown to be the case with decision makers in complex situations. The pilots’ expectations about the engine vibration gauges meant they did not assimilate the readings on the two engine vibration indicators. Additionally, the pilots’ Schematic representation of the air conditioning system was not appropriate for the unfamiliar aircraft type, which contributed to the faulty diagnosis of the problem. Therefore, the actions the pilots took were not appropriate for the situation, and information that was available in the world (e.g. the knowledge of the cabin crew or the vibration gauge readings) was not utilized by the pilots to update their Schematic representations and modify their actions. From the explanations of the contributory factors outlined in the AAIB report, it would appear that the Perceptual Cycle Model and Schema Theory offer a good framework for structuring the actions of the pilots, and they account for some of the key points highlighted in the modern systems error literature.

The only “real explanation of error is that all factors come together”; there is no single cause of failure but rather a “dynamic interplay of multiple contributors”. This was certainly the case in the Kegworth accident: without any one of the factors (including the lack of engine vibration gauge monitoring, the assumed knowledge of the air conditioning system, and the lack of crew communication) the accident may not have happened. The Perceptual Cycle framework and Schema Theory can account for these factors and for how they influenced the perceptions and actions of the pilots, showing how their actions made sense to them at the time.

The preceding discussion shows how interactions between situation, person and the system as a whole contribute to such misfortunes.”

References

(1) Aviation Safety Council, Taipei, Taiwan. Aviation Occurrence Report: 4 February 2015, TransAsia Airways Flight GE235, ATR72-212A, Loss of Control and Crashed into Keelung River Three Nautical Miles East of Songshan Airport. Report Number ASC-AOR-16-06-001, June 2016.

(2) Sallee, G. P. & Gibbons, D. M. (1999). Propulsion system malfunction plus inappropriate crew response (PSM + ICR). Flight Safety Digest, 18, (11-12), 1-193

(3) FAA Engine & Propeller Directorate ANE-110. Turbofan Engine Malfunction Recognition and Response, Final Report, July 17, 2009.

(4) Plant, K. L. & Stanton, N. A. Why did the pilots shut down the wrong engine? Explaining errors in context using Schema Theory and the Perceptual Cycle Model. Transportation Research Group, School of Civil Engineering and the Environment, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom.

Further reading

  1. TransAsia Airways Flight GE235 accident Final Report
  2. Learning from the past: American Eagle Flight 3379, uncontrolled collision with terrain. Morrisville, North Carolina December 13th, 1994.
  3. Lessons learned from British Midland Flight 92, Boeing B-737-400, January 8, 1989

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on facebook Living Safely with Human Error and twitter. Human Factors information almost every day.

_______________________

Managing the mission with a crew of… just you! Single pilot CRM

There is no one right answer in aeronautical decision-making. Each pilot is expected to analyze each situation in light of experience level, personal minimums, and current physical and mental readiness level, and make his or her own decision.


Single-Pilot Crew Resource Management

While crew resource management (CRM) focuses on pilots operating in crew environments, many of the concepts apply to single pilot operations. Many CRM principles have been successfully applied to single-pilot aircraft and led to the
development of single-pilot resource management (SRM).

Single-pilot resource management (SRM) is the art of managing all onboard and outside resources available to a pilot before and during a flight to help ensure a safe and successful outcome. SRM includes the concepts of aeronautical decision-making (ADM), risk management, controlled flight into terrain (CFIT) awareness, and situational awareness. SRM training helps the pilot maintain situational awareness by managing automation, associated aircraft control, and navigation tasks. This enables the pilot to accurately assess hazards, manage resulting risk potential, and make good decisions.

SRM helps pilots learn to execute methods of gathering information, analyzing it, and making decisions. Although the flight is coordinated by a single person and not an onboard flight crew, the use of available resources, such as air traffic control (ATC) and automated flight service stations (AFSS), replicates the principles of CRM.

Incorporating SRM into GA pilot training is an important step forward in aviation safety. A structured approach to SRM helps pilots learn to gather information, analyze it, and make sound decisions on the conduct of the flight.

Use of Resources
To make informed decisions during flight operations, a pilot must also become aware of the resources found inside and outside the flight deck. Since useful tools and sources of information may not always be readily apparent, learning to recognize these resources is an essential part of ADM training. Resources must not only be identified, but a pilot must also develop the skills to evaluate whether there is time to use a particular resource and the impact its use has upon the safety of flight. For example, the assistance of ATC may be very useful if a pilot becomes lost, but in an emergency situation, there may be no time to contact ATC.

During an emergency, a pilot makes automatic decisions and prioritizes accordingly; calling ATC may take away from the time available to solve the problem. Ironically, the pilot who feels the hourglass is running out of sand would often be surprised at the actual amount of time available in which to make decisions. The perception of time “flying” or “dragging” is based on various factors. If the pilot were to repeat an event in which time seemed to evaporate, but had been briefed on the impending situation and could plan for it, the pilot would not feel the pressure of time “flying.” This example demonstrates that proper training and physiological well-being are critical to pilot safety.

Internal Resources
One of the most underutilized resources may be the person in the right seat, even if the passenger has no flying experience. When appropriate, the PIC can ask passengers to assist with certain tasks, such as watching for traffic or reading checklist items.


When possible, have a passenger reconfirm that critical tasks are completed.

A passenger can assist the PIC by:
• Providing information in an irregular situation, especially if familiar with flying. A strange smell or sound may alert a passenger to a potential problem.
• Confirming after the pilot that the landing gear is down.
• Learning to look at the altimeter for a given altitude in a descent.
• Listening to logic or lack of logic.

Also, the process of a verbal briefing (which can happen whether or not passengers are aboard) can help the PIC in the decision-making process. For example, assume a pilot provides his passenger a briefing of the forecasted landing weather before departure. When the Automatic Terminal Information Service (ATIS) is picked up at the destination and the weather has significantly changed, the integration of this report and forecasted weather causes the pilot to explain to a passenger the significance or insignificance of the disparity. The pilot must provide a cohesive analysis and explanation that is understood by the passenger. Telling passengers everything is okay when the weather is a ¼ mile away is not fooling anyone. Therefore, the integration of briefing passengers is of great value in giving them a better understanding of a situation. Other valuable internal resources include ingenuity, solid aviation knowledge, and flying skill.

When flying alone, another internal resource is verbal communication. It has been established that verbal communication reinforces an activity; touching an object while communicating further enhances the probability that an activity has been accomplished. For this reason, many solo pilots read the checklist out loud; when they reach critical items, they touch the switch or control. For example, to ascertain that the landing gear is down, the pilot can read the checklist item and hold the gear handle down until there are three green lights. This tactile process of verbal communication coupled with a physical action is most beneficial.

It is necessary for a pilot to have a thorough understanding of all the equipment and systems in the aircraft being flown. Lack of knowledge, such as not knowing whether the oil pressure gauge is direct reading or uses a sensor, can be the difference between making a wise decision and a poor one that leads to a tragic error.

Checklists are essential flight-deck internal resources. They are used to verify that aircraft instruments and systems are checked, set, and operating properly. They also ensure the proper procedures are performed if there is a system malfunction or inflight emergency. Students reluctant to use checklists can be reminded that pilots at all levels of experience refer to checklists and that the more advanced the aircraft is, the more crucial checklists become. In addition, the pilot’s operating handbook (POH) is required to be carried on board the aircraft and is essential for accurate flight planning and resolving inflight equipment malfunctions. However, the ability to manage workload is the most valuable resource a pilot has.

External Resources
Air traffic controllers and AFSS (automated flight service stations) are the best external resources during flight. To promote the safe, orderly flow of air traffic around airports and along flight routes, ATC provides pilots with traffic advisories, radar vectors, and assistance in emergency situations. Although it is the PIC’s responsibility to make the flight as safe as possible, a pilot with a problem can request assistance from ATC. For example, if a pilot needs to level off, be given a vector, or decrease speed, ATC assists and becomes integrated as part of the crew. The services provided by ATC can not only decrease pilot workload but also help pilots make informed in-flight decisions.

The AFSS are air traffic facilities that provide pilot briefing, en route communications, VFR search and rescue services, assist lost aircraft and aircraft in emergency situations, relay ATC clearances, originate Notices to Airmen (NOTAM), broadcast aviation weather and National Airspace System (NAS) information, receive and process IFR flight plans, and monitor navigational aids (NAVAIDs). In addition, at selected locations, AFSS provide En Route Flight Advisory Service (Flight Watch), issue airport advisories, and advise Customs and Immigration of transborder flights. Selected AFSS in Alaska also provide Transcribed Weather En Route Broadcast (TWEB) recordings and take weather observations.

Another external resource available to pilots is the very high frequency (VHF) Direction Finder (VHF/DF). This is one of the common systems that help pilots, often without their being aware of how it operates. FAA facilities that provide VHF/DF service are identified in the Airport/Facility Directory (A/FD). DF equipment has long been used to locate lost aircraft and to guide aircraft to areas of good weather or to airports. DF instrument approaches may be given to aircraft in a distress or urgent condition.

Experience has shown that most emergencies requiring DF assistance involve pilots with little flight experience. With this in mind, DF approach procedures provide maximum flight stability in the approach by using small turns and wings level descents. The DF specialist gives the pilot headings to fly and tells the pilot when to begin a descent. If followed, the headings lead the aircraft to a predetermined point such as the DF station or an airport. To become familiar with the procedures and other benefits of DF, pilots are urged to request practice DF guidance and approaches in VFR weather conditions.

The 5P Approach to Single-Pilot Resource Management

SRM is about how to gather information, analyze it, and make decisions. Learning how to identify problems, analyze the information, and make informed and timely decisions is not as straightforward as the training involved in learning specific maneuvers. Learning how to judge a situation and “how to think” in the endless variety of situations encountered while flying out in the “real world” is more difficult.

SRM sounds good on paper, but pilots need a practical way to understand and apply it in their day-to-day flying. One such framework is called the Five Ps (5 Ps), which involves the regular evaluation of Plan, Plane, Pilot, Passengers, and Programming.

Each of these areas consists of a set of challenges and opportunities facing a single pilot, and each can substantially increase or decrease the risk of successfully completing the flight depending on the pilot’s ability to make informed and timely decisions. The 5 Ps are used to evaluate the pilot’s current situation at key decision points during the flight or when an emergency arises. These decision points include preflight, pre-takeoff, hourly or at the midpoint of the flight, pre-descent, and just prior to the final approach fix or, for VFR operations, just prior to entering the traffic pattern.


The 5Ps are applied at various points, prior to and during the flight, to make critical safety decisions.

The 5 Ps are based on the idea that pilots have essentially five variables that impact their environment and that can cause the pilot to make a single critical decision, or several less critical decisions, that when added together can create a critical outcome. The concept stems from the belief that current decision-making models tend to be reactionary in nature: a change has to occur and be detected to drive a risk management decision by the pilot. For instance, many pilots use risk management sheets that are filled out prior to takeoff. These form a catalog of the risks that may be encountered that day and turn them into numerical values; if the total exceeds a certain level, the flight is altered or canceled. Informal research shows that while these are useful documents for teaching risk factors, they are almost never used outside of formal training programs. The 5P concept is an attempt to take the information contained in those sheets, and in other available models, and put it to good use.
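To make the “numerical values” idea concrete, here is a minimal sketch of such a preflight risk sheet. It is purely illustrative: the risk factors, the point values, and the cut-off of 10 points are assumptions invented for this example, not figures taken from the FAA handbooks or from any operator’s approved tool.

```python
# Hypothetical preflight risk sheet. The factors, point values and the
# cut-off are invented for illustration; this is not an approved tool.
RISK_POINTS = {
    "night_flight": 3,
    "ceiling_below_3000_ft": 2,
    "low_recent_experience": 3,
    "unfamiliar_airport": 2,
    "strong_crosswind": 3,
}

ALTER_OR_CANCEL_THRESHOLD = 10  # assumed cut-off for this example


def assess_flight(present_factors):
    """Sum the points for the factors present today and compare to the cut-off."""
    total = sum(RISK_POINTS[factor] for factor in present_factors)
    if total >= ALTER_OR_CANCEL_THRESHOLD:
        decision = "alter or cancel the flight"
    else:
        decision = "flight may proceed; keep monitoring the risks"
    return total, decision


if __name__ == "__main__":
    todays_factors = ["night_flight", "unfamiliar_airport", "strong_crosswind"]
    total, decision = assess_flight(todays_factors)
    print(f"Total risk score: {total} -> {decision}")
```

In this invented case the three factors add up to 8 points, below the assumed cut-off, so the sheet would let the flight proceed while the pilot keeps monitoring those risks.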

The point of the 5P approach is not to memorize yet another aviation mnemonic. You might simply write these words on your kneeboard, or add a reference to the 5Ps to your checklist for key decision points during the flight. These include preflight, pre-takeoff, cruise, pre-descent, and just prior to the final approach fix or, for VFR operations, just prior to entering the traffic pattern.

Items to consider in association with the 5Ps might include the following:

Plan
The plan includes the basic elements of cross-country planning: weather, route, fuel, current publications, etc. The plan also includes all the events that surround the flight and allow the pilot to accomplish the mission. The pilot should review and update the plan at regular intervals in the flight, bearing in mind that any of the factors in the original plan can change at any time.

Plane
The plane includes the airframe, systems, and equipment, including avionics. The pilot should be proficient in the use of all installed equipment as well as familiar with the aircraft/equipment’s performance characteristics and limitations. As the flight proceeds, the pilot should monitor the aircraft’s systems and instruments in order to detect any abnormal indications at the earliest opportunity.


The plane consists not only of the normal mechanical components but also of the many advanced systems and the software that supports them.

Pilot
The pilot needs to pass the traditional “IMSAFE” checklist (see below). This part of the 5P process helps a pilot to identify and mitigate physiological hazards at all stages of the flight.


Making sure a pilot is ready to perform to a high standard is as important as the aircraft—maybe more!


Another version of the I’m safe checklist

Passengers
The passengers can be a great help to the pilot by performing tasks such as those listed earlier. However, passenger needs — e.g., physiological discomfort, anxiety about the flight, or desire to reach the destination — can create potentially dangerous distractions. If the passenger is a pilot, it is also important to establish who is doing what. The
5P approach reminds the pilot-in-command to consider and account for these factors.

Programming
The programming can refer to both panel-mounted and handheld equipment. Today’s electronic instrument displays, moving map navigators, and autopilots can reduce pilot workload and increase pilot situational awareness. However, the task of programming or operating both installed and handheld equipment (e.g., tablets) can create a serious distraction from other flight duties. This part of the 5P approach reminds the pilot to mitigate this risk by developing a thorough understanding of the equipment long before takeoff, and by planning in advance when and where the programming for approaches, route changes, and airport information gathering should be accomplished, as well as times when it should not be attempted.
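The five areas above can also be condensed into a kneeboard-style prompt that the pilot runs through at each decision point. The sketch below is again only illustrative; the wording of the prompts and the helper names are assumptions made for this example, not official FAA phrasing.

```python
# Hypothetical 5P kneeboard prompt. The wording of the prompts and the
# list of decision points are assumptions for illustration, not FAA text.
FIVE_PS = {
    "Plan": "Weather, route, fuel and publications still as expected?",
    "Plane": "Airframe, systems and avionics behaving normally?",
    "Pilot": "IMSAFE check still satisfied?",
    "Passengers": "Needs, distractions and task assignments under control?",
    "Programming": "Avionics set up for the next phase of flight?",
}

DECISION_POINTS = [
    "preflight",
    "pre-takeoff",
    "cruise (hourly or at the midpoint)",
    "pre-descent",
    "prior to the final approach fix or traffic pattern entry",
]


def five_p_review(decision_point):
    """Print the five prompts for a single decision point."""
    print(f"--- 5P review: {decision_point} ---")
    for p, prompt in FIVE_PS.items():
        print(f"{p}: {prompt}")


if __name__ == "__main__":
    for point in DECISION_POINTS:
        five_p_review(point)
```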

Whatever SRM approach you choose, use it consistently and remember that solid SRM skills can significantly enhance the safety of “crew of you” flights.

Resources
1. FAA Risk Management Handbook (Chapter 6)
http://1.usa.gov/1Lyumk4

2. Advisory Circular 120-51E, Crew Resource Management Training
http://go.usa.gov/ZECw

3. General Aviation Joint Steering Committee Safety Enhancement Topic, March 2015. Produced by FAA Safety Briefing   http://www.faa.gov/news/safety_briefing/

**********************

By Laura Victoria Duque Arrubla, a medical doctor with postgraduate studies in Aviation Medicine, Human Factors and Aviation Safety. In the aviation field since 1988, Human Factors instructor since 1994. Follow me on facebook Living Safely with Human Error and twitter. Human Factors information almost every day.

_______________________