The AirSafe Journal - Issue 10


The AirSafe Journal
Issue 10, 21 July 1997 (revised 1 June 2008)
Todd Curtis, PhD

Airline Safety Ranking of the Air Travelers Association

In July 1997, the Air Travelers Association, a passenger advocacy group headed by David Stempler, produced an Airline Safety Report Card that ranked 260 of the world's airlines for safety on an A through F grading scale. I reviewed the Report Card to see whether it represented a useful guide for judging the level of risk a passenger faces when flying with one of these airlines. In my review, I found a number of significant drawbacks and weaknesses that made it ineffective as a safety guide.

An airline's grade in the report was based on the score it received from the following formula:
100 - [(10,000)(number of fatal accidents)/(thousands of flights during 1987-1996)]
That score was converted into an A to F grade as follows:

100-90 = A
89.9-80 = B
79.9-70 = C
69.9-60 = D
less than 60 = F.
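As a sketch of the arithmetic (my own illustration; the Report Card itself published no code, and the function names here are mine), the formula and grading scale above can be expressed as:

```python
def safety_score(fatal_accidents, flights):
    """Report Card score: 100 minus 10,000 times the number of fatal
    accidents divided by the number of flights in thousands.
    `flights` is the carrier's total flights over the 10-year window."""
    thousands_of_flights = flights / 1000
    return 100 - (10_000 * fatal_accidents) / thousands_of_flights

def letter_grade(score):
    """Convert a score to a letter grade using the published cutoffs."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

# Example: a carrier with 500,000 flights and 1 fatal accident
score = safety_score(1, 500_000)   # 100 - 10,000/500 = 80.0
print(score, letter_grade(score))  # 80.0 B
```

Note how strongly the result depends on flight volume: the same single accident at a carrier with 200,000 flights would score 50, an F.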

This formula has several weaknesses:

  1. It did not differentiate between accidents in which the crew and the airline played no role and those in which the actions or policies of the crew or airline contributed.
  2. The formula limited the possible grades an airline could receive based on its number of flights during the 10-year period:
    • Less than 250,000 - A or F
    • Between 250,000 and 500,000 - A, C, or F
    • Between 500,000 and 750,000 - A, B, D, or F
    • More than 750,000 - A, B, C, D, or F.
  3. For an airline with fewer than 250,000 flights in 10 years, even a single accident would keep the airline in the lowest grade until the accident was over 10 years old, regardless of any improvements the airline made to reduce risk and enhance safety.
  4. On the other hand, if a large airline changed in a way that significantly increased its risk of a fatal accident, the formula would not produce a failing grade of F until many aircraft had been involved in accidents. For example, a carrier with seven million flights in 10 years would need 28 fatal accidents just to drive its score down to the 60-point failing threshold. All the U.S. carriers in the Report Card combined accounted for only 17 fatal events from 1987-1996.
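The large-carrier arithmetic in the last point can be checked directly. This sketch (my own illustration, not from the Report Card) applies the formula to a hypothetical carrier with seven million flights over the decade:

```python
def safety_score(fatal_accidents, flights):
    """Report Card score; `flights` is total flights over 10 years."""
    thousands_of_flights = flights / 1000
    return 100 - (10_000 * fatal_accidents) / thousands_of_flights

FLIGHTS = 7_000_000  # hypothetical large carrier, 10 years of flights

for accidents in (17, 27, 28, 29):
    print(accidents, round(safety_score(accidents, FLIGHTS), 1))
```

With 17 fatal accidents (the combined U.S. total for 1987-1996) the score is still about 75.7, a C; even 27 accidents leave the carrier at a D, and the score does not reach the 60-point threshold until the 28th accident.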
A consumer using this Report Card may be misled in at least three ways:
  1. An F grade for a small airline could be based on a single event that was not a result of any crew or airline action.
  2. A non-failing grade for a large carrier is not a guarantee that the airline does not have a high fatal accident rate.
  3. An airline of any size with no fatal events in the last 10 years would get an A rating regardless of how risky that airline's operations happen to be.

For any rating system to be a useful tool for consumer decisions on airline safety, the system does not have to be perfect, but it must at the very least be relatively free of internal bias and flexible enough to consider the context of an airline's safety-related events. This particular rating system succeeds on neither count, and as a result it would be of limited effectiveness as an indicator of an airline's future risk.

http://airsafe.com/journal/issue10.htm -- Revised: 24 May 2015