other demands.
One of the main lessons from these outages is that security,
reliability, and survivability are closely interrelated,
especially when there are people in the loop. For example,
both the ARPANET outage and the AT&T long-lines
extreme slowdown could easily have been triggered remotely
by subversive human activity rather than accidentally,
had the fault mode been known to a perpetrator.
5 Unsafe Systems
To security-minded people, many past incidents that resulted
in accidental losses of life, injuries, and serious impairment
of human well-being further illustrate the difficulties
in providing high-assurance trustworthy systems.
Many of the lessons that should be learned for human
safety are rather similar to those in developing secure systems,
and suggest that many commonalities exist between
safe systems and secure systems.
• Requirements errors abound. Many critical systems
are developed with no specified requirements, or perhaps
incomplete ones.
• Design flaws abound. In many cases, the system
architectures and detailed designs are inherently incapable
of satisfying the intended requirements, although
this is often not identified until much later in
the development cycle.
• Programming bugs abound. (I once posed an exam
question that could be satisfied with a five-line program.
One student managed to make three programming
errors in five lines, including an off-by-one
loop count and a missing bounds check.)
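To make those error types concrete, here is a hypothetical C sketch (not the actual exam question) of the kind of five-line routine involved, with the buggy form shown in a comment and the off-by-one loop count and missing bounds check corrected below it:

    #include <stdio.h>
    #include <stddef.h>

    #define BUF_LEN 10

    /* Buggy five-line version of the sort described above:
     *     for (size_t i = 0; i <= n; i++)   // off-by-one: writes n+1 elements
     *         dst[i] = src[i];              // no check that n fits in dst
     */

    /* Corrected version: strict < fixes the loop count;
     * the cap test supplies the missing bounds check. */
    static void copy_vals(int dst[], const int *src, size_t n, size_t cap) {
        if (n > cap)
            n = cap;
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
    }

    int main(void) {
        int src[15] = {0}, dst[BUF_LEN];
        copy_vals(dst, src, 15, BUF_LEN);   /* truncates safely instead of overflowing dst */
        printf("copied at most %d elements\n", BUF_LEN);
        return 0;
    }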
A few pithy examples of safety-related risks are summarized
here, particularly as a reminder to younger people
who were not around at the time. References are found
in [12].
• Aviation, defense, and space. The RISKS archives
include many cases of deaths involving commercial
and military aviation. Here are just a few examples.
The Iran Air Airbus mistakenly shot down by
USS Vincennes’ Aegis missile system was attributed
to human error and a poor human interface. The
Patriot system defending against Iraqi Scud missiles
had a serious hardware/software clock drift problem
that prevented the system from tracking targets after
a few days. The Handley Page Victor tailplane
broke off in its first high-speed flight, killing the
crew. Each of three independent test methods had
its own flaw that made the analysis appear satisfactory,
so that a fundamental instability went
unidentified. A Lauda Air aircraft broke
up over Thailand, after its thrust-reverser accidentally
deployed in mid-air. A British Midland plane
crashed after an engine caught fire and the pilot erroneously
shut off the remaining good engine because
the instrumentation had been crosswired. Three early
A320 crashes were blamed variously on pilot error,
safety controls being off, software problems in
the autopilot, inaccurate altimeter readings, sudden
power loss, barometric pressure reverting to the previous
flight, and tampering with a flight recorder; in
one case, the pilots were convicted of libeling the
integrity of the technology! An Air New Zealand
flight crashed into Mt. Erebus in Antarctica; computerized
course data was known to be in error, but the
pilots had not been informed. (Incidentally, the shuttle
Discovery’s tail speed-brake gears were installed
backwards in 1984, but this was not discovered until
2004, 30 missions later!)
• Rail travel. The RISKS archives include dozens of
train wrecks attributable to various hardware, software,
and operational problems, some despite signaling
systems and safety devices, some as a result
of manual operation when automated systems failed.
• Ferry crashes. The Puget Sound ferry experienced
numerous computer failures that resulted in twelve
crashes, and the removal of the automated “sail-by-wire”
system.
• Nuclear power. The Chernobyl accident in 1986
was the result of a misconceived experiment on
emergency-shutdown recovery procedures. The
long-term death toll among cleanup crew and neighbors
continues to mount, 20 years later. The earlier
Three Mile Island accident in 1979 was attributed
to various equipment failures, operational misjudgment,
and a software flaw (reported in 1982 by
Daniel Ford [4]): experimentally installed thermocouple
sensors were able to read abnormally high
temperatures, but the software suppressed readings
that were outside of normal range — printing out
“???????” for temperatures above 700 degrees, and
thus masking the reality of a near-meltdown in which
 