survivability, human safety, and other attributes, and span
a variety of applications and critical infrastructures —
such as electric power, telecommunications, transportation,
finance, medical care, and elections. The range of
causative factors and the diversity of the resulting risks
are both enormous. Unfortunately, many of the problems
seem to recur far too often. Various lessons therefrom and
potential remedies are discussed.
1 Risks
This contribution to the Classic Papers track is a retrospective
consideration of computer-related risks from
the archives of the ACM SIGSOFT Software Engineering
Notes (since 1976) and the online ACM Risks Forum
(since 1985, a.k.a. RISKS and comp.risks), both of
which were created by the author. The cumulative Illustrative
Risks index to both sources [12] provides a hint
of the enormous range of problems that must be considered.
Discussion of many interesting cases prior to 1995 is
found in [13]; surprisingly, apart from a steadily increasing
number of more recent instances of similar cases, the
basic conclusions of that book are still very timely!
Application areas in RISKS include space missions, defense,
aviation and other forms of transportation, power,
telecommunications, health care, process control, information
services, law enforcement, antiterrorism, elections,
and many others. The causes of computer-related
risks are manifold. The RISKS archives include rampant
cases of power glitches, undetected hardware failures,
flawed software, human error, malicious misuse, questionable
election results, and even animal-induced system
failures. The results of these problems have included
deaths, physical injury and health problems, mental anguish,
financial losses and errors, fraud, security and privacy
violations, environmental damage, and so on. There
is much to be learned from this litany of cases.
People are always a potential weak link, throughout
the system life cycle. Although technology is sometimes
blamed, people have created that technology. For example,
requirements are often incorrect, incomplete, mutually
inconsistent, and lacking in foresight. System designs
and detailed architectures are typically flawed. Software
is frequently buggy. Patches intended to fix existing flaws
often create further bugs. System administrators are usually
beset with too many opportunities for mistakes. Indeed,
blame can often be spread rather widely.
2 Trustworthiness
The term trustworthiness implies that something is worthy
of being trusted to satisfy its specified requirements. The
requirements may specify in detail various system properties
such as security, reliability, human safety, and survivability
in the presence of a wide range of adversities.
Trustworthiness thus implies some sort of assurance measures,
and in practice is never perfect.
Trustworthiness needs to be considered pervasively
throughout the system life cycle: development, use,
operation, maintenance, and evolutionary upgrades.
It cannot be easily retrofitted into systems that
were not carefully designed and developed. It is dependent
on technology and on many other factors—the most
important of which ultimately tends to be people.
Sections 3 through 6 discuss a few instructive cases of
untrustworthiness, with references in [12, 13].
3 Unreliable Backup
A major source of problems relates to failures of backup
systems, or failures of the interface between the primary
and backup systems, or in some cases the total absence of
backup.
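
As a concrete illustration of the interface problem, consider a
minimal heartbeat-based failover sketch (hypothetical Python, not
drawn from any of the systems cited here): when heartbeats stop
arriving, the backup cannot distinguish a failed primary from a
failed link, so it risks either never taking over or taking over
while the primary is still alive.

    import time

    # Hypothetical sketch of a heartbeat-monitored primary/backup
    # pair (illustrative only). The point: the heartbeat channel is
    # itself a component that can fail, so the backup's takeover
    # decision is inherently ambiguous.

    FAILOVER_TIMEOUT = 3.0  # seconds of silence before the backup acts

    class Backup:
        def __init__(self) -> None:
            self.last_heartbeat = time.monotonic()
            self.active = False

        def on_heartbeat(self) -> None:
            # Invoked whenever a heartbeat arrives from the primary.
            self.last_heartbeat = time.monotonic()

        def poll(self) -> None:
            # Invoked periodically to decide whether to take over.
            silence = time.monotonic() - self.last_heartbeat
            if silence > FAILOVER_TIMEOUT and not self.active:
                # Silence may mean a dead primary or only a dead link.
                # Guessing wrong yields either a total outage (nobody
                # active) or two active systems fighting each other.
                self.active = True
                print(f"backup took over after {silence:.1f}s of silence")

    if __name__ == "__main__":
        backup = Backup()
        backup.on_heartbeat()  # primary just reported in
        time.sleep(0.1)
        backup.poll()          # well within the timeout, so no takeover

Real designs add redundant heartbeat paths or an external arbiter,
yet, as the cases below illustrate, such arrangements still fail in
practice.
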
One of the most interesting cases of a problem involving
a backup system arose in NASA’s very first attempt
to launch a shuttle, the Columbia. The synchronization
problem in the first shuttle was partly a design error and
partly a programming flaw. About 20 minutes before the
scheduled launch on 10 April 1981, the backup computer
failed to be synchronized with the four primary computers.
This failure had actually occurred previously in testing,
and was later identified as a one-in-64 probabilistic
intermittent failure [5], but was apparently not known to the operations
crew. The two-day delay in launch could apparently
have been avoided by a retry.
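
A rough sense of why a one-in-64 intermittent can evade testing comes
from the following simulation sketch (hypothetical Python; the
probability is the only number taken from the account in [5]):

    import random

    # Simulate a synchronization fault that strikes with probability
    # 1/64 on each attempt, and count how often test campaigns of
    # various sizes would ever observe it.

    FAULT_PROBABILITY = 1 / 64

    def sync_succeeds(rng: random.Random) -> bool:
        # One synchronization attempt between backup and primaries.
        return rng.random() >= FAULT_PROBABILITY

    def failures_in(rng: random.Random, runs: int) -> int:
        # Number of failed attempts in a campaign of `runs` tries.
        return sum(not sync_succeeds(rng) for _ in range(runs))

    if __name__ == "__main__":
        rng = random.Random(0)
        for runs in (10, 100, 1000):
            print(f"{runs:5d} runs -> {failures_in(rng, runs)} failures")
        # Ten runs miss the fault about 85% of the time, since
        # (63/64)**10 is roughly 0.85, so testing can easily fail to
        # reveal it; conversely, a single retry at launch time would
        # almost certainly have succeeded.
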
Several major airport disruptions are also worth noting,
as well as other cases that resulted in total system failures,
either because of the lack of a backup system or in spite
of its presence.
Air-traffic control (ATC) backup/recovery failures:
• Palmdale (Los Angeles) ATC, July 2006: a pickup
truck hit a utility pole; automatic cutover to backup
 