temperatures reached over 4000 degrees.
• Medical care. The most often cited example of a
software failure in a medical device is the Therac 25
accelerator, which caused several deaths as a result
of a race condition in a nonatomic transaction
switching from the high-intensity research mode to
the low-intensity therapeutic mode [10] (a generic
sketch of such a race appears after this list). A physical
interlock had been present in hardware in the Therac
20, and was mistakenly assumed to have been implemented
in software in the Therac 25. At the other end
of the technology spectrum was a heart-monitoring
device whose connector was a standard electrical wall
plug rather than a jack; when it came loose, a cleaning
person instinctively plugged it into the wall socket
rather than into the monitor, thereby electrocuting
the patient. Recent reports describe new cases: operations
on the wrong patient because of mistaken or
misinterpreted computer data, erroneous test results,
a mode-change fault in a glucose-monitoring device,
and so on.
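The Therac failure mode is easy to reproduce in miniature.
The following C sketch is hypothetical (the names, values,
and threading model are illustrative assumptions, not the
actual Therac 25 software); it shows how updating a mode
flag and an intensity setting in two separate steps lets a
concurrent task act on an inconsistent intermediate state:

/* Minimal sketch of a nonatomic mode switch; all names and
   values are hypothetical, not the actual Therac 25 code. */
#include <pthread.h>
#include <stdio.h>

enum mode { RESEARCH, THERAPEUTIC };

static enum mode current_mode = RESEARCH;  /* high-intensity mode */
static int intensity = 25000;              /* arbitrary high value */

static void *switch_mode(void *arg) {
    /* FLAW: the mode flag and the intensity are updated in
       two separate, non-atomic steps. */
    current_mode = THERAPEUTIC;
    /* A treatment task scheduled in this window observes
       THERAPEUTIC mode while the intensity is still at the
       research level. */
    intensity = 200;                       /* arbitrary low value */
    return NULL;
}

static void *deliver_treatment(void *arg) {
    if (current_mode == THERAPEUTIC)
        printf("firing at intensity %d\n", intensity);
    return NULL;
}

int main(void) {
    pthread_t switcher, treater;
    pthread_create(&switcher, NULL, switch_mode, NULL);
    pthread_create(&treater, NULL, deliver_treatment, NULL);
    pthread_join(switcher, NULL);
    pthread_join(treater, NULL);
    return 0;
}

A hardware interlock, or making the two updates a single
atomic transaction (e.g., holding a mutex across both writes
and the treatment check), closes the window in which the
beam can fire at the research intensity.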
For each of these application areas (and many more),
the safety risks of untrustworthy systems are considerable.
In addition, risks tend to arise in supposedly safe systems
with respect to security, privacy, reliability, system survivability,
graceful degradation, and so on. Overall, much
greater care is needed in developing safe systems than is
devoted to run-of-the-mill software. (For example, see
work by Leveson [7, 8, 9] for some serious approaches
to enhancing safety that could be an inspiration to R&D
in trustworthiness for secure applications.)
Safety-related accidents continue to occur, particularly
in air, rail, and medical applications — e.g., caused
by hardware/software malfunctions and errors by controllers,
pilots, and operators. In hindsight, some of those
should have been preventable with better human interfaces,
cross-checking, adequate staffing, preventive diagnostics
and maintenance, training, pervasive oversight,
and so on.
6 Unsecure Systems
We next consider problems of unsecure systems, in the
hope of gaining some insights from the previous sections.
For ACSAC, documenting historical and recent security
vulnerabilities and their exploitations might seem
to be preaching to the ACSACramental choir, and might
cause me to be ACSACked or ACSACrificed for rampant
repetitiousness. Of course, typical risks include penetrations
by outsiders and misuse by insiders, e-mailstorms
(e-maelstroms?) of spam and phishing attacks, and isolated
and coordinated distributed denials of service, to
name just a few. Vulnerable applications include a wide
array of financial systems with the potential for fraud and
undetected errors, databases with rampant opportunities
for identity theft and other privacy violations, all of the
critical national infrastructures, electronic voting systems
with serious needs for system integrity and voter privacy
as well as detection and prevention of manipulations and
errors, and many other types of systems.
Buffer overflows, missing bounds checks, type mismatches, and
many other program flaws continue to appear, frequently
causing security failures. There have been numerous efforts
to provide taxonomies for such problems (for example,
[1, 6, 18, 22]), as well as tools such as static analyzers
to detect the characteristic flaws. However, both disciplined
software development and systematic use of such
analysis tools are required. There are
enormous needs for well-designed and well-implemented
trustworthy systems that can satisfy a broad set of security
requirements. Curiously, for many years, system integrity
tended to be subordinated to confidentiality, accountability
remained more or less in the dark, and prevention of
denials of service is still widely ignored.
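As a concrete illustration of such characteristic flaws,
consider the following C fragment (a hypothetical example,
not drawn from any particular system): the first function
contains the classic unchecked copy that static analyzers
are built to flag, and the second shows the bounded form
that disciplined development would substitute.

#include <stdio.h>
#include <string.h>

/* Hypothetical request handler with the classic flaw. */
void handle_request(const char *input) {
    char name[64];
    strcpy(name, input);   /* FLAW: no bounds check; input of 64
                              or more bytes overflows 'name' and
                              can corrupt the stack, including
                              the return address. */
    printf("hello, %s\n", name);
}

/* A bounded variant removes the flaw: the copy cannot exceed
   the buffer, and the result is always NUL-terminated. */
void handle_request_bounded(const char *input) {
    char name[64];
    snprintf(name, sizeof name, "%s", input);
    printf("hello, %s\n", name);
}

The flawed variant is mechanically recognizable, which is
precisely why static analysis can catch it; the discipline
lies in ensuring the bounded form is used everywhere.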
From a total-system perspective, it would be highly desirable
that systems designed for security also be able to
satisfy some of the other requirements for trustworthiness
that transcend security per se. For example, systems
that are supposedly secure but unreliable may no
longer be secure when unreliable. Similarly, supposedly
secure systems that are not predictably survivable under
certain environmental disruptions may become unsecure
when transformed into fail-safe or other degraded operation.
The concept of fail-secure systems presents some
significant challenges.
7 Conclusions
Given the broad scope of the problems considered
here, including the huge diversity of causes and effects,
it should not be surprising that rather far-reaching measures