TABLE 8
Performance metrics for different pilot response models

[Bar-chart table: rows give the pilot response model used for evaluation (Lin., Quad., Det.); columns give the metrics Pr(NMAC), Pr(Alert), Pr(Strengthening), Pr(Reversal), E[RA changes], and E[RA duration]. Each cell contains four bars, one per system: Lin. DP, Quad. DP, Det. DP, and TCAS.]

Note: The four bars in each cell of this table are normalized so that the relative performance of the four systems is more easily compared.
Figure 26 shows the distributions of the number of advisory changes and the advisory duration for both the TCAS logic and the DP logic optimized using the linear pilot response model, evaluated on the linear pilot response model. Figure 26(a) shows that the DP logic changes the advisory less frequently than TCAS. Figure 26(b) indicates that the DP logic displays advisories for a shorter duration than TCAS.

6.6 DISCUSSION
This section presented extensions to the logic of the previous sections that incorporate probabilistic pilot response models. Two Markov chain models were examined; future work could explore the use of other models. In the two models presented in this section, when an advisory is issued, the pilot may respond to the advisory, continue following the current advisory (if any), or ignore all advisories. The models do not explicitly capture situations in which the pilot acts counter to the displayed advisory or takes a course of action vastly different from those suggested by the advisories. They also do not model pilots responding to advisories with a variety of different strengths. A collection of recorded radar data from TCAS-equipped aircraft could be used to construct a pilot response model more representative of the actual behavior of pilots in the airspace.
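To make the structure of such a model concrete, the sketch below simulates a simple Markov-chain pilot response model in Python. The state names and transition probabilities are illustrative assumptions for this sketch, not the parameters used in this work.

import random

# Hypothetical Markov-chain pilot response model. State names and
# transition probabilities are illustrative only.
RESPONDING, FOLLOWING_OLD, IGNORING = "responding", "following_old", "ignoring"

# Per-step transitions after a new advisory is displayed: the pilot may
# begin responding to it, continue following the previous advisory, or
# ignore all advisories (both responding and ignoring are absorbing here).
TRANSITIONS = {
    FOLLOWING_OLD: [(RESPONDING, 0.5), (FOLLOWING_OLD, 0.4), (IGNORING, 0.1)],
    RESPONDING:    [(RESPONDING, 1.0)],
    IGNORING:      [(IGNORING, 1.0)],
}

def step(state):
    """Sample the pilot's next response state from the Markov chain."""
    r = random.random()
    cumulative = 0.0
    for next_state, p in TRANSITIONS[state]:
        cumulative += p
        if r < cumulative:
            return next_state
    return state

# Example: simulate ten steps of pilot response after an advisory is issued.
state = FOLLOWING_OLD
for _ in range(10):
    state = step(state)
print("pilot state after 10 steps:", state)

A chain like this induces a distribution over response delays rather than the single fixed delay of a deterministic model, which is the kind of variability the optimized logic must be robust to.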
[Figure 26. (a) Distribution of the number of advisory changes (Pr(RA changes) versus number of advisory changes). The probability of no changes is 0.8645 for DP and 0.5211 for TCAS. (b) Distribution of advisory duration (Pr(RA duration) versus duration). The probability of zero duration is 0.8638 for DP and 0.5211 for TCAS.]


7. STATE ESTIMATION

The previous sections discussed solutions to the collision avoidance problem in the absence of sensor error. Because real surveillance systems are imperfect, it is important to account for state uncertainty. As discussed in Section 2.2, solving for the optimal policy for a POMDP that accounts for imperfect observations generally requires approximation. This section applies the QMDP approximation method suggested in Section 2.3 to account for state uncertainty in a collision avoidance system.
As discussed earlier, the QMDP method uses the state-action costs J_MDP(s, a) over the underlying MDP states s to approximate the cost function J(b, a) over belief states b. In particular,

J(b, a) = Σ_s b(s) J_MDP(s, a). (31)
That is, the expected cost when starting in belief state b, taking action a, and then continuing with the optimal policy is estimated by the expectation over the belief state of the state-action costs assuming no state uncertainty. Thus, under this approximation, the current uncertainty in the state, as encoded by b, is accounted for when choosing actions, but all future uncertainty is disregarded. Because the choice of advisory makes little difference in the reduction of state uncertainty, this approximation is expected to perform well.
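As a concrete illustration of Equation (31), the sketch below scores each action by its belief-weighted MDP cost and selects the minimizer. The state-space size, costs, and belief are hypothetical placeholders for this sketch, not the values used in this system.

import numpy as np

# Hypothetical sizes: a small discretized state space and advisory set.
NUM_STATES, NUM_ACTIONS = 5, 3

# J_MDP[s, a]: state-action costs from the fully observable MDP solution.
# These values are placeholders; in practice they come from the DP solution.
J_MDP = np.random.default_rng(0).uniform(0.0, 1.0, (NUM_STATES, NUM_ACTIONS))

def qmdp_action(belief, J_mdp):
    """Equation (31): J(b, a) = sum_s b(s) J_MDP(s, a). Returns the
    cost-minimizing action; the current state uncertainty in b is
    accounted for, but all future uncertainty is disregarded."""
    J_b = belief @ J_mdp          # expected cost of each action under b
    return int(np.argmin(J_b))    # pick the action with the lowest cost

# Example belief state over the five states (entries sum to one).
b = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
print("selected advisory index:", qmdp_action(b, J_MDP))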
 