Taking a lower bound of the right-hand side, we obtain
\[
\mathrm{Perf}(\bar\omega) \;\ge\; \mathrm{Perf}(\omega) - (A-1)\bigl(\bar P - P_{\min}\bigr)
\qquad \forall\,\omega : P(\omega) \le \bar P .
\]
Taking the maximum and eliminating the quantifier on the right-hand side, we obtain the desired inequality.
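For concreteness, and under the reconstruction of the display above (the maximization set is an assumption read off from the quantifier), this last step amounts to
\[
\mathrm{Perf}(\bar\omega) \;\ge\; \max_{\omega :\, P(\omega) \le \bar P} \mathrm{Perf}(\omega) \;-\; (A-1)\bigl(\bar P - P_{\min}\bigr).
\]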
Proposition 2.1 suggests a method for choosing $A$ to ensure that the solution $\bar\omega$ of the optimization problem will satisfy $P(\bar\omega) \le \bar P$. In particular, it suffices to know $P(\omega)$ for some $\omega \in \Omega$ with $P(\omega) < \bar P$ to obtain a bound. If there exists $\omega \in \Omega$ for which $\hat P = P(\omega) < \bar P$ is known, then any
\[
A \;\ge\; \frac{1 - \hat P}{\bar P - \hat P}
\]
ensures that $P(\bar\omega) \le \bar P$. If we know that there exists a parameter $\omega \in \Omega$ for which the constraints are satisfied almost surely, a tighter (and potentially more useful) bound can be obtained. If there exists $\omega \in \Omega$ such that $P(\omega) = 0$, then any
\[
A \;\ge\; \frac{1}{\bar P} \qquad (7)
\]
ensures that $P(\bar\omega) \le \bar P$. Clearly, to minimize the gap between the optimal performance and the performance of $\bar\omega$, we need to select $A$ as small as possible. Therefore, the optimal choices of $A$ that ensure the bounds on constraint satisfaction and minimize the suboptimality of the solution are $A = \frac{1-\hat P}{\bar P - \hat P}$ and $A = \frac{1}{\bar P}$, respectively.
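As a purely illustrative numerical check (the threshold and the known violation probability below are assumed values, not taken from the paper): with $\bar P = 0.1$ and a parameter known to achieve $\hat P = 0.02$, the first bound requires
\[
A \;\ge\; \frac{1-\hat P}{\bar P - \hat P} \;=\; \frac{1-0.02}{0.1-0.02} \;=\; 12.25,
\]
while, if a parameter with $P(\omega)=0$ is known to exist, the second bound only requires $A \ge 1/\bar P = 10$.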
3 Monte Carlo Optimization
In this section we describe a simulation-based procedure to find approximate optimizers of $U(\omega)$. The only requirements for applicability of the procedure are the ability to obtain realizations of the random variable $X$ with distribution $p_d(x)$ and to evaluate $u(\omega, x)$ pointwise. This optimization procedure is in fact a general procedure for the optimization of expected-value criteria; it was originally proposed in the Bayesian statistics literature [16].
The optimization strategy relies on extractions of a random variable $D$ whose distribution has modes that coincide with the optimizers of $U(\omega)$. These extractions are obtained through Markov chain Monte Carlo (MCMC) simulation [20]. The problem of optimizing the expected criterion is then reformulated as the problem of estimating the optimal points from extractions concentrated around them. The procedure has a tunable parameter that governs the trade-off between the estimation accuracy of the optimizer and the computational effort. In particular, the distribution of $D$ is proportional to $U(\omega)^J$, where $J$ is a positive integer that allows the user to increase the "peakedness" of the distribution and concentrate the extractions around the modes, at the price of an increased computational load. If the tunable parameter $J$ is increased during the optimization procedure, this approach can be seen as the counterpart of Simulated Annealing in a stochastic setting. Simulated Annealing is a randomized optimization strategy developed to find tractable approximate solutions to complex deterministic combinatorial optimization problems [22]. A formal parallel between these two strategies has been derived in [17].
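To make the mechanics concrete, the following is a minimal sketch of one common way such a sampler is built in this literature: a Metropolis-Hastings chain on the extended space $(\omega, x_1, \dots, x_J)$ whose target is proportional to $\prod_{j=1}^{J} u(\omega, x_j)\,p_d(x_j)$, so that the $\omega$-marginal is proportional to $U(\omega)^J$. The toy reward, the disturbance distribution, the proposal width, and all function names are illustrative assumptions and are not taken from the paper; the sketch also assumes $u \ge 0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy problem (illustrative assumptions, not taken from the paper) ---
# Disturbance X ~ N(0, 1); reward u(omega, x) lies in (0, 1] and peaks near
# omega = 0.7, so U(omega) = E[u(omega, X)] peaks there as well.
def sample_x():
    return rng.normal()

def log_u(omega, x):
    penalty = 0.0 if x < 1.0 else np.log(0.2)   # noisy multiplicative penalty
    return -10.0 * (omega - 0.7) ** 2 + penalty

def mcmc_optimize(J=20, n_iter=20_000, step=0.1):
    """Metropolis-Hastings on the extended space (omega, x_1..x_J).

    The target density is proportional to prod_j u(omega, x_j) * p_d(x_j),
    whose omega-marginal is proportional to U(omega)**J: larger J makes the
    marginal more peaked around the optimizers of U, at higher cost."""
    omega = rng.normal()                                   # initial parameter
    log_w = sum(log_u(omega, sample_x()) for _ in range(J))
    trace = []
    for _ in range(n_iter):
        # Symmetric random-walk proposal for omega; the auxiliary x's are
        # redrawn from p_d, so their densities cancel and the acceptance
        # ratio reduces to the ratio of the products of u-values.
        omega_new = omega + step * rng.normal()
        log_w_new = sum(log_u(omega_new, sample_x()) for _ in range(J))
        if np.log(rng.uniform()) < log_w_new - log_w:
            omega, log_w = omega_new, log_w_new
        trace.append(omega)
    return np.array(trace)

samples = mcmc_optimize()
# Crude point estimate of the optimizer from the second half of the chain.
print("estimated optimizer:", np.median(samples[len(samples) // 2:]))
```

Proposing the auxiliary samples directly from $p_d$ is what keeps the acceptance test free of any density evaluations: only realizations of $X$ and pointwise evaluations of $u(\omega, x)$ are required, matching the requirements stated above.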