The MCMC optimization procedure can be described as follows. Consider a stochastic model formed by a random variable Ω, whose distribution has not yet been defined, and J conditionally independent replicas of a random variable X with distribution p_ω(x). Let us denote by h(ω, x_1, x_2, ..., x_J) the joint distribution of (Ω, X_1, X_2, ..., X_J). It is straightforward to see that if

h(ω, x_1, x_2, ..., x_J) ∝ ∏_{j=1}^{J} u(ω, x_j) p_ω(x_j)    (8)
then the marginal distribution of Ω, also denoted by h(ω) for simplicity, satisfies

h(ω) ∝ [∫ u(ω, x) p_ω(x) dx]^J = U(ω)^J.    (9)
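The step from (8) to (9) is plain marginalization; the following short derivation, which only uses the factorized form of (8), makes it explicit:

h(ω) ∝ ∫···∫ ∏_{j=1}^{J} u(ω, x_j) p_ω(x_j) dx_1 ··· dx_J
     = ∏_{j=1}^{J} ∫ u(ω, x_j) p_ω(x_j) dx_j
     = [∫ u(ω, x) p_ω(x) dx]^J = U(ω)^J,

where the integral factorizes because each x_j appears in exactly one factor of the product.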
This means that if we can extract realizations of (Ω, X_1, X_2, ..., X_J), then the extracted ω's will be concentrated around the optimal points of U(ω) for a sufficiently high J. These extractions can be used to find an approximate solution to the optimization of U(ω). Realizations of the random variables (Ω, X_1, X_2, ..., X_J), with the desired joint probability density given by (8), can be obtained through Markov chain Monte Carlo (MCMC) simulation. The algorithm is presented below. In the algorithm, g(ω) is known as the instrumental (or proposal) distribution and is freely chosen by the user; the only requirement is that g(ω) covers the support of h(ω).
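The concentration effect can be quantified directly from (9): for any two decisions with U(ω*) > U(ω′), the ratio of stationary densities is

h(ω*) / h(ω′) = [U(ω*) / U(ω′)]^J,

which grows exponentially in J, so the probability mass of h(ω) accumulates around the maximizers of U(ω) as J increases.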
Algorithm 1 (MCMC Algorithm)
initialization:
  Extract Ω(0) ∼ g(ω)
  Extract X_j(0) ∼ p_{Ω(0)}(x),  j = 1, ..., J
  Compute U_J(0) = ∏_{j=1}^{J} u(Ω(0), X_j(0))
  Set k = 0
repeat
  Extract Ω̃ ∼ g(ω)
  Extract X̃_j ∼ p_{Ω̃}(x),  j = 1, ..., J
  Compute Ũ_J = ∏_{j=1}^{J} u(Ω̃, X̃_j)
  Set ρ = min{1, [Ũ_J g(ω(k))] / [u_J(k) g(ω̃)]}
  Set [Ω(k+1), U_J(k+1)] = [Ω̃, Ũ_J] with probability ρ,
                           [ω(k), u_J(k)] with probability 1 − ρ
  Set k = k + 1
until true
In the description of the algorithm, lower- and upper-case symbols denote respectively quantities that are known at iteration k and quantities that are extracted at iteration k. Notice, for example, that [ω(k), u_J(k)] denotes the current state and that [Ω(k+1), U_J(k+1)] denotes the subsequent state of the chain. In the initialization step the state [Ω(0), U_J(0)] is always accepted. In subsequent steps the new extraction [Ω̃, Ũ_J] is accepted with probability ρ; otherwise it is rejected and the previous state of the Markov chain [ω(k), u_J(k)] is retained. In practice, the algorithm is executed until a certain number of extractions (say 1000) have been accepted. Because we are interested in the stationary distribution of the Markov chain, the first few accepted states (say 10%) are discarded to allow the chain to reach its stationary distribution (the "burn-in period").
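As a concrete illustration of Algorithm 1, here is a minimal Python sketch for a toy one-dimensional problem. The criterion u(ω, x) = exp(−x²), the conditional model p_ω = N(ω, 1), the Gaussian proposal g, and the sample sizes are placeholder choices made for this sketch, not quantities from the paper; computing ρ in log space is likewise an implementation convenience to avoid numerical underflow of the product Ũ_J.

import numpy as np

rng = np.random.default_rng(0)

# --- Toy stand-in problem (placeholder, not from the paper) ---
# Maximize U(omega) = E[u(omega, X)] with X ~ p_omega = N(omega, 1)
# and u(omega, x) = exp(-x^2); here U(omega) peaks at omega = 0.

def sample_p(omega, J):
    # Conditionally independent replicas X_1, ..., X_J ~ p_omega(x)
    return rng.normal(omega, 1.0, size=J)

def log_uJ(omega, J):
    # log of prod_j u(omega, X_j) for u(omega, x) = exp(-x^2)
    x = sample_p(omega, J)
    return float(np.sum(-x**2))

# Instrumental (proposal) distribution g: independent N(0, 3^2),
# chosen so that it covers the support of h(omega).
def sample_g():
    return rng.normal(0.0, 3.0)

def log_g(omega):
    return -omega**2 / 18.0   # log-density up to an additive constant

def mcmc_optimize(J=20, n_accept=1000):
    # Initialization: Omega(0) ~ g, U_J(0) from fresh replicas;
    # the initial state is always accepted.
    omega = sample_g()
    luJ = log_uJ(omega, J)
    accepted = [omega]
    while len(accepted) < n_accept:
        omega_t = sample_g()           # extract candidate Omega~
        luJ_t = log_uJ(omega_t, J)     # extract X~_j and compute U~_J
        # Metropolis-Hastings ratio in log space; the p_omega terms
        # cancel because the X_j are proposed from p_omega itself.
        log_rho = (luJ_t + log_g(omega)) - (luJ + log_g(omega_t))
        if np.log(rng.uniform()) < log_rho:
            omega, luJ = omega_t, luJ_t
            accepted.append(omega)
    burn = len(accepted) // 10         # discard 10% as burn-in
    # Summarize the retained omega's, which concentrate near the
    # maximizer of U(omega) for large J.
    return float(np.mean(accepted[burn:]))

print(mcmc_optimize())  # expected to land near 0 for this toy U

Note that the densities p_ω do not appear in the acceptance ratio: since the replicas X̃_j are proposed from p_Ω̃ itself, those factors cancel, exactly as in Algorithm 1.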
This algorithm is a formulation of the Metropolis-Hastings algorithm for a desired distribution given by h(ω, x_1, x_2, ..., x_J) and proposal distribution given by g(ω) ∏_{j=1}^{J} p_ω(x_j).
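Writing out the standard Metropolis-Hastings ratio for this target and proposal (a routine check, not spelled out in the text above) shows why the acceptance probability takes such a simple form: every p_ω factor cancels between target and proposal,

[h(ω̃, x̃_1, ..., x̃_J) g(ω(k)) ∏_{j=1}^{J} p_{ω(k)}(x_j(k))] / [h(ω(k), x_1(k), ..., x_J(k)) g(ω̃) ∏_{j=1}^{J} p_{ω̃}(x̃_j)] = [Ũ_J g(ω(k))] / [u_J(k) g(ω̃)],

which is exactly the ratio appearing in the definition of ρ in Algorithm 1.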