[16] R. Bellman, Dynamic Programming, Princeton: Princeton University Press (1957).
[17] M.L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming, Wiley series in probability and mathematical statistics, New York: Wiley (1994).
[18] D.P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Belmont, Mass.: Athena Scientific, 3rd ed. (2005).
[19] O. Sigaud and O. Buffet (eds.), Markov Decision Processes in Artificial Intelligence, Hoboken, NJ: Wiley (2010).
[20] W.B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, Hoboken, NJ: Wiley (2007).
[21] D.P. Bertsekas and J.N. Tsitsiklis, Neuro-Dynamic Programming, Optimization and neural computation series, Belmont, Mass.: Athena Scientific (1996).
[22] R. Munos and A. Moore, “Variable resolution discretization in optimal control,” Machine Learning 49(2–3), 291–323 (2002).
[23] S. Davies, “Multidimensional triangulation and interpolation for reinforcement learning,” in M.C. Mozer, M.I. Jordan, and T. Petsche (eds.), Advances in Neural Information Processing Systems, Cambridge, Mass.: MIT Press, vol. 9, pp. 1005–1011 (1997).
[24] L.P. Kaelbling, M.L. Littman, and A.R. Cassandra, “Planning and acting in partially observable stochastic domains,” Artificial Intelligence 101(1–2), 99–134 (1998).
[25] E.W. Kamen and J.K. Su, Introduction to Optimal Estimation, London: Springer (1999).
[26] C.H. Papadimitriou and J.N. Tsitsiklis, “The complexity of Markov decision processes,” Mathematics of Operations Research 12(3), 441–450 (1987).
[27] W.S. Lovejoy, “Computationally feasible bounds for partially observed Markov decision processes,” Operations Research 39(1), 162–175 (1991).
[28] T. Smith and R.G. Simmons, “Point-based POMDP algorithms: Improved analysis and implementation,” in Conference on Uncertainty in Artificial Intelligence, Edinburgh, Scotland (2005), pp. 542–547.
[29] H. Kurniawati, D. Hsu, and W. Lee, “SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces,” in Proc. Robotics: Science and Systems (2008).
[30] M.L. Littman, A.R. Cassandra, and L.P. Kaelbling, “Learning policies for partially observable environments: Scaling up,” in International Conference on Machine Learning, Tahoe City, CA (1995), pp. 362–370.
[31] M. Hauskrecht, “Value-function approximations for partially observable Markov decision processes,” Journal of Artificial Intelligence Research 13, 33–94 (2000).
[32] J.L. Fernández, R. Sanz, R.G. Simmons, and A.R. Diéguez, “Heuristic anytime approaches to stochastic decision processes,” Journal of Heuristics 12(3), 181–209 (2006).
[33] S. Ross, J. Pineau, S. Paquet, and B. Chaib-draa, “Online planning algorithms for POMDPs,” Journal of Artificial Intelligence Research 32, 663–704 (2008).
[34] D. Nikovski and I. Nourbakhsh, “Learning probabilistic models for decision-theoretic navigation of mobile robots,” in International Conference on Machine Learning, Stanford, CA (2000), pp. 671–678.
[35] R.S. Sutton and A.G. Barto, Reinforcement Learning: An Introduction, Cambridge, Mass.: MIT Press (1998).
[36] International Civil Aviation Organization, “Surveillance, radar and collision avoidance,” in International Standards and Recommended Practices, vol. IV, annex 10, 4th ed. (2007).
[37] D.W. Moore, Simplicial Mesh Generation with Applications, Ph.D. thesis, Cornell University (1992).
[38] RTCA, “Minimum operational performance standards for Traffic Alert and Collision Avoidance System II (TCAS II),” DO-185B (2008).
[39] M.J. Kochenderfer, M.W.M. Edwards, L.P. Espindle, J.K. Kuchar, and J.D. Griffith, “Airspace encounter models for estimating collision risk,” Journal of Guidance, Control, and Dynamics 33(2), 487–499 (2010).
 