Monte-Carlo Simulation Balancing
Course URL: http://videolectures.net/icml09_silver_mcsb/
Lecturer: David Silver
Institution: University College London
Date: 2009-08-26
Language: English
Description: In this paper we introduce the first algorithms for efficiently learning a simulation policy for Monte-Carlo search. Our main idea is to optimise the balance of a simulation policy, so that an accurate spread of simulation outcomes is maintained, rather than optimising the direct strength of the simulation policy. We develop two algorithms for balancing a simulation policy by gradient descent. The first algorithm optimises the balance of complete simulations, using a policy gradient algorithm; whereas the second algorithm optimises the balance over every two steps of simulation. We compare our algorithms to reinforcement learning and supervised learning algorithms for maximising the strength of the simulation policy. We test each algorithm in the domain of 5x5 Computer Go, using a softmax policy that is parameterised by weights for a hundred simple patterns. When used in a simple Monte-Carlo search, the policies learnt by simulation balancing achieved significantly better performance, with half the mean squared error of a uniform random policy, and equal overall performance to a sophisticated Go engine. (An illustrative sketch of the simulation-balancing update follows the listing below.)
Keywords: Monte-Carlo search; policy gradient algorithm; stochastic policy
Source: VideoLectures.net
Last edited: 2021-02-04:nkq
Views: 43
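
The abstract describes simulation balancing: adjusting a softmax simulation policy by policy gradient so that the mean of its rollout outcomes tracks a target value, rather than maximising the policy's own playing strength. The following Python sketch illustrates that update on a toy domain under stated assumptions; it is not the paper's implementation. `PatternSoftmaxPolicy`, `make_toy_simulator`, and the constant zero target are hypothetical stand-ins (the paper used 5x5 Go positions with deep-search values as targets).

```python
# Illustrative sketch of policy-gradient simulation balancing.
# Assumptions: a toy simulator stands in for Go rollouts, and a constant
# target value stands in for the paper's deep-search values V*(s).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class PatternSoftmaxPolicy:
    """Softmax policy pi(a|s) proportional to exp(theta . phi(s, a))."""
    def __init__(self, n_features, rng):
        self.theta = np.zeros(n_features)
        self.rng = rng

    def probs(self, feats):  # feats: (n_actions, n_features)
        return softmax(feats @ self.theta)

    def sample(self, feats):
        p = self.probs(feats)
        a = self.rng.choice(len(p), p=p)
        # Score function: grad log pi(a|s) = phi(s,a) - sum_b pi(b|s) phi(s,b)
        return a, feats[a] - p @ feats

def make_toy_simulator(rng, n_features=8, n_actions=4, horizon=6):
    """Hypothetical toy game standing in for a Go rollout: random pattern
    features per step; outcome z is +/-1 from a hidden move-quality score."""
    w_true = rng.normal(size=n_features)
    def simulate(policy, state):
        score_sum = np.zeros(n_features)
        total = 0.0
        for _ in range(horizon):
            feats = rng.normal(size=(n_actions, n_features))
            a, score = policy.sample(feats)
            score_sum += score          # accumulate grad log pi over the rollout
            total += feats[a] @ w_true
        z = 1.0 if total > 0 else -1.0  # win/loss outcome
        return z, score_sum
    return simulate

def simulation_balancing_update(policy, simulate, v_star, s, m=32, alpha=0.01):
    """One balancing step: theta += alpha * (V*(s) - Vbar(s)) * grad E[z]."""
    # First batch: Monte-Carlo estimate of the mean rollout outcome Vbar(s).
    v_bar = np.mean([simulate(policy, s)[0] for _ in range(m)])
    # Second batch: policy-gradient estimate of d E[z] / d theta, using the
    # outcome z times the summed score function along each rollout.
    g = np.zeros_like(policy.theta)
    for _ in range(m):
        z, score_sum = simulate(policy, s)
        g += z * score_sum
    g /= m
    policy.theta += alpha * (v_star(s) - v_bar) * g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    policy = PatternSoftmaxPolicy(n_features=8, rng=rng)
    simulate = make_toy_simulator(rng)
    # Constant target 0.0 is a placeholder for a deep-search value V*(s).
    for _ in range(200):
        simulation_balancing_update(policy, simulate, v_star=lambda s: 0.0, s=None)
```

The two-batch structure is the point of the sketch: one batch estimates the current mean outcome, a second estimates the gradient of that mean, and the weights move only as far as needed to close the gap to the target, which is what keeps the spread of simulation outcomes calibrated instead of simply making the policy stronger.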