From 469ef42a1dc49aa1dbe4e6fd16bef05c200267d7 Mon Sep 17 00:00:00 2001
From: Jan Hartigan
Date: Thu, 15 Nov 2018 22:54:09 -0800
Subject: [PATCH] Version 1.3.1 (#11)

* removing python 3.3 to avoid travis deploy issues

* version bump

* cleaning up the readme a bit

* more description

* personalizing our mit license
---
 .travis.yml |  1 -
 LICENSE     |  2 +-
 readme.md   | 28 ++++++++++++++++------------
 setup.py    |  2 +-
 4 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index af844a8..ee22ca2 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,6 +1,5 @@
 language: python
 python:
-- '3.3'
 - '3.4'
 - '3.5'
 - '3.6'
diff --git a/LICENSE b/LICENSE
index 6e46a70..2376da0 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,6 +1,6 @@
 The MIT License
 
-Copyright (c) 2010-2018 Google, Inc. http://angularjs.org
+Copyright (c) 2010-2018 ImparaAI https://impara.ai
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
diff --git a/readme.md b/readme.md
index 7919c03..926bbdc 100644
--- a/readme.md
+++ b/readme.md
@@ -1,16 +1,16 @@
 A Python3 library for running a [Monte Carlo tree search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search), either traditionally by drilling down to end game states or with expert policies as might be provided by a neural network.
 
-- **Version:** 1.3.0
+- **Version:** 1.3.1
 
 [![Build Status](https://travis-ci.org/ImparaAI/monte-carlo-tree-search.png?branch=master)](https://travis-ci.org/ImparaAI/monte-carlo-tree-search)
 
-# Monte Carlo tree search Basics
+# Monte Carlo tree search basics
 
-The Monte Carlo tree search (MCTS) algorithm is used to make an educated guess about what should be done next. This is commonly applied to games like chess or go where it's useful to know what move should come next if you want to win the game.
+The Monte Carlo tree search (MCTS) algorithm can help you make a decision from a number of options. It avoids exploring every possible option by randomly sampling a small number of pathways and picking the move with the highest probability of victory. This is commonly applied to games like chess or Go, where it's useful to know what move should come next if you want to win the game.
 
-MCTS works by expanding the search tree to figure out which moves are likely to produce a positive result. While time is available, the algorithm continues to explore the tree, always slightly favoring the direction that has either proved to be fruitful or is completely unexplored. When no time is left, the most explored direction is chosen.
+MCTS works by expanding the search tree to figure out which moves (or child/subsequent states) are likely to produce a positive result if chosen. While time is available, the algorithm continues to explore the tree, always slightly favoring the direction that has either proven to be fruitful or is less explored. When no time is left, the most explored direction is chosen.
 
-The search tree expansion can be done in two different ways: done through .
+The search tree expansion can be done in two different ways:
 
 - **Traditional**: At least one random rollout to a game's end state (e.g. win, loss, tie) for each move under evaluation so the algorithm can make a choice.
 - **Expert policy (i.e. neural network)**: Instead of expensively rolling all the way out to a game's end state, ask an expert (a neural network, for example) which move is most likely to produce a positive outcome.
@@ -40,7 +40,7 @@ montecarlo = MonteCarlo(Node(chess_game))
 
 The root node describes your current game state. This state will be used by you later in the **`child_finder`** and the **`node_evaluator`**.
 
-For the sake of demonstration, we will assume you have an generic `Game` library that can tell you what moves are possible and allows you to make those moves.
+For the sake of demonstration, we will assume you have a generic `Game` library that can tell you what moves are possible and allows you to perform those moves to change the game's state.
 
 ## Traditional Monte Carlo
 
@@ -77,7 +77,7 @@ def child_finder(self, node):
 		child = Node(deepcopy(node.state))
 		child.state.move(move)
 		child.player_number = child.state.whose_turn()
-		child.policy_value = get_child_policy_value(child, expert_policy_values) #should return a value between 0 and 1
+		child.policy_value = get_child_policy_value(child, expert_policy_values) #should return a probability value between 0 and 1
 		node.add_child(child)
 
 	node.update_win_value(win_value)
@@ -114,20 +114,24 @@ montecarlo.root_node = montecarlo.make_exploratory_choice()
 
 This won't provide a purely random choice; rather, it will be random with a bias favoring the more explored pathways.
 
-## Turn based environments
+## Turn-based environments
 
-If you are modeling a turn based environment (e.g. a two player board game), set the `player_number` on each node so the selection process can invert child win values:
+If you are modeling a turn-based environment (e.g. a two-player board game), set the `player_number` on each node so the selection process can invert child win values:
 
 ```python
 node = Node(state)
 node.player_number = 1
 ```
 
+It doesn't matter what this number is (you can use 1 and 2 or 5 and 6), only that it is consistent with other nodes.
+
 ## Tweaking the discovery factor
 
-When building a new child node, you can change the rate at which the library prefers to expand undiscovered states over states that have demonstrated value in previous expansions:
+When building a new child node, you can change the rate at which discovery is preferred:
 
 ```python
 node = Node(state)
-node.discovery_factor = 0.2 #0.35 by default
-```
\ No newline at end of file
+node.discovery_factor = 0.2 #0.35 by default, can be between 0 and 1
+```
+
+The closer this number is to 1, the more discovery will be favored over demonstrated value in later simulations.
\ No newline at end of file
diff --git a/setup.py b/setup.py
index f64e684..e3b5fcf 100644
--- a/setup.py
+++ b/setup.py
@@ -5,7 +5,7 @@ setuptools.setup(
     name="imparaai-montecarlo",
-    version="1.3.0",
+    version="1.3.1",
     license='MIT',
     author="ImparaAI",
     author_email="author@example.com",
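
For reviewers who want to see the documented pieces end to end, here is a minimal sketch of a traditional setup built from the calls the readme changes above describe. The `Game` library is the hypothetical generic one the readme assumes (`get_possible_moves()`, `move()`, `whose_turn()`, `won()` and `loss()` are stand-in methods on it), the `node_evaluator` shape is assumed from the readme's mention of it, and `simulate()` is an assumed entry point; treat this as an illustration under those assumptions, not part of the patch:

```python
from copy import deepcopy

from montecarlo.node import Node
from montecarlo.montecarlo import MonteCarlo

from chess import Game  # hypothetical: the generic `Game` library the readme assumes


def child_finder(self, node):
    # Build one child node per legal move; deepcopy so each child owns its own state.
    for move in node.state.get_possible_moves():  # hypothetical Game method
        child = Node(deepcopy(node.state))
        child.state.move(move)
        child.player_number = child.state.whose_turn()
        node.add_child(child)


def node_evaluator(self, node):
    # Assumed shape: return a win value only for decisive states, so unfinished
    # states keep being expanded. won()/loss() are hypothetical Game accessors.
    if node.state.won():
        return 1
    elif node.state.loss():
        return -1


chess_game = Game()
montecarlo = MonteCarlo(Node(chess_game))
montecarlo.root_node.player_number = chess_game.whose_turn()  # any consistent numbering works
montecarlo.root_node.discovery_factor = 0.2  # optional; 0.35 by default per the readme

montecarlo.child_finder = child_finder
montecarlo.node_evaluator = node_evaluator

montecarlo.simulate(50)  # assumed entry point: number of expansions to run
montecarlo.root_node = montecarlo.make_exploratory_choice()  # biased-random pick, per the readme
```

With an expert policy instead, the same `child_finder` would additionally set `child.policy_value` and call `node.update_win_value(win_value)`, exactly as the readme hunk above shows.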