https://github.com/starcolon/q-exp
Reinforcement learning with the Q-learning technique for Node.js apps, with built-in policy generalisation.
```bash
$ npm install q-exp
```
To include the `q-exp` library in your Node.js app:

```js
var ql = require('q-exp');
```
Read the instructions below to learn how to use it.
To make reinforcement learning work end-to-end, we implement the following features (a minimal sketch of the first two appears after the list):
- Q-learning
- Exploration-exploitation
- Generalisation with gradient descent
- Sample usages (Tic-tac-toe and falling-stones)
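
For orientation, here is a minimal generic sketch of tabular Q-learning with epsilon-greedy exploration. This illustrates the standard technique behind the first two features, not q-exp's internal code:

```js
// Generic tabular Q-learning with epsilon-greedy exploration.
// A sketch of the standard technique, not q-exp's internals.
var Q = {}; // Q[state][action] -> estimated value

function getQ(state, action) {
  return (Q[state] && Q[state][action]) || 0;
}

// Epsilon-greedy: explore a random action with probability epsilon,
// otherwise exploit the action with the highest estimated value.
function chooseAction(state, actions, epsilon) {
  if (Math.random() < epsilon)
    return actions[Math.floor(Math.random() * actions.length)];
  return actions.reduce((best, a) =>
    getQ(state, a) > getQ(state, best) ? a : best);
}

// Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
function update(state, action, reward, nextState, actions, alpha, gamma) {
  var maxNext = Math.max.apply(null, actions.map((a) => getQ(nextState, a)));
  Q[state] = Q[state] || {};
  Q[state][action] = getQ(state, action) +
    alpha * (reward + gamma * maxNext - getQ(state, action));
}
```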
To create an agent, load its learned policy from a file, and let it choose the action it believes will maximise its reward, you can do this:
```js
// Initialisation
var agent = ql
  .newAgent('johndoe', ['walk','run','sleep'], 0.35) // name, action set, learning rate (alpha)
  .then(ql.bindRewardMeasure( /* reward function here */ ))
  .then(ql.bindActionCostMeasure( /* action cost function here */ ))
  .then(ql.bindStateGenerator( /* state generator here */ ))
  .then(ql.load('./dir')); // load the learned policy from a file

// Start!
agent.then(ql.setState(initialState)) // Let the agent know the state
  .then(ql.step)                      // Ask the agent to move
  .then(ql.getState)                  // Now let's see how the agent moved
  .then((state) => { /* Do something with the state */ });
```
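
The exact signatures these bindings expect are defined by the library, so check the source or the samples; as a rough, hypothetical illustration, the callbacks for an agent like the one above might look like this:

```js
// Hypothetical callbacks for the agent above. The exact signatures
// q-exp expects may differ; treat these as placeholders and see the
// working definitions in /sample.
var reward = (state) => state === 'home' ? 100 : 0;            // how good a state is
var actionCost = (action) => action === 'run' ? 2 : 1;         // how costly an action is
var nextState = (state, action) => applyAction(state, action); // applyAction is yours to write
```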
A quick sample implementation is the classic tic-tac-toe game; the source code is available at /sample/tictactoe.js. This sample does not make use of generalisation, just plain exploration-exploitation. To play:
```bash
$ cd sample
$ node tictactoe.js play
```
After training your agent intensively for thousands of games, you'll eventually find out how strong your bot has become. To train it:
```bash
$ cd sample
$ ./train-tictactoe
```
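
The training script is essentially many games played back to back. Below is a rough, hypothetical sketch of such a loop built from the chain shown earlier; `initialBoard` and `isGameOver` are placeholders you would write for your own game, and the real script is the one in /sample:

```js
// Hypothetical self-play training loop; the real script is
// sample/train-tictactoe. initialBoard and isGameOver are placeholders.
function playEpisode(agent) {
  return agent
    .then(ql.setState(initialBoard()))
    .then(function loop(a) {
      return Promise.resolve(ql.step(a))      // the agent moves and learns
        .then((a2) => Promise.resolve(ql.getState(a2))
          .then((state) => isGameOver(state) ? a2 : loop(a2)));
    });
}

// Play a thousand games sequentially.
var run = agent;
for (var i = 0; i < 1000; i++) run = playEpisode(run);
```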
The second sample is another classic game, where two stones fall from the top edge of the screen at random positions. The player is forced to move left or right to escape the falling stones. If a stone falls onto the player, the game is over.
This sample makes use of generalisation, so the agent can survive longer even if you train it for just ten or twenty games.
To run it:
```bash
$ cd sample
$ node falling-stones.js
```
After generalisation, the agent can survive slightly longer. However, we only fit the reward space with a linear plane, which may not fit critical cases well, and it does not guarantee convergence.
(Figure: survival over training games; the Y axis represents the number of moves the agent survives in a game.)
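
For reference, the "linear plane" fit is ordinary linear function approximation of the Q-values trained by gradient descent. A generic sketch of that technique (not q-exp's internal code) looks like this:

```js
// Generic linear function approximation for Q-values, trained by
// gradient descent. A sketch of the technique, not q-exp's internals.
// Approximate Q(s,a) ≈ w·phi, where phi is a numeric feature vector
// describing the state-action pair.
function qHat(w, phi) {
  return phi.reduce((sum, x, i) => sum + w[i] * x, 0);
}

// One gradient-descent step toward the TD target:
//   w <- w + alpha * (target - w·phi) * phi
// With a poor feature set the plane underfits critical cases, and
// (as noted above) convergence is not guaranteed.
function descend(w, phi, target, alpha) {
  var err = target - qHat(w, phi);
  return w.map((wi, i) => wi + alpha * err * phi[i]);
}
```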
This project is released under Apache 2.0 licence.