Deep Q Network (DQN)

A TensorFlow implementation of a Deep Q Network (DQN) for playing Atari games.

Trained on OpenAI Gym Atari environments.

Based on Human-Level Control through Deep Reinforcement Learning (Mnih et al., 2015). This implementation includes the improvements over the original DQN (Mnih et al., 2013) introduced in that paper, namely (the last two are sketched in code below):

  • Larger network
  • Longer training
  • Target network
  • Huber loss
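
As a rough illustration of the target network and Huber loss, here is a minimal TensorFlow 2 sketch. It is illustrative rather than this repository's actual code, and all names (make_q_net, td_loss, GAMMA) are made up for the example; only the conv-net shape follows the Nature paper.

  import tensorflow as tf

  GAMMA = 0.99  # discount factor

  def make_q_net(num_actions):
      # Conv net in the spirit of the Nature-paper architecture.
      return tf.keras.Sequential([
          tf.keras.layers.Conv2D(32, 8, strides=4, activation='relu',
                                 input_shape=(84, 84, 4)),
          tf.keras.layers.Conv2D(64, 4, strides=2, activation='relu'),
          tf.keras.layers.Conv2D(64, 3, strides=1, activation='relu'),
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(512, activation='relu'),
          tf.keras.layers.Dense(num_actions),
      ])

  online_net = make_q_net(num_actions=4)
  target_net = make_q_net(num_actions=4)
  target_net.set_weights(online_net.get_weights())  # periodic hard copy

  huber = tf.keras.losses.Huber()  # bounds the gradient of large TD errors

  def td_loss(states, actions, rewards, next_states, dones):
      # Bootstrap from the frozen target network, not the online one.
      next_q = tf.reduce_max(target_net(next_states), axis=1)
      targets = rewards + GAMMA * (1.0 - dones) * next_q
      q = tf.gather(online_net(states), actions, axis=1, batch_dims=1)
      return huber(tf.stop_gradient(targets), q)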

Requirements

Note: The versions stated are the ones I used; however, this will likely still work with other versions.

Usage

The default environment is 'BreakoutDeterministic-v4'. To use a different environment, simply pass it in via the --env argument when running the following scripts.

  $ python train.py

This will train the DQN on the specified environment and periodically save checkpoints to the /ckpts folder.
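
For example, to train on a different game (assuming the Atari environments are installed in your Gym setup):

  $ python train.py --env 'PongDeterministic-v4'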

  $ ./run_every_new_ckpt.sh

This shell script should be run alongside the training script, allowing you to periodically test the latest network as it trains. It monitors the /ckpts folder and runs the test.py script on the latest checkpoint each time a new checkpoint is saved.
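
The actual monitor is a shell script; purely to illustrate the idea in the project's own language, a Python equivalent might poll the folder like this. It assumes test.py loads the newest checkpoint by default, which is an assumption for the sketch, not a documented interface.

  import glob
  import os
  import subprocess
  import time

  # Illustrative only: poll ckpts/ and re-run test.py whenever a new
  # checkpoint file appears.
  seen = set()
  while True:
      ckpts = sorted(glob.glob('ckpts/*'), key=os.path.getmtime)
      if ckpts and ckpts[-1] not in seen:
          seen.add(ckpts[-1])
          # Assumption: test.py picks up the latest checkpoint itself.
          subprocess.run(['python', 'test.py'])
      time.sleep(60)  # poll once a minute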

  $ python play.py

Once we have a trained network, we can visualise its performance in the game environment by running play.py. This will play the game on screen using the trained network.
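
As a sketch of what playing greedily with a trained network means, using the classic Gym API: this is not the repository's play.py, and q_values below is a placeholder for the restored network's forward pass.

  import gym
  import numpy as np

  env = gym.make('BreakoutDeterministic-v4')

  def q_values(state):
      # Placeholder: in play.py this would be the restored network
      # evaluated on the preprocessed state.
      return np.random.rand(env.action_space.n)

  state = env.reset()
  done = False
  total_reward = 0.0
  while not done:
      env.render()                              # draw the game on screen
      action = int(np.argmax(q_values(state)))  # act greedily w.r.t. Q
      state, reward, done, _ = env.step(action)
      total_reward += reward
  print('Episode reward:', total_reward)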

Results

Result of training the DQN on the 'BreakoutDeterministic-v4' environment:

  [training results plot]

References

  • Mnih et al., Human-Level Control through Deep Reinforcement Learning, Nature 518, 529-533 (2015).
  • Mnih et al., Playing Atari with Deep Reinforcement Learning, arXiv:1312.5602 (2013).

License

MIT License