Readme update #17

Merged 7 commits on Jun 10, 2020
37 changes: 28 additions & 9 deletions README.md
@@ -4,6 +4,11 @@

The purpose of this project is to define **2D Object Recognition Tests**.

## Current project status

GitHub is used to track tasks; see the repository Issues.

## Task definition
### Object Schema

Objects exist within a 2D space that has a width and a height. Each location in the space is identified by an integer X and Y coordinate, and each location may have a Feature.
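
As a rough sketch, the schema above might be represented in Python like this (illustrative only; the names here are not the project's actual API):

```
from dataclasses import dataclass, field

@dataclass
class ObjectSpace:
    width: int
    height: int
    # Maps (x, y) integer coordinates to a Feature; locations without
    # an entry have no Feature.
    features: dict = field(default_factory=dict)

    def feature_at(self, x, y):
        """Return the Feature at (x, y), or None if the location is empty."""
        if not (0 <= x < self.width and 0 <= y < self.height):
            return None
        return self.features.get((x, y))
```
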
@@ -29,13 +34,13 @@ Initially, all Features consist of a simple data type, but should be extensible

The Object Library can be found in `objects/`. Each YAML file within this directory contains one object definition in the format specified in "Object Schema" above.
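
A loader for such a library might look roughly like this (a sketch assuming PyYAML and a `.yaml` file extension; neither is confirmed here):

```
from pathlib import Path

import yaml  # PyYAML, assumed available

def load_object_library(directory="objects"):
    """Load every YAML object definition found in the directory."""
    library = {}
    for path in sorted(Path(directory).glob("*.yaml")):
        with open(path) as f:
            library[path.stem] = yaml.safe_load(f)
    return library
```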

There are currently 2 objects in the library.
![example object with agent](doc/images/objectSpace.jpeg)

## Agency

An Agent can exist at a location within an Object space. An Agent observing an object receives Features according to the locations of its sensors.

-![Agent picture](https://discourse-cdn-sjc2.com/standard14/uploads/numenta/original/2X/4/49d9249b29105c9efa9eb0bbfa5b53e7f3ee369a.jpeg)
+![Agent picture](doc/images/agentSensors.jpeg)

Each Agent has exactly 4 sensors:
- North (Agent Y - 1)
@@ -44,17 +49,31 @@ Each Agent has exactly 4 sensors:
- West (Agent X - 1)

At any one time step, an Agent occupies exactly one location in Object space. Each sensor has access to the Feature beneath it.
**Agents should use their sensors to attempt to identify the object under observation at each time step.**
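
A minimal Agent sketch, building on the `ObjectSpace` sketch above (the South and East offsets are assumed by symmetry with the listed North and West ones):

```
class Agent:
    # North and West offsets are taken from the list above; South (Y + 1)
    # and East (X + 1) are assumed by symmetry.
    SENSOR_OFFSETS = {
        "north": (0, -1),
        "south": (0, 1),
        "east": (1, 0),
        "west": (-1, 0),
    }

    def __init__(self, x, y):
        self.x, self.y = x, y

    def sense(self, space):
        """Return the Feature under each of the four sensors."""
        return {
            name: space.feature_at(self.x + dx, self.y + dy)
            for name, (dx, dy) in self.SENSOR_OFFSETS.items()
        }
```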

## Algorithm concept

We need to build a 3-layer network for each sensor: an object pooling layer, as described in the [Columns Paper](https://numenta.com/neuroscience-research/research-publications/papers/a-theory-of-how-columns-in-the-neocortex-enable-learning-the-structure-of-the-world/), above a 2-layer location/sensor circuit, as described in [Columns+](https://numenta.com/neuroscience-research/research-publications/papers/locations-in-the-neocortex-a-theory-of-sensorimotor-object-recognition-using-cortical-grid-cells/):

![Three layer network](doc/images/ThreeLayer.jpeg)

Object layers must share representations between cortical columns via lateral connections:
![Lateral connections](doc/images/lateral.jpeg)

For code examples, see the [supporting code for the Columns paper](https://github.com/numenta/htmpapers/tree/master/frontiers/a_theory_of_how_columns_in_the_neocortex_enable_learning_the_structure_of_the_world).
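
As a very loose illustration of the intended data flow (toy logic only; none of this is htm.core or the papers' actual algorithms, for which see the repository linked above):

```
class LocationLayer:
    """Stand-in for the 2-layer location circuit: tracks a location code."""
    def __init__(self):
        self.loc = (0, 0)

    def shift(self, movement):
        # Path integration: update the location code from the movement.
        dx, dy = movement
        self.loc = (self.loc[0] + dx, self.loc[1] + dy)

class SensorLayer:
    """Represents the current feature in the context of a location."""
    def __init__(self):
        self.state = None

    def activate(self, feature, location):
        self.state = (feature, location)

class ObjectLayer:
    """Pools (feature, location) pairs into a shrinking candidate set."""
    def __init__(self, known_objects):
        # known_objects: object name -> set of (feature, location) pairs
        self.known_objects = known_objects
        self.candidates = set(known_objects)

    def pool(self, pair):
        self.candidates = {name for name in self.candidates
                           if pair in self.known_objects[name]}

    def receive_lateral(self, other_candidate_sets):
        # Columns converge by intersecting each other's candidate sets.
        for candidates in other_candidate_sets:
            self.candidates &= candidates
```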

-Agents should use their sensors to attempt to identify the object under observation at each time step.

-### Test Challenge \#1
+## Project structure

### JavaScript
See the javascript [readme](javascript/).
### Visualization of objects from the Object Library
See the objectVisualizer [readme](objectVisualizer/).

-Code in the `javascript/` subfolder can be used to visualize Objects in the Object Library.
### Java
This folder currently contains just a "Hello World!" program in Java. Add code here if you want to use the Java language.

### Python
-See python [readme](python/).
+Python is the language currently used for development.
+See the python [readme](python/) for instructions on how to run.

Code in the `python/` subfolder contains the beginnings of simple `Agent` and `Environment` implementations.
Binary file added doc/images/ThreeLayer.jpeg
Binary file added doc/images/agentSensors.jpeg
Binary file added doc/images/lateral.jpeg
Binary file added doc/images/objectSpace.jpeg
26 changes: 7 additions & 19 deletions python/README.md
@@ -13,39 +13,27 @@ Inside the `python` directory, run:
```
python -m pip install -r requirements.txt
```

-### Code style / formatting
-
-Setup your IDE to 4 spaces indentation.

## Run the tests

```
python -m unittest tests/*.py
```
-# Run
+# Run experiment

In the `python` folder, just run:
```
python main.py
```

-# Using visualization tool
+## Using visualization tool

You can use the [visualization tool for HTM systems](https://github.com/htm-community/HTMpandaVis).
Install whatever is necessary according to that project's readme, and enable pandaVis in `main.py` by setting the appropriate flag at the beginning of the script.
First run the visualization tool in a terminal, then run this script in another terminal. The script will connect over TCP and show the state of the HTM system.
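
For illustration, such a flag might look like this near the top of `main.py` (the name below is hypothetical; check the script for the actual one):

```
# Hypothetical flag name; see the top of main.py for the real one.
PANDA_VIS_ENABLED = True
```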

# Code style / formatting

The project uses flake8 (code quality checking) and black (code formatting).
Install them globally so your IDE can find them, or point your IDE at your Python environment.
A detailed how-to procedure is still a work in progress (TODO).

```
python -m pip install flake8
python -m pip install black
```
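
Once installed, you can run a check over the whole project from the `python` directory, for example:

```
python -m flake8 .
```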

Set up your IDE to use 4-space indentation.

## Manually apply formatting
Another option is to apply formatting to a file manually:
```
black path/to/file.py
```