
Commit

Merge pull request #3 from cog-imperial/main
Bringing in updates
jezsadler authored Jan 5, 2024
2 parents 7d660c4 + a3d128d commit a2586ea
Showing 16 changed files with 481 additions and 96 deletions.
65 changes: 60 additions & 5 deletions .readthedocs.yml
@@ -1,9 +1,64 @@
-# Read the Docs configuration file
+# Read the Docs configuration file for Sphinx projects

 # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

-conda:
-  file: docs/environment.yml
-
 # Required
 version: 2

+# Set the OS, Python version and other tools you might need
+build:
+  os: ubuntu-22.04
+  tools:
+    python: "3.8"
+  # You can also specify other tool versions:
+  # nodejs: "20"
+  # rust: "1.70"
+  # golang: "1.20"
+
+# Build documentation in the "docs/" directory with Sphinx
+sphinx:
+  configuration: docs/conf.py
+  # You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs
+  # builder: "dirhtml"
+  # Fail on all warnings to avoid broken references
+  # fail_on_warning: true
+
+# Optionally build your docs in additional formats such as PDF and ePub
+# formats:
+#   - pdf
+#   - epub
+
+# Optional but recommended, declare the Python requirements required
+# to build your documentation
+# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
 python:
-  version: 3.8
-  setup_py_install: true
+  install:
+    - requirements: docs/requirements.txt
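The new `install` step points at a `docs/requirements.txt` whose contents are not shown in this diff. Purely to illustrate the mechanism (the package names below are hypothetical, not the repository's actual pins), such a file typically lists the documentation toolchain:

```
# Hypothetical docs/requirements.txt -- illustrative only, not from this commit
sphinx>=4.0
sphinx-rtd-theme
```

Read the Docs installs these with pip before invoking Sphinx, replacing the deleted conda environment (`docs/environment.yml`).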
4 changes: 4 additions & 0 deletions docs/api_doc/omlt.neuralnet.layers_activations.rst
@@ -6,6 +6,8 @@ Layer and Activation Functions
Layer Functions
----------------

.. automodule:: omlt.neuralnet.layers.__init__

.. automodule:: omlt.neuralnet.layers.full_space
:members:
:undoc-members:
@@ -26,6 +28,8 @@ Layer Functions
Activation Functions
---------------------

.. automodule:: omlt.neuralnet.activations.__init__

.. automodule:: omlt.neuralnet.activations.linear
:members:
:undoc-members:
11 changes: 0 additions & 11 deletions docs/environment.yml

This file was deleted.

17 changes: 12 additions & 5 deletions docs/notebooks.rst
@@ -2,22 +2,29 @@ Jupyter Notebooks
===================

OMLT provides Jupyter notebooks to demonstrate its core capabilities. All notebooks can be found on the OMLT
-github `page <https://github.com/cog-imperial/OMLT/tree/main/docs/notebooks/>`_. The notebooks are summarized as follows:
+github `page <https://github.com/cog-imperial/OMLT/tree/main/docs/notebooks/>`_.

The first set of notebooks demonstrates the basic mechanics of OMLT and shows how to use it:

* `build_network.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/build_network.ipynb/>`_ shows how to manually create a `NetworkDefinition` object. This notebook is helpful for understanding the details of the internal layer structure that OMLT uses to represent neural networks.

* `import_network.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/import_network.ipynb/>`_ shows how to import neural networks from Keras and PyTorch using ONNX interoperability. The notebook also shows how to import variable bounds from data.

* `neural_network_formulations.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/neural_network_formulations.ipynb>`_ showcases the different neural network formulations available in OMLT.

* `index_handling.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/index_handling.ipynb>`_ shows how to use `IndexMapper` to handle the mappings between indexes.

* `bo_with_trees.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/bo_with_trees.ipynb>`_ incorporates gradient-boosted trees into a Bayesian optimization loop to optimize the Rosenbrock function.

* `linear_tree_formulations.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/trees/linear_tree_formulations.ipynb>`_ showcases the different linear model decision tree formulations available in OMLT.

The second set of notebooks gives application-specific examples:

* `mnist_example_dense.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/mnist_example_dense.ipynb>`_ trains a fully dense neural network on MNIST and uses OMLT to find adversarial examples.

* `mnist_example_convolutional.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/mnist_example_convolutional.ipynb>`_ trains a convolutional neural network on MNIST and uses OMLT to find adversarial examples.

* `auto-thermal-reformer.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/auto-thermal-reformer.ipynb>`_ develops a neural network surrogate (using sigmoid activations) with data from a process model built using `IDAES-PSE <https://github.com/IDAES/idaes-pse>`_.

* `auto-thermal-reformer-relu.ipynb <https://github.com/cog-imperial/OMLT/blob/main/docs/notebooks/neuralnet/auto-thermal-reformer-relu.ipynb>`_ develops a neural network surrogate (using ReLU activations) with data from a process model built using `IDAES-PSE <https://github.com/IDAES/idaes-pse>`_.

257 changes: 257 additions & 0 deletions docs/notebooks/neuralnet/index_handling.ipynb
@@ -0,0 +1,257 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Index handling\n",
"\n",
"Sometimes moving from layer to layer in a neural network involves rearranging the size from the output of the previous layer to the input of the next layer. This notebook demonstrates how to use this functionality in OMLT.\n",
"\n",
"## Library Setup\n",
"\n",
"Start by importing the libraries used in this project:\n",
"\n",
" - `numpy`: a general-purpose numerical library\n",
" - `omlt`: the package this notebook demonstates.\n",
" \n",
"We import the following classes from `omlt`:\n",
"\n",
" - `NetworkDefinition`: class that contains the nodes in a Neural Network\n",
" - `InputLayer`, `DenseLayer`, `PoolingLayer2D`: the three types of layers used in this example\n",
" - `IndexMapper`: used to reshape the data between layers"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from omlt.neuralnet import NetworkDefinition\n",
"from omlt.neuralnet.layer import IndexMapper, InputLayer, DenseLayer, PoolingLayer2D"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we define a simple network that consists of a max pooling layer and a dense layer:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0\tInputLayer(input_size=[9], output_size=[9])\n",
"1\tPoolingLayer(input_size=[1, 3, 3], output_size=[1, 2, 2], strides=[2, 2], kernel_shape=[2, 2]), pool_func_name=max\n",
"2\tDenseLayer(input_size=[4], output_size=[1])\n"
]
}
],
"source": [
"# define bounds for inputs\n",
"input_size = [9]\n",
"input_bounds = {}\n",
"for i in range(input_size[0]):\n",
" input_bounds[(i)] = (-10.0, 10.0)\n",
"\n",
"net = NetworkDefinition(scaled_input_bounds=input_bounds)\n",
"\n",
"# define the input layer\n",
"input_layer = InputLayer(input_size)\n",
"\n",
"net.add_layer(input_layer)\n",
"\n",
"# define the pooling layer\n",
"input_index_mapper_1 = IndexMapper([9], [1, 3, 3])\n",
"maxpooling_layer = PoolingLayer2D(\n",
" [1, 3, 3],\n",
" [1, 2, 2],\n",
" [2, 2],\n",
" \"max\",\n",
" [2, 2],\n",
" 1,\n",
" input_index_mapper=input_index_mapper_1,\n",
")\n",
"\n",
"net.add_layer(maxpooling_layer)\n",
"net.add_edge(input_layer, maxpooling_layer)\n",
"\n",
"# define the dense layer\n",
"input_index_mapper_2 = IndexMapper([1, 2, 2], [4])\n",
"dense_layer = DenseLayer(\n",
" [4],\n",
" [1],\n",
" activation=\"linear\",\n",
" weights=np.ones([4, 1]),\n",
" biases=np.zeros(1),\n",
" input_index_mapper=input_index_mapper_2,\n",
")\n",
"\n",
"net.add_layer(dense_layer)\n",
"net.add_edge(maxpooling_layer, dense_layer)\n",
"\n",
"for layer_id, layer in enumerate(net.layers):\n",
" print(f\"{layer_id}\\t{layer}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, `input_index_mapper_1` maps outputs of `input_layer` (with size [9]) to the inputs of `maxpooling_layer` (with size [1, 3, 3]), `input_index_mapper_2` maps the outputs of `maxpooling_layer` (with size [1, 2, 2]) to the inputs of `dense_layer` (with size [4]). Given an input, we can evaluate each layer to see how `IndexMapper` works: "
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"outputs of maxpooling_layer:\n",
" [[[5. 6.]\n",
" [8. 9.]]]\n",
"outputs of dense_layer:\n",
" [28.]\n"
]
}
],
"source": [
"x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])\n",
"y1 = maxpooling_layer.eval_single_layer(x)\n",
"print(\"outputs of maxpooling_layer:\\n\", y1)\n",
"y2 = dense_layer.eval_single_layer(y1)\n",
"print(\"outputs of dense_layer:\\n\", y2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Without `IndexMapper`, the output of `maxpooling_layer` is identical to the input of `dense_layer`. When using `IndexMapper`, using `input_indexes_with_input_layer_indexes` can provide the mapping between indexes. Therefore, there is no need to define variables for the inputs of each layer (except for `input_layer`). As shown in the following, we print both input indexes and output indexes for each layer. Also, we give the mapping between indexes of two adjacent layers, where `local_index` corresponds to the input indexes of the current layer and `input_index` corresponds to the output indexes of previous layer."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input indexes of input_layer:\n",
"[(0,), (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,)]\n",
"output indexes of input_layer:\n",
"[(0,), (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,)]\n"
]
}
],
"source": [
"print(\"input indexes of input_layer:\")\n",
"print(input_layer.input_indexes)\n",
"print(\"output indexes of input_layer:\")\n",
"print(input_layer.output_indexes)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input indexes of maxpooling_layer:\n",
"[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 0), (0, 1, 1), (0, 1, 2), (0, 2, 0), (0, 2, 1), (0, 2, 2)]\n",
"output indexes of maxpooling_layer:\n",
"[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]\n",
"input_index_mapping_1:\n",
"local_index: (0, 0, 0) input_index: (0,)\n",
"local_index: (0, 0, 1) input_index: (1,)\n",
"local_index: (0, 0, 2) input_index: (2,)\n",
"local_index: (0, 1, 0) input_index: (3,)\n",
"local_index: (0, 1, 1) input_index: (4,)\n",
"local_index: (0, 1, 2) input_index: (5,)\n",
"local_index: (0, 2, 0) input_index: (6,)\n",
"local_index: (0, 2, 1) input_index: (7,)\n",
"local_index: (0, 2, 2) input_index: (8,)\n"
]
}
],
"source": [
"print(\"input indexes of maxpooling_layer:\")\n",
"print(maxpooling_layer.input_indexes)\n",
"print(\"output indexes of maxpooling_layer:\")\n",
"print(maxpooling_layer.output_indexes)\n",
"print(\"input_index_mapping_1:\")\n",
"for local_index, input_index in maxpooling_layer.input_indexes_with_input_layer_indexes:\n",
" print(\"local_index:\", local_index, \"input_index:\", input_index)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input indexes of dense_layer:\n",
"[(0,), (1,), (2,), (3,)]\n",
"output indexes of dense_layer:\n",
"[(0,)]\n",
"input_index_mapping_2:\n",
"local_index: (0,) input_index: (0, 0, 0)\n",
"local_index: (1,) input_index: (0, 0, 1)\n",
"local_index: (2,) input_index: (0, 1, 0)\n",
"local_index: (3,) input_index: (0, 1, 1)\n"
]
}
],
"source": [
"print(\"input indexes of dense_layer:\")\n",
"print(dense_layer.input_indexes)\n",
"print(\"output indexes of dense_layer:\")\n",
"print(dense_layer.output_indexes)\n",
"print(\"input_index_mapping_2:\")\n",
"for local_index, input_index in dense_layer.input_indexes_with_input_layer_indexes:\n",
" print(\"local_index:\", local_index, \"input_index:\", input_index)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "OMLT",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.16"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
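As a sanity check on the notebook above, the same computation can be reproduced with plain NumPy. This is a hedged sketch, not the OMLT API: a row-major reshape plays the role of `IndexMapper([9], [1, 3, 3])`, an explicit loop performs the 2x2, stride-2 max pooling, and a unit-weight dot product stands in for the dense layer.

```python
import numpy as np

# Input vector of size [9], as in the notebook.
x = np.arange(1.0, 10.0)

# IndexMapper([9], [1, 3, 3]) corresponds to a row-major reshape:
# flat index k maps to multi-index (0, k // 3, k % 3).
img = x.reshape(1, 3, 3)
for k in range(9):
    assert img[0, k // 3, k % 3] == x[k]

# 2x2 max pooling with strides [2, 2] over the 3x3 image gives a [1, 2, 2] output.
pooled = np.empty((1, 2, 2))
for i in range(2):
    for j in range(2):
        pooled[0, i, j] = img[0, 2 * i : 2 * i + 2, 2 * j : 2 * j + 2].max()
print(pooled)  # max values 5, 6, 8, 9, matching the notebook

# IndexMapper([1, 2, 2], [4]) flattens back to [4]; the dense layer has unit
# weights and zero bias, so its output is the sum of the pooled values.
flat = pooled.reshape(4)
dense = flat @ np.ones((4, 1))
print(dense)  # [28.]
```

The row-major convention is the key assumption here; it matches the index pairs printed by `input_indexes_with_input_layer_indexes` in the notebook (e.g. local index `(0, 1, 2)` maps to flat index `(5,)`).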
