🔧 chore: release 0.3.0
jean-francoisreboud authored Aug 4, 2023
2 parents 6c9f5fe + 5756300 commit bc563a7
Showing 261 changed files with 44,663 additions and 3,982 deletions.
2 changes: 1 addition & 1 deletion .swiftpm/xcode/xcshareddata/xcschemes/GrAIdient.xcscheme
@@ -114,7 +114,7 @@
</Testables>
</TestAction>
<LaunchAction
-buildConfiguration = "Debug"
+buildConfiguration = "Release"
selectedDebuggerIdentifier = "Xcode.DebuggerFoundation.Debugger.LLDB"
selectedLauncherIdentifier = "Xcode.DebuggerFoundation.Launcher.LLDB"
launchStyle = "0"
3 changes: 2 additions & 1 deletion AUTHORS
@@ -5,4 +5,5 @@
# Name/Organization <email address>
#

-Jean-François Reboud <[email protected]>
+Peden Aurélien <[email protected]>
+Reboud Jean-François <[email protected]>
59 changes: 58 additions & 1 deletion CHANGELOG.md
@@ -4,6 +4,63 @@ All notable changes to this project will be documented in this file.

## [unreleased]

## 0.3.0 (2023-08-04)

### Features

🪜 **feat:** BCE1D, BCE2D, VQ2D & VQSeq as losses ([#101](https://github.com/owkin/GrAIdient/pull/101))\
🪜 **layer_seq:** VQSeq ([#100](https://github.com/owkin/GrAIdient/pull/100))\
🪜 **layer_2d:** loosen range constraint in ColorJitterHSV ([#98](https://github.com/owkin/GrAIdient/pull/98))\
🪜 **layer_2d:** SimilarityError2D & dirty losses ([#97](https://github.com/owkin/GrAIdient/pull/97))\
🪜 **layer_2d:** ColorJitterHSV, Image & ImageTests ([#93](https://github.com/owkin/GrAIdient/pull/93))\
🪜 **layer_2d:** Flip2D & config_kernels ([#92](https://github.com/owkin/GrAIdient/pull/92))\
🪜 **layer_2d:** SimilarityBatchError2D ([#88](https://github.com/owkin/GrAIdient/pull/88))\
🪜 **layer_2d:** Normalize2D ([#87](https://github.com/owkin/GrAIdient/pull/87))\
🪜 **layer_2d:** SelfCorrelate2D ([#86](https://github.com/owkin/GrAIdient/pull/86))\
🪜 **layer_2d:** VQ2D ([#81](https://github.com/owkin/GrAIdient/pull/81))\
🪜 **layer_seq:** adding new layer SelectNeuronsSeq ([#77](https://github.com/owkin/GrAIdient/pull/77))\
⚙️ **core:** GELU activation function ([#73](https://github.com/owkin/GrAIdient/pull/73))\
🪜 **layer_seq:** ValueSeq ([#69](https://github.com/owkin/GrAIdient/pull/69))\
🪜 **layer_seq:** SoftmaxSeq ([#68](https://github.com/owkin/GrAIdient/pull/68))\
🪜 **layer_seq:** QuerySeq ([#67](https://github.com/owkin/GrAIdient/pull/67))\
🪜 **layer_seq:** LayerNormSeq & LayerNormalization ([#66](https://github.com/owkin/GrAIdient/pull/66))\
🪜 **layer_seq:** FullyConnectedSeq ([#65](https://github.com/owkin/GrAIdient/pull/65))\
🪜 **layer_seq:** Constant12Seq & Constant2Seq ([#64](https://github.com/owkin/GrAIdient/pull/64))\
🪜 **layer_seq:** Concat1Seq & Concat2Seq ([#63](https://github.com/owkin/GrAIdient/pull/63))\
🪜 **layer_seq:** SumSeq ([#62](https://github.com/owkin/GrAIdient/pull/62))\
🪜 **layer_2d:** MSE2D & LayerOutput2D ([#61](https://github.com/owkin/GrAIdient/pull/61))\
🪜 **layer_seq:** FullyConnectedPatch & base classes ([#60](https://github.com/owkin/GrAIdient/pull/60))\
🪜 **layer_2d:** Constant2D ([#56](https://github.com/owkin/GrAIdient/pull/56))\
🪜 **layer_2d:** AdaIN ([#55](https://github.com/owkin/GrAIdient/pull/55))\
🪜 **layer_2d:** InstanceNorm2D & InstanceNormalization ([#54](https://github.com/owkin/GrAIdient/pull/54))

### Bug Fixes

🐛 **layer_2d:** align Convolution & Deconvolution on PyTorch ([#84](https://github.com/owkin/GrAIdient/pull/84))\
🐛 **fix:** numerical stability of tanh for GELU ([#83](https://github.com/owkin/GrAIdient/pull/83))\
🐛 **fix:** numerical instability of Softmax ([#76](https://github.com/owkin/GrAIdient/pull/76))\
🐛 **fix:** update ValueSeq operation ([#72](https://github.com/owkin/GrAIdient/pull/72))
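The stability fixes above rest on standard numerical tricks. As a rough sketch in Swift of the general techniques (an illustration, not the actual GrAIdient patches): softmax is stabilized by subtracting the max logit before exponentiating, and GELU uses a tanh-based approximation whose argument saturates smoothly.

```swift
import Foundation

// Stable softmax: subtracting the max logit keeps every exponent <= 0,
// so exp() cannot overflow even for very large inputs.
func softmax(_ x: [Double]) -> [Double] {
    let m = x.max() ?? 0.0
    let exps = x.map { exp($0 - m) }
    let sum = exps.reduce(0, +)
    return exps.map { $0 / sum }
}

// tanh-based GELU approximation; tanh saturates for large |x|, which is
// where a naive formulation can lose numerical precision.
func gelu(_ x: Double) -> Double {
    let c = (2.0 / Double.pi).squareRoot()
    return 0.5 * x * (1.0 + tanh(c * (x + 0.044715 * x * x * x)))
}
```

With this formulation, `softmax([1000.0, 1000.0])` returns `[0.5, 0.5]` instead of overflowing to NaN.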

### Miscellaneous Tasks

🔨 **refactor:** throwable init ([#103](https://github.com/owkin/GrAIdient/pull/103))\
🔨 **refactor:** dims checks for inputs and outputs ([#102](https://github.com/owkin/GrAIdient/pull/102))\
🔨 **layer_2d:** expose indices in VQ2D ([#99](https://github.com/owkin/GrAIdient/pull/99))\
🔨 **core:** LayerWeightInit ([#96](https://github.com/owkin/GrAIdient/pull/96))\
🚨 **test:** FlowAccumulateTrainer ([#95](https://github.com/owkin/GrAIdient/pull/95))\
🚨 **examples:** compare training with PyTorch ([#94](https://github.com/owkin/GrAIdient/pull/94))\
🔨 **layer_2d:** remove computeVQ ([#91](https://github.com/owkin/GrAIdient/pull/91))\
🔨 **layer_2d:** API for random transforms ([#90](https://github.com/owkin/GrAIdient/pull/90))\
🚀 **perf:** enhance Normalize122D with reduce ([#89](https://github.com/owkin/GrAIdient/pull/89))\
🚨 **integration:** resize alignment with PyTorch ([#85](https://github.com/owkin/GrAIdient/pull/85))\
🔨 **layer_seq:** SelectSeq ([#82](https://github.com/owkin/GrAIdient/pull/82))\
🚀 **examples:** AutoEncoder models ([#79](https://github.com/owkin/GrAIdient/pull/79))\
🚀 **layer_seq:** factorize by nbHeads ([#78](https://github.com/owkin/GrAIdient/pull/78))\
🚀 **examples:** make Transformer example very simple ([#75](https://github.com/owkin/GrAIdient/pull/75))\
🚀 **examples:** adding Transformer training example ([#74](https://github.com/owkin/GrAIdient/pull/74))\
🚨 **integration:** update & validate LayerNormSeq ([#71](https://github.com/owkin/GrAIdient/pull/71))\
🚨 **integration:** validate MultiHeadAttention & fix Softmax stability ([#70](https://github.com/owkin/GrAIdient/pull/70))

## 0.2.0 (2023-02-27)

### Features
@@ -54,7 +111,7 @@ All notable changes to this project will be documented in this file.
🔨 **refactor:** remove transaction ([#31](https://github.com/owkin/GrAIdient/pull/31))\
🚨 **integration:** activate DecorrelateRGB in test ([#29](https://github.com/owkin/GrAIdient/pull/29))\
🚨 **integration:** test IDFT and complex numbers ([#28](https://github.com/owkin/GrAIdient/pull/28))\
-🔨 **tests:** factorize transform tests ([#26](https://github.com/owkin/GrAIdient/pull/26))\
+🔨 **test:** factorize transform tests ([#26](https://github.com/owkin/GrAIdient/pull/26))\
👷 **ci:** remove swift action ([#20](https://github.com/owkin/GrAIdient/pull/20))\
👷 **ci:** remove LFS ([#17](https://github.com/owkin/GrAIdient/pull/17))

3 changes: 3 additions & 0 deletions Docs/Architecture/GrAITests.md
@@ -18,6 +18,9 @@ that every layer, optimizer, activation function ... is tested.
the execution context
(the model CPU will be executed on the GPU and vice versa)

- accumulate tests: compare gradients computed in CPU and GPU
after accumulating them

- inference tests: compare loss in CPU and GPU during the inference phase

- load tests: compare loss in CPU and GPU after loading models from the disk
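The comparison behind these CPU-vs-GPU checks amounts to an element-wise tolerance test on the two result buffers. A hypothetical Swift helper sketching the idea (not the actual GrAITests harness):

```swift
// Sketch of the CPU/GPU comparison idea: two gradient buffers must
// agree element-wise within a tolerance. Illustrative helper only,
// not part of GrAITests.
func gradientsMatch(
    _ cpu: [Double], _ gpu: [Double], tol: Double = 1e-5
) -> Bool {
    // Mismatched sizes can never match.
    guard cpu.count == gpu.count else { return false }
    return zip(cpu, gpu).allSatisfy { abs($0 - $1) <= tol }
}
```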
37 changes: 18 additions & 19 deletions Docs/Concepts/MODEL.md
@@ -196,16 +196,29 @@ cnn.weights = myCNNWeights
classifier.weights = myClassifierWeights
```

### Generate Model's Weights

It is also possible not to set the `weights` at all and to have the
`Model` generate them through its `weightInitClass` API.
The following initialization schemes are currently available:

- Xavier uniform
- Xavier normal
- Kaiming uniform
- Kaiming normal

By default, the Xavier uniform initialization scheme is used.
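These four schemes follow the usual fan-in/fan-out formulas. A Swift sketch of the bounds and standard deviations involved (standard definitions; how `weightInitClass` applies them internally is not shown here):

```swift
import Foundation

// Standard Xavier (Glorot) and Kaiming (He) initialization parameters.
// Uniform schemes draw from [-bound, bound]; normal schemes draw from
// a Gaussian with the given standard deviation.
func xavierUniformBound(fanIn: Int, fanOut: Int) -> Double {
    (6.0 / Double(fanIn + fanOut)).squareRoot()
}
func xavierNormalStd(fanIn: Int, fanOut: Int) -> Double {
    (2.0 / Double(fanIn + fanOut)).squareRoot()
}
func kaimingUniformBound(fanIn: Int) -> Double {
    (6.0 / Double(fanIn)).squareRoot()
}
func kaimingNormalStd(fanIn: Int) -> Double {
    (2.0 / Double(fanIn)).squareRoot()
}
```

Both families scale the weights so activation variance is preserved across layers; Kaiming drops the fan-out term to compensate for ReLU-style activations.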

### Model Loaded from the Disk

-When a model has been loaded from the disk
-(see [previous paragraph](#initialize-links)), there is no need to use
-the `weights` API: the cache for the weights and biases values is already set
-with the values loaded from the disk.
+Note that when a model is loaded from the disk
+(see [previous paragraph](#initialize-links)), its weights cache is set up
+automatically: there is no need to use the `weights` API in this case.

### Initialize "Hard Resources"

-The last thing to do is to initialize the "hard resources".
+Once the cache for the weights is properly set up,
+we have to initialize the "hard resources".
These are resources that may be time-consuming to initialize,
depending on the size of the model:

Expand Down Expand Up @@ -234,20 +247,6 @@ be fully loaded into the kernel of the different layers.
- GPU mode: the weights, biases... will be uploaded
to the GPU device

-So now, what would have happened if the cache for weights and biases had
-not been set earlier?
-
-=> The values for weights would have been initialized "randomly"
-while the values for biases would have been initialized to 0.
-
-To cap it all, the `weights` API is not necessary in the following situations:
-
-- The model has been loaded from the disk
-- We want to train a model from scratch
-
-But the `initKernel` API is always necessary for the model to be ready to
-train/run.

## Model Transformation

In some scenario, we need to transform the model and preserve the
70 changes: 70 additions & 0 deletions Docs/Examples/AutoEncoder.md
@@ -0,0 +1,70 @@
# 🚀 Auto Encoder Example

This is the documentation of a
[toy Auto Encoder model](../../Tests/GrAIExamples/AutoEncoderExample.swift),
trained on the GPU.
The dataset used is CIFAR 10.

We want to train the model to encode and generate images of ships (label 8).

Here is a subset of the data input images.

<table align="center" cellspacing="0" cellpadding="0">
<tr>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_0.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_1.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_2.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_3.png"></td>
</tr>
<tr>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_4.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_5.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_6.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_7.png"></td>
</tr>
<tr>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_8.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_9.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_10.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_11.png"></td>
</tr>
<tr>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_12.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_13.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_14.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_15.png"></td>
</tr>
</table>

## Setup

This example has some `Python` dependencies. In order to run
the example, we first have to set up the environment:

```bash
conda create --name graiexamples python=3.9
conda activate graiexamples
cd Tests/GrAIExamples/Base
pip install -e .
```

Now, let us run the tests from Xcode or a `bash` command (here with compiler
optimization):

```bash
swift test -c release --filter GrAIExamples
```

Finally, we can clean up the environment 🌍

```bash
conda deactivate
conda env remove --name graiexamples
```

## Steps

1. Dump the training dataset.
1. Train a simple auto encoder model.
1. Train a UNet-like auto encoder model.
1. Train a StyleGAN-like auto encoder model.
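All three variants optimize the same reconstruction objective: make the decoded image close to the input. A generic Swift sketch of that objective (plain mean squared error; the example's actual GrAIdient loss layer is not reproduced here):

```swift
// Generic auto encoder reconstruction objective: mean squared error
// between the flattened input image and its reconstruction.
// Illustrative helper, not the example's actual GrAIdient code.
func reconstructionLoss(_ input: [Double], _ output: [Double]) -> Double {
    precondition(input.count == output.count, "shapes must match")
    let squaredErrors = zip(input, output).map { ($0 - $1) * ($0 - $1) }
    return squaredErrors.reduce(0, +) / Double(input.count)
}
```

A perfect reconstruction yields a loss of 0; training drives the decoder toward that minimum on ship images.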
4 changes: 3 additions & 1 deletion Docs/Examples/EXAMPLES.md
@@ -9,4 +9,6 @@ or in the [GitHub](https://github.com/owkin/GrAIdient/actions) CI

The following examples are currently available:

-- [VGGExample](VGG.md)
+- [VGG](VGG.md)
- [Vision Transformer](VisionTransformer.md)
- [Auto Encoder](AutoEncoder.md)
64 changes: 32 additions & 32 deletions Docs/Examples/VGG.md
@@ -11,48 +11,48 @@ Here is a subset of images we find for the label 8 (ships) vs label 5 (dogs).

<table align="center" cellspacing="0" cellpadding="0">
<tr>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_0.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_1.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_2.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_3.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_0.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_1.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_2.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_3.png"></td>
<td> </td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_0.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_1.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_2.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_3.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_0.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_1.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_2.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_3.png"></td>
</tr>
<tr>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_4.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_5.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_6.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_7.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_4.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_5.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_6.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_7.png"></td>
<td> </td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_4.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_5.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_6.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_7.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_4.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_5.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_6.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_7.png"></td>
</tr>
<tr>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_8.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_9.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_10.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_11.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_8.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_9.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_10.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_11.png"></td>
<td> </td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_8.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_9.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_10.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_11.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_8.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_9.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_10.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_11.png"></td>
</tr>
<tr>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_12.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_13.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_14.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR8_15.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_12.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_13.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_14.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR8_15.png"></td>
<td> </td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_12.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_13.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_14.png"></td>
<td><img src="../../Tests/GrAIExamples/Base/data/out/CIFAR5_15.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_12.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_13.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_14.png"></td>
<td><img src="../../Tests/data/out/cifar-10/CIFAR5_15.png"></td>
</tr>
</table>
