v0.3.0

0.3.0 (2023-08-04)

Features

🪜 feat: BCE1D, BCE2D, VQ2D & VQSeq as losses (#101)
🪜 layer_seq: VQSeq (#100)
🪜 layer_2d: loosen range constraint in ColorJitterHSV (#98)
🪜 layer_2d: SimilarityError2D & dirty losses (#97)
🪜 layer_2d: ColorJitterHSV, Image & ImageTests (#93)
🪜 layer_2d: Flip2D & config_kernels (#92)
🪜 layer_2d: SimilarityBatchError2D (#88)
🪜 layer_2d: Normalize2D (#87)
🪜 layer_2d: SelfCorrelate2D (#86)
🪜 layer_2d: VQ2D (#81)
🪜 layer_seq: Adding new layer SelectNeuronsSeq (#77)
⚙️ core: GELU activation function (#73), sketched after this list
🪜 layer_seq: ValueSeq (#69)
🪜 layer_seq: SoftmaxSeq (#68)
🪜 layer_seq: QuerySeq (#67)
🪜 layer_seq: LayerNormSeq & LayerNormalization (#66)
🪜 layer_seq: FullyConnectedSeq (#65)
🪜 layer_seq: Constant12Seq & Constant2Seq (#64)
🪜 layer_seq: Concat1Seq & Concat2Seq (#63)
🪜 layer_seq: SumSeq (#62)
🪜 layer_2d: MSE2D & LayerOutput2D (#61)
🪜 layer_seq: FullyConnectedPatch & base classes (#60)
🪜 layer_2d: Constant2D (#56)
🪜 layer_2d: AdaIN (#55)
🪜 layer_2d: InstanceNorm2D & InstanceNormalization (#54)
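
The GELU activation added in #73 (and stabilized by #83 below) is commonly implemented with the tanh approximation. The following standalone Swift snippet is a minimal sketch of that formula for reference only; the `gelu` helper is illustrative and is not the GrAIdient API.

```swift
import Foundation

/// Tanh approximation of GELU:
/// GELU(x) ≈ 0.5 * x * (1 + tanh(sqrt(2/π) * (x + 0.044715 * x³)))
func gelu(_ x: Double) -> Double {
    let c = sqrt(2.0 / Double.pi)
    return 0.5 * x * (1.0 + tanh(c * (x + 0.044715 * pow(x, 3))))
}

// Sanity check: GELU is ≈ 0 for large negative inputs,
// ≈ x for large positive inputs, and exactly 0 at the origin.
print(gelu(-5.0)) // ≈ 0
print(gelu(0.0))  // 0
print(gelu(5.0))  // ≈ 5
```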

Bug Fixes

🐛 layer_2d: align Convolution & Deconvolution on PyTorch (#84)
🐛 fix: numerical stability of tanh for GELU (#83)
🐛 fix: numerical instability of Softmax (#76); see the stability sketch after this list
🐛 fix: update ValueSeq operation (#72)
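
The Softmax fixes in #76 and #70 concern numerical stability. The standard remedy is to subtract the maximum logit before exponentiating, which leaves the result unchanged but prevents overflow. The Swift snippet below is a minimal sketch of that trick; the `softmax` function is illustrative only and does not reproduce GrAIdient's actual kernel.

```swift
import Foundation

/// Numerically stable softmax: shifting by the maximum logit keeps every
/// exponent ≤ 0, so exp() cannot overflow, while the result is unchanged
/// because softmax is invariant to adding a constant to all inputs.
func softmax(_ logits: [Double]) -> [Double] {
    let maxLogit = logits.max() ?? 0.0
    let exps = logits.map { exp($0 - maxLogit) }
    let sum = exps.reduce(0.0, +)
    return exps.map { $0 / sum }
}

// Without the shift, exp(1000) overflows to infinity and yields NaNs;
// with it, the largest exponent is exp(0) = 1 and the output stays finite.
print(softmax([1000.0, 1000.0, 999.0]))
```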

Miscellaneous Tasks

🔨 refactor: throwable init (#103)
🔨 refactor: dims checks for inputs and outputs (#102)
🔨 layer_2d: expose indices in VQ2D (#99)
🔨 core: LayerWeightInit (#96)
🚨 test: FlowAccumulateTrainer (#95)
🚨 examples: compare training with PyTorch (#94)
🔨 layer_2d: remove computeVQ (#91)
🔨 layer_2d: API for random transforms (#90)
🚀 perf: enhance Normalize122D with reduce (#89)
🚨 integration: resize alignment with PyTorch (#85)
🔨 layer_seq: SelectSeq (#82)
🚀 examples: AutoEncoder models (#79)
🚀 layer_seq: factorize by nbHeads (#78)
🚀 examples: make Transformer example very simple (#75)
🚀 examples: adding Transformer training example (#74)
🚨 integration: update & validate LayerNormSeq (#71)
🚨 integration: validate MultiHeadAttention & fix Softmax stability (#70)