diff --git a/README.md b/README.md
index cce20514..02f5913e 100644
--- a/README.md
+++ b/README.md
@@ -2,14 +2,16 @@
## Introduction
-Leaf is a Machine Intelligence Framework engineered by software developers, not
-scientists. It was inspired by the brilliant people behind TensorFlow, Torch,
-Caffe, Rust and numerous research papers and brings modularity, performance and
-portability to deep learning. Leaf is lean and tries to introduce minimal
+Leaf is an open Machine Learning Framework for hackers to build classical, deep
+or hybrid machine learning applications. It was inspired by the brilliant people
+behind TensorFlow, Torch, Caffe, Rust and numerous research papers and brings
+modularity, performance and portability to deep learning.
+
+Leaf has one of the simplest APIs, is lean and tries to introduce minimal
technical debt to your stack.
-Leaf is a few months old, but thanks to its architecture and Rust, it is already one of
-the fastest Machine Intelligence Frameworks in the world.
+Leaf is a few months old, but thanks to its architecture and Rust, it is already
+one of the fastest Machine Intelligence Frameworks available.

@@ -17,13 +19,12 @@ the fastest Machine Intelligence Frameworks in the world.
> See more Deep Neural Networks benchmarks on [Deep Learning Benchmarks][deep-learning-benchmarks-website].
-Leaf is portable. Run it on CPUs, GPUs, FPGAs on machines with an OS or on
+Leaf is portable. Run it on CPUs, GPUs, and FPGAs, on machines with an OS, or on
machines without one. Run it with OpenCL or CUDA. Credit goes to
[Collenchyma][collenchyma] and Rust.
Leaf is part of the [Autumn][autumn] Machine Intelligence Platform, which is
-working on making AI algorithms 100x more computational efficient. It seeks to bring
-real-time, offline AI to smartphones and embedded devices.
+working on making AI algorithms 100x more computationally efficient.
We see Leaf as the core of constructing high-performance machine intelligence
applications. Leaf's design makes it easy to publish independent modules to make
@@ -31,15 +32,10 @@ e.g. deep reinforcement learning, visualization and monitoring, network
distribution, [automated preprocessing][cuticula] or scalable production
deployment easily accessible for everyone.
-For more info, refer to
-* the [Leaf examples][leaf-examples],
-* the [Leaf Documentation][documentation],
-* the [Autumn Website][autumn] or
-* the [Q&A](#qa)
-
[caffe]: https://github.com/BVLC/caffe
[rust]: https://www.rust-lang.org/
[autumn]: http://autumnai.com
+[leaf-book]: http://autumnai.com/leaf/book
[tensorflow]: https://github.com/tensorflow/tensorflow
[benchmarks]: #benchmarks
[leaf-examples]: #examples
@@ -52,22 +48,42 @@ For more info, refer to
## Getting Started
-If you are new to Rust you can install it as detailed [here][rust_download].
-We also recommend taking a look at the [official Getting Started Guide][rust_getting_started].
+### Documentation
+
+To learn how to build classical, deep or hybrid machine learning applications with Leaf, check out the [Leaf - Machine Learning for Hackers][leaf-book] book.
+
+For additional information, see the [Rust API Documentation][documentation] or the [Autumn Website][autumn].
+
+Or start by running the **Leaf examples**.
-If you're using Cargo, just add Leaf to your `Cargo.toml`:
+We provide a [Leaf examples repository][leaf-examples], where we and
+others publish executable machine learning models built with Leaf. It features
+a CLI for easy usage and has a detailed guide in the [project
+README.md][leaf-examples].
+
+Leaf comes with an examples directory as well, which features popular neural
+networks (e.g. Alexnet, Overfeat, VGG). To run them on your machine, just follow
+the install guide, clone this repository, and then run
+
+```bash
+# The examples currently require CUDA support.
+cargo run --release --no-default-features --features cuda --example benchmarks alexnet
+```
+
+[leaf-examples]: https://github.com/autumnai/leaf-examples
+
+### Installation
+
+> Leaf is built in [Rust][rust]. If you are new to Rust, you can install it as detailed [here][rust_download].
+We also recommend taking a look at the [official Rust - Getting Started Guide][rust_getting_started].
+
+To start building a machine learning application (Rust only for now; wrappers are welcome), add Leaf to your `Cargo.toml` if you are using Cargo:
```toml
[dependencies]
leaf = "0.2.0"
```
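+
+For a first taste of the API, here is a minimal sketch of constructing a layer, adapted from the [Leaf book][leaf-book]. It is illustrative only: it assumes the necessary imports and an already-created, Rc-wrapped Collenchyma `backend` handle, and module paths may differ between releases.
+
+```rust
+// Hedged sketch, not a complete program: construct a Sigmoid layer
+// from a LayerConfig, as shown in the Leaf book. `backend` is assumed
+// to be an Rc-wrapped Collenchyma backend created beforehand.
+let mut sigmoid = Layer::from_config(
+    backend.clone(),
+    &LayerConfig::new("sigmoid", LayerType::Sigmoid),
+);
+```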
-If you're using [Cargo Edit][cargo-edit], you can
-call:
-
-```bash
-cargo add leaf
-```
[rust_download]: https://www.rust-lang.org/downloads.html
[rust_getting_started]: https://doc.rust-lang.org/book/getting-started.html
[cargo-edit]: https://github.com/killercup/cargo-edit
@@ -88,24 +104,24 @@ opencl = ["leaf/opencl"]
> More information on the use of feature flags in Leaf can be found in [FEATURE-FLAGS.md](./FEATURE-FLAGS.md)
+### Contributing
-## Examples
+If you want to start hacking on Leaf (e.g.
+ [adding a new `Layer`](http://autumnai.com/leaf/book/create-new-layer.html))
+you should start by forking and cloning the repository.
-We are providing a [Leaf examples repository][leaf-examples], where we and
-others publish executable machine learning models build with Leaf. It features
-a CLI for easy usage and has a detailed guide in the [project
-README.md][leaf-examples].
+We have more instructions to help you get started in the [CONTRIBUTING.md][contributing].
-Leaf comes with an examples directory as well, which features popular neural
-networks (e.g. Alexnet, Overfeat, VGG). To run them on your machine, just follow
-the install guide, clone this repoistory and then run
+We also have a near real-time collaboration culture, which happens
+here on Github and on the [Leaf Gitter Channel][gitter-leaf].
-```bash
-# The examples currently require CUDA support.
-cargo run --release --no-default-features --features cuda --example benchmarks alexnet
-```
+> Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as below, without any additional terms or conditions.
-[leaf-examples]: https://github.com/autumnai/leaf-examples
+[contributing]: CONTRIBUTING.md
+[gitter-leaf]: https://gitter.im/autumnai/leaf
+[mj]: https://twitter.com/mjhirn
+[hobofan]: https://twitter.com/hobofan
+[irc]: https://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust-machine-learning
## Ecosystem / Extensions
@@ -120,7 +136,7 @@ and extensible as possible. More helpful crates you can use with Leaf:
## Support / Contact
-- With a bit of luck, you can find us online on the #rust-machine-learing IRC at irc.mozilla.org,
+- With a bit of luck, you can find us online on the #rust-machine-learning IRC at irc.mozilla.org,
- but we are always approachable on [Gitter/Leaf][gitter-leaf]
- For bugs and feature request, you can create a [Github issue][leaf-issue]
- For more private matters, send us email straight to our inbox: developers@autumnai.com
@@ -128,23 +144,6 @@ and extensible as possible. More helpful crates you can use with Leaf:
[leaf-issue]: https://github.com/autumnai/leaf/issues
-## Contributing
-
-Want to contribute? Awesome! We have [instructions to help you get started][contributing].
-
-Leaf has a near real-time collaboration culture, and it happens here on Github and
-on the [Leaf Gitter Channel][gitter-leaf].
-
-Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0
-license, shall be dual licensed as below, without any additional terms or
-conditions.
-
-[contributing]: CONTRIBUTING.md
-[gitter-leaf]: https://gitter.im/autumnai/leaf
-[mj]: https://twitter.com/mjhirn
-[hobofan]: https://twitter.com/hobofan
-[irc]: https://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust-machine-learning
-
## Changelog
You can find the release history at the [CHANGELOG.md][changelog]. We are using [Clog][clog], the Rust tool for auto-generating CHANGELOG files.
@@ -152,52 +151,6 @@ You can find the release history at the [CHANGELOG.md][changelog]. We are using
[changelog]: CHANGELOG.md
[Clog]: https://github.com/clog-tool/clog-cli
-## Q&A
-
-#### _Why Rust?_
-
-Hardware has just recently become strong enough to support real-world
-usage of machine intelligence e.g. super-human image recognition, self-driving
-cars, etc. To take advantage of the computational power of the underlying
-hardware, from GPUs to clusters, you need a low-level language that allows for
-control of memory. But to make machine intelligence widely accessible you want
-to have a high-level, comfortable abstraction over the underlying hardware.
-
-Rust allows us to cross this chasm.
-Rust promises performance like C/C++ but with safe memory-control. For now we
-can use C Rust wrappers for performant libraries. But in the future Rust
-rewritten libraries will have the advantage of zero-cost safe memory control,
-that will make large, parallel learning networks over CPUs and GPUs more
-feasible and more reliable to develop. The development of these future libraries
-is already under way e.g. [Glium][glium].
-
-On the usability side, Rust offers a trait-system that makes it easy for
-researchers and hobbyists alike to extend and work with Leaf as if it were
-written in a higher-level language such as Ruby, Python, or Java.
-
-#### _Who can use Leaf?_
-
-We develop Leaf under the MIT open source license, which, paired with the easy
-access and performance, makes Leaf a first-choice option for researchers and
-developers alike.
-
-#### _Why did you open source Leaf?_
-
-We believe strongly in machine intelligence and think that it will have a major
-impact on future innovations, products and our society. At Autumn, we experienced
-a lack of common and well engineered tools for machine learning and therefore
-started to create a modular toolbox for machine learning in Rust. We hope that,
-by making our work open source, we will speed up research and development of
-production-ready applications and make that work easier as well.
-
-#### _Who is Autumn?_
-
-Autumn is a startup working on automated decision making. Autumn was started by
-two developers, MJ and Max. The startup is located in Berlin and recently
-received a pre-seed investment from Axel Springer and Plug&Play.
-
-[glium]: https://github.com/tomaka/glium
-
## License
Licensed under either of
diff --git a/doc/book/index.html b/doc/book/index.html
index 8e85f7f1..e2f3717f 100644
--- a/doc/book/index.html
+++ b/doc/book/index.html
@@ -108,7 +108,7 @@
Leaf - Machine Learning for Hackers
classical, stochastic or hybrids, and solvers for executing and optimizing the
model.
This is already the entire API for machine learning with Leaf. To learn how
-this is possible and how to build machine learning applications, refer to
+this is possible and how to build machine learning applications, refer to chapters
2. Layers and 3. Solvers. Enjoy!
Benefits+
Leaf was built with three concepts in mind: accessibility/simplicity,
diff --git a/doc/book/layer-lifecycle.html b/doc/book/layer-lifecycle.html
index 4464e81b..27aa920c 100644
--- a/doc/book/layer-lifecycle.html
+++ b/doc/book/layer-lifecycle.html
@@ -68,26 +68,24 @@
Layer Lifecycle
-
In 2. Layers we have already seen a little bit about how to
-construct a Layer
from a LayerConfig
. In this chapter, we take
-a closer look at what happens inside Leaf when initializing a Layer
when
-running the .forward
of a Layer
and when running the .backward
. In the
-next chapter 2.2 Create a Network we then
-apply our knowledge to construct deep networks via the container layer.
-
Initialization (::from_config
), .forward
and .backward
are the three most
-important methods of a Layer
and describe basically the entire API. Let's
-take a closer look at what happens inside Leaf, when these methods are called.
+
In chapter 2. Layers we saw how to
+construct a simple Layer
from a LayerConfig
. In this chapter, we take
+a closer look at what happens inside Leaf when initializing a Layer
and when running its
+.forward
and .backward
methods. In the next chapter 2.2 Create a Network we
+apply our knowledge to construct deep networks with the container layer.
+
The most important methods of a Layer
are initialization (::from_config
), .forward
and .backward
.
+They basically describe the entire API, so let's take a closer look at what happens inside Leaf when these methods are called.
Initialization
-
A layer is constructed from a LayerConfig
via the Layer::from_config
+
A layer is constructed from a LayerConfig
with the Layer::from_config
method, which returns a fully initialized Layer
.
let mut sigmoid: Layer = Layer::from_config(backend.clone(), &LayerConfig::new("sigmoid", LayerType::Sigmoid))
let mut alexnet: Layer = Layer::from_config(backend.clone(), &LayerConfig::new("alexnet", LayerType::Sequential(cfg)))
In the example above, the first layer has a Sigmoid worker
-(LayerType::Sigmoid
). The second layer has a Sequential worker.
-Although both Layer::from_config
methods, return a Layer
, the behavior of
-the Layer
depends on the LayerConfig
it was constructed with. The
-Layer::from_config
calls internally the worker_from_config
method, which
+(LayerType::Sigmoid
) and the second layer has a Sequential worker.
+Although both ::from_config
methods return a Layer
, the behavior of
+that Layer
depends on the LayerConfig
it was constructed with. The
+Layer::from_config
internally calls the worker_from_config
method, which
constructs the specific worker defined by the LayerConfig
.
fn worker_from_config(backend: Rc<B>, config: &LayerConfig) -> Box<ILayer<B>> {
match config.layer_type.clone() {
@@ -99,35 +97,34 @@ Initialization
}
}
-
The layer specific ::from_config
(if available or needed) then takes care of
+
The layer-specific ::from_config
(if available or needed) then takes care of
initializing the worker struct, allocating memory for weights and so on.
-
In case the worker layer is a container layer, its ::from_config
takes
+
If the worker is a container layer, its ::from_config
takes
care of initializing all the LayerConfig
s it contains (which were added via its
-.add_layer
method) and connecting them in
-the order they were provided to the LayerConfig
of the container.
-
Every .forward
or .backward
call that is now made to the returned Layer
is
-sent to the worker.
+
.add_layer
method) and connecting them in the order they were provided.
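+As an illustration, a container could be assembled like the following hedged sketch (the builder methods follow the Sequential example above and the Leaf examples repository; they may differ between releases):
+let mut cfg = SequentialConfig::default();
+// Layers are connected in the order they are added.
+cfg.add_layer(LayerConfig::new("first", LayerType::Sigmoid));
+cfg.add_layer(LayerConfig::new("second", LayerType::Sigmoid));
+let net = Layer::from_config(backend.clone(), &LayerConfig::new("net", LayerType::Sequential(cfg)));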
+
Every .forward
or .backward
call that is made on the returned Layer
is
+run by the internal worker.
Forward
-
The forward
method of a Layer
sends the input through the constructed
+
The forward
method of a Layer
threads the input through the constructed
network and returns the output of the network's final layer.
The .forward
method does three things:
- Reshape the input data if necessary
-- Sync the input/weights to the device were the computation happens. This step
-removes the worker layer from the obligation to care about memory synchronization.
-- Call the
forward
method of the worker layer.
+- Sync the input/weights to the device where the computation happens. This step
+removes the need for the worker layer to care about memory synchronization.
+- Call the
forward
method of the internal worker layer.
-
In case, the worker layer is a container layer, the .forward
method of the
-container layer takes care of calling the .forward
methods of its managed
+
If the worker layer is a container layer, the .forward
method
+takes care of calling the .forward
methods of its managed
layers in the right order.
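+In code, a forward pass can be sketched as below. This is a hedged illustration: the exact tensor types are omitted, and in Leaf the inputs are shared, lock-wrapped tensors.
+// Hedged sketch: push the input through the network and receive the
+// output of the final layer. Syncing data to the right device happens
+// inside .forward, not in user code.
+let output = net.forward(&[input_tensor]);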
Backward
-
The .backward
of a Layer
works quite similar to its .forward
. Although it
-does not need to reshape the input. The .backward
computes
-the gradient with respect to the input and the gradient w.r.t. the parameters but
-only returns the gradient w.r.t the input as only that is needed to compute the
+
The .backward
method of a Layer
works similarly to .forward
, except that it
+does not need to reshape the input. The .backward
method computes
+the gradient with respect to the input as well as the gradient w.r.t. the parameters. However,
+the method only returns the input gradient because that is all that is needed to compute the
gradient of the entire network via the chain rule.
-
In case the worker layer is a container layer, the .backward
method of the
-container layer takes care of calling the .backward_input
and
+
If the worker layer is a container layer, the .backward
method
+takes care of calling the .backward_input
and
.backward_parameter
methods of its managed layers in the right order.
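+Correspondingly, a backward pass can be sketched as below (again a hedged illustration with simplified types):
+// Hedged sketch: only the gradient w.r.t. the input is returned;
+// the parameter gradients stay inside the layer for the solver to use.
+let input_gradient = net.backward(&[output_gradient]);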
diff --git a/doc/book/layers.html b/doc/book/layers.html
index 3da6429e..46cd5aee 100644
--- a/doc/book/layers.html
+++ b/doc/book/layers.html
@@ -69,7 +69,7 @@
Layers
What is a Layer?
-
Layers are the highest-level and only building
+
Layers are the only building
blocks in Leaf. As we will see later on, everything is a layer. Even when
we construct networks, we are still just
working with layers composed of smaller layers. This makes the API clean and expressive.
@@ -157,15 +157,15 @@
Container Layers
can be found at
src/layers/container.
Why Layers?
-
The benefit of using a layer-based design approach is, that it allows for a very expressive
+
The benefit of using a layer-based design approach is that it allows for a very expressive
setup that can represent, as far as we know, any machine learning algorithm.
That makes Leaf a framework that can be used to construct practical machine
learning applications that combine different paradigms.
Other machine learning frameworks take a symbolic instead of a layered approach.
-For Leaf, we decided against it, as we found it easier for developers to handle
-layers, than mathematical expressions. More complex algorithms like LSTMs are
-also harder to replicate in a symbolic framework than with layered ones. We
-believe that Leafs layer approach strikes a great balance between,
+For Leaf we decided against it, as we found it easier for developers to work with
+layers than mathematical expressions. More complex algorithms like LSTMs are
+also harder to replicate in a symbolic framework. We
+believe that Leaf's layer approach strikes a great balance between
expressiveness, usability and performance.
diff --git a/doc/book/leaf.html b/doc/book/leaf.html
index a9f39cda..6e16b328 100644
--- a/doc/book/leaf.html
+++ b/doc/book/leaf.html
@@ -109,7 +109,7 @@
Leaf - Machine Learning for Hackers
classical, stochastic or hybrids, and solvers for executing and optimizing the
model.
This is already the entire API for machine learning with Leaf. To learn how
-this is possible and how to build machine learning applications, refer to
+this is possible and how to build machine learning applications, refer to chapters
2. Layers and 3. Solvers. Enjoy!
Benefits+
Leaf was built with three concepts in mind: accessibility/simplicity,
diff --git a/doc/book/print.html b/doc/book/print.html
index 497ae00a..d8e75ab5 100644
--- a/doc/book/print.html
+++ b/doc/book/print.html
@@ -109,7 +109,7 @@
Leaf - Machine Learning for Hackers
classical, stochastic or hybrids, and solvers for executing and optimizing the
model.
This is already the entire API for machine learning with Leaf. To learn how
-this is possible and how to build machine learning applications, refer to
+this is possible and how to build machine learning applications, refer to chapters
2. Layers and 3. Solvers. Enjoy!
Benefits+
Leaf was built with three concepts in mind: accessibility/simplicity,
@@ -140,7 +140,7 @@
License
Whatever strikes your fancy.
Layers
What is a Layer?
-
Layers are the highest-level and only building
+
Layers are the only building
blocks in Leaf. As we will see later on, everything is a layer. Even when
we construct networks, we are still just
working with layers composed of smaller layers. This makes the API clean and expressive.
@@ -228,37 +228,35 @@
Container Layers
can be found at
src/layers/container.
Why Layers?
-
The benefit of using a layer-based design approach is, that it allows for a very expressive
+
The benefit of using a layer-based design approach is that it allows for a very expressive
setup that can represent, as far as we know, any machine learning algorithm.
That makes Leaf a framework that can be used to construct practical machine
learning applications that combine different paradigms.
Other machine learning frameworks take a symbolic instead of a layered approach.
-For Leaf, we decided against it, as we found it easier for developers to handle
-layers, than mathematical expressions. More complex algorithms like LSTMs are
-also harder to replicate in a symbolic framework than with layered ones. We
-believe that Leafs layer approach strikes a great balance between,
+For Leaf we decided against it, as we found it easier for developers to work with
+layers than mathematical expressions. More complex algorithms like LSTMs are
+also harder to replicate in a symbolic framework. We
+believe that Leaf's layer approach strikes a great balance between
expressiveness, usability and performance.
Layer Lifecycle
-
In 2. Layers we have already seen a little bit about how to
-construct a Layer
from a LayerConfig
. In this chapter, we take
-a closer look at what happens inside Leaf when initializing a Layer
when
-running the .forward
of a Layer
and when running the .backward
. In the
-next chapter 2.2 Create a Network we then
-apply our knowledge to construct deep networks via the container layer.
-
Initialization (::from_config
), .forward
and .backward
are the three most
-important methods of a Layer
and describe basically the entire API. Let's
-take a closer look at what happens inside Leaf, when these methods are called.
+
In chapter 2. Layers we saw how to
+construct a simple Layer
from a LayerConfig
. In this chapter, we take
+a closer look at what happens inside Leaf when initializing a Layer
and when running its
+.forward
and .backward
methods. In the next chapter 2.2 Create a Network we
+apply our knowledge to construct deep networks with the container layer.
+
The most important methods of a Layer
are initialization (::from_config
), .forward
and .backward
.
+They basically describe the entire API, so let's take a closer look at what happens inside Leaf when these methods are called.
Initialization
-
A layer is constructed from a LayerConfig
via the Layer::from_config
+
A layer is constructed from a LayerConfig
with the Layer::from_config
method, which returns a fully initialized Layer
.
let mut sigmoid: Layer = Layer::from_config(backend.clone(), &LayerConfig::new("sigmoid", LayerType::Sigmoid))
let mut alexnet: Layer = Layer::from_config(backend.clone(), &LayerConfig::new("alexnet", LayerType::Sequential(cfg)))
In the example above, the first layer has a Sigmoid worker
-(LayerType::Sigmoid
). The second layer has a Sequential worker.
-Although both Layer::from_config
methods, return a Layer
, the behavior of
-the Layer
depends on the LayerConfig
it was constructed with. The
-Layer::from_config
calls internally the worker_from_config
method, which
+(LayerType::Sigmoid
) and the second layer has a Sequential worker.
+Although both ::from_config
methods return a Layer
, the behavior of
+that Layer
depends on the LayerConfig
it was constructed with. The
+Layer::from_config
internally calls the worker_from_config
method, which
constructs the specific worker defined by the LayerConfig
.
fn worker_from_config(backend: Rc<B>, config: &LayerConfig) -> Box<ILayer<B>> {
match config.layer_type.clone() {
@@ -270,35 +268,34 @@ Initialization
}
}
-
The layer specific ::from_config
(if available or needed) then takes care of
+
The layer-specific ::from_config
(if available or needed) then takes care of
initializing the worker struct, allocating memory for weights and so on.
-
In case the worker layer is a container layer, its ::from_config
takes
+
If the worker is a container layer, its ::from_config
takes
care of initializing all the LayerConfig
s it contains (which were added via its
-.add_layer
method) and connecting them in
-the order they were provided to the LayerConfig
of the container.
-
Every .forward
or .backward
call that is now made to the returned Layer
is
-sent to the worker.
+
.add_layer
method) and connecting them in the order they were provided.
+
Every .forward
or .backward
call that is made on the returned Layer
is
+run by the internal worker.
Forward
-
The forward
method of a Layer
sends the input through the constructed
+
The forward
method of a Layer
threads the input through the constructed
network and returns the output of the network's final layer.
The .forward
method does three things:
- Reshape the input data if necessary
-- Sync the input/weights to the device were the computation happens. This step
-removes the worker layer from the obligation to care about memory synchronization.
-- Call the
forward
method of the worker layer.
+- Sync the input/weights to the device where the computation happens. This step
+removes the need for the worker layer to care about memory synchronization.
+- Call the
forward
method of the internal worker layer.
-
In case, the worker layer is a container layer, the .forward
method of the
-container layer takes care of calling the .forward
methods of its managed
+
If the worker layer is a container layer, the .forward
method
+takes care of calling the .forward
methods of its managed
layers in the right order.
Backward
-
The .backward
of a Layer
works quite similar to its .forward
. Although it
-does not need to reshape the input. The .backward
computes
-the gradient with respect to the input and the gradient w.r.t. the parameters but
-only returns the gradient w.r.t the input as only that is needed to compute the
+
The .backward
method of a Layer
works similarly to .forward
, except that it
+does not need to reshape the input. The .backward
method computes
+the gradient with respect to the input as well as the gradient w.r.t. the parameters. However,
+the method only returns the input gradient because that is all that is needed to compute the
gradient of the entire network via the chain rule.
-
In case the worker layer is a container layer, the .backward
method of the
-container layer takes care of calling the .backward_input
and
+
If the worker layer is a container layer, the .backward
method
+takes care of calling the .backward_input
and
.backward_parameter
methods of its managed layers in the right order.
Create a Network
In the previous chapters, we learned that in Leaf everything is built by