replaced how_to with faq (apache#9575)
* replaced how_to with faq

* fixed broken links from 197 report
thinksanky authored and eric-haibin-lin committed Jan 27, 2018
1 parent 671df77 commit 50850af
Showing 56 changed files with 107 additions and 109 deletions.
2 changes: 1 addition & 1 deletion R-package/README.md
@@ -24,7 +24,7 @@ options(repos = cran)
install.packages("mxnet")
```

-To use the GPU version or to use it on Linux, please follow [Installation Guide](http://mxnet.io/get_started/install.html)
+To use the GPU version or to use it on Linux, please follow [Installation Guide](http://mxnet.io/install/index.html)

License
-------
12 changes: 6 additions & 6 deletions README.md
@@ -36,13 +36,13 @@ What's New
* [MKLDNN for Faster CPU Performance](./MKL_README.md)
* [MXNet Memory Monger, Training Deeper Nets with Sublinear Memory Cost](https://github.com/dmlc/mxnet-memonger)
* [Tutorial for NVidia GTC 2016](https://github.com/dmlc/mxnet-gtc-tutorial)
-* [Embedding Torch layers and functions in MXNet](https://mxnet.incubator.apache.org/how_to/torch.html)
+* [Embedding Torch layers and functions in MXNet](https://mxnet.incubator.apache.org/faq/torch.html)
* [MXNet.js: Javascript Package for Deep Learning in Browser (without server)
](https://github.com/dmlc/mxnet.js/)
* [Design Note: Design Efficient Deep Learning Data Loading Module](https://mxnet.incubator.apache.org/architecture/note_data_loading.html)
-* [MXNet on Mobile Device](https://mxnet.incubator.apache.org/how_to/smart_device.html)
-* [Distributed Training](https://mxnet.incubator.apache.org/how_to/multi_devices.html)
-* [Guide to Creating New Operators (Layers)](https://mxnet.incubator.apache.org/how_to/new_op.html)
+* [MXNet on Mobile Device](https://mxnet.incubator.apache.org/faq/smart_device.html)
+* [Distributed Training](https://mxnet.incubator.apache.org/faq/multi_devices.html)
+* [Guide to Creating New Operators (Layers)](https://mxnet.incubator.apache.org/faq/new_op.html)
* [Go binding for inference](https://github.com/songtianyi/go-mxnet-predictor)
* [Amalgamation and Go Binding for Predictors](https://github.com/jdeng/gomxnet/) - Outdated
* [Large Scale Image Classification](https://github.com/apache/incubator-mxnet/tree/master/example/image-classification)
@@ -52,10 +52,10 @@ Contents
* [Documentation](https://mxnet.incubator.apache.org/) and [Tutorials](https://mxnet.incubator.apache.org/tutorials/)
* [Design Notes](https://mxnet.incubator.apache.org/architecture/index.html)
* [Code Examples](https://github.com/dmlc/mxnet/tree/master/example)
-* [Installation](https://mxnet.incubator.apache.org/get_started/install.html)
+* [Installation](https://mxnet.incubator.apache.org/install/index.html)
* [Pretrained Models](https://github.com/dmlc/mxnet-model-gallery)
* [Contribute to MXNet](https://mxnet.incubator.apache.org/community/contribute.html)
-* [Frequent Asked Questions](https://mxnet.incubator.apache.org/how_to/faq.html)
+* [Frequent Asked Questions](https://mxnet.incubator.apache.org/faq/faq.html)

Features
--------
2 changes: 1 addition & 1 deletion docs/architecture/release_note_0_9.md
@@ -4,7 +4,7 @@ Version 0.9 brings a number of important features and changes, including a back-

## NNVM Refactor

-NNVM is a library for neural network graph construction, optimization, and operator registration. It serves as an intermediary layer between the front-end (MXNet user API) and the back-end (computation on the device). After version 0.9, MXNet fully adopts the NNVM framework. Now it's easier to create operators. You can also register "pass"es that process and optimizes the graph when `bind` is called on the symbol. For more discussion on how to create operators with NNVM, please refer to [How to Create New Operators](../how_to/new_op.md)
+NNVM is a library for neural network graph construction, optimization, and operator registration. It serves as an intermediary layer between the front-end (MXNet user API) and the back-end (computation on the device). After version 0.9, MXNet fully adopts the NNVM framework. Now it's easier to create operators. You can also register "pass"es that process and optimizes the graph when `bind` is called on the symbol. For more discussion on how to create operators with NNVM, please refer to [How to Create New Operators](../faq/new_op.md)

Other changes brought by NNVM include:
- Backward shape inference is now supported
4 changes: 2 additions & 2 deletions docs/community/index.md
@@ -8,9 +8,9 @@ We track bugs and new feature requests in the MXNet Github repo in the issues fo
## Contributors
MXNet has been developed and is used by a group of active community members. Contribute to improving it! For more information, see [contributions](http://mxnet.io/community/contribute.html).

-Please join the contributor mailing list. [subscribe]('mailto:[email protected]') [archive](https://lists.apache.org/[email protected])
+Please join the contributor mailing list. [subscribe](mailto://[email protected]) [archive](https://lists.apache.org/[email protected])

-To join the MXNet slack channel send request to the contributor mailing list. [subscribe]('mailto:[email protected]') [archive](https://the-asf.slackarchive.io/mxnet)
+To join the MXNet slack channel send request to the contributor mailing list. [subscribe](mailto://[email protected]) [archive](https://the-asf.slackarchive.io/mxnet)

## Roadmap

2 changes: 1 addition & 1 deletion docs/faq/env_var.md
@@ -24,7 +24,7 @@ export MXNET_GPU_WORKER_NTHREADS=3
- The number of threads given to prioritized CPU jobs.
* MXNET_CPU_NNPACK_NTHREADS
- Values: Int ```(default=4)```
-  - The number of threads used for NNPACK. NNPACK package aims to provide high-performance implementations of some layers for multi-core CPUs. Checkout [NNPACK](http://mxnet.io/how_to/nnpack.html) to know more about it.
+  - The number of threads used for NNPACK. NNPACK package aims to provide high-performance implementations of some layers for multi-core CPUs. Checkout [NNPACK](http://mxnet.io/faq/nnpack.html) to know more about it.
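
The variables described above can also be set from Python before MXNet is loaded; a minimal sketch (the values are arbitrary examples, not recommendations):

```python
import os

# Set MXNet environment variables before the library is imported;
# settings read at startup will not see changes made afterwards.
os.environ["MXNET_GPU_WORKER_NTHREADS"] = "3"
os.environ["MXNET_CPU_NNPACK_NTHREADS"] = "4"

# import mxnet as mx  # import afterwards so the settings take effect
print(os.environ["MXNET_CPU_NNPACK_NTHREADS"])  # -> 4
```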

## Memory Options

4 changes: 2 additions & 2 deletions docs/faq/faq.md
@@ -48,10 +48,10 @@ copied_model = mx.model.FeedForward(ctx=mx.gpu(), symbol=new_symbol,
arg_params=old_arg_params, aux_params=old_aux_params,
allow_extra_params=True);
```
-For information about copying model parameters from an existing ```old_arg_params```, see this [notebook](https://github.com/dmlc/mxnet-notebooks/blob/master/python/how_to/predict.ipynb). More notebooks please refer to [dmlc/mxnet-notebooks](https://github.com/dmlc/mxnet-notebooks).
+For information about copying model parameters from an existing ```old_arg_params```, see this [notebook](https://github.com/dmlc/mxnet-notebooks/blob/master/python/faq/predict.ipynb). More notebooks please refer to [dmlc/mxnet-notebooks](https://github.com/dmlc/mxnet-notebooks).

#### How to Extract the Feature Map of a Certain Layer
-See this [notebook](https://github.com/dmlc/mxnet-notebooks/blob/master/python/how_to/predict.ipynb). More notebooks please refer to [dmlc/mxnet-notebooks](https://github.com/dmlc/mxnet-notebooks).
+See this [notebook](https://github.com/dmlc/mxnet-notebooks/blob/master/python/faq/predict.ipynb). More notebooks please refer to [dmlc/mxnet-notebooks](https://github.com/dmlc/mxnet-notebooks).


#### What Is the Relationship Between MXNet and CXXNet, Minerva, and Purine2?
2 changes: 1 addition & 1 deletion docs/faq/finetune.md
@@ -15,7 +15,7 @@ with these pretrained weights when training on our new task. This process is
commonly called _fine-tuning_. There are a number of variations of fine-tuning.
Sometimes, the initial neural network is used only as a _feature extractor_.
That means that we freeze every layer prior to the output layer and simply learn
-a new output layer. In [another document](https://github.com/dmlc/mxnet-notebooks/blob/master/python/how_to/predict.ipynb), we explained how to
+a new output layer. In [another document](https://github.com/dmlc/mxnet-notebooks/blob/master/python/faq/predict.ipynb), we explained how to
do this kind of feature extraction. Another approach is to update all of
the network's weights for the new task, and that's the approach we demonstrate in
this document.
2 changes: 1 addition & 1 deletion docs/faq/gradient_compression.md
@@ -85,7 +85,7 @@ A reference `gluon` implementation with a gradient compression option can be fou
mod = mx.mod.Module(..., compression_params={'type':'2bit', 'threshold':0.5})
```

-A `module` example is provided with [this guide for setting up MXNet with distributed training](https://mxnet.incubator.apache.org/versions/master/how_to/multi_devices.html#distributed-training-with-multiple-machines). It comes with the option of turning on gradient compression as an argument to the [train_mnist.py script](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/train_mnist.py).
+A `module` example is provided with [this guide for setting up MXNet with distributed training](https://mxnet.incubator.apache.org/versions/master/faq/multi_devices.html#distributed-training-with-multiple-machines). It comes with the option of turning on gradient compression as an argument to the [train_mnist.py script](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/train_mnist.py).
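
To make the `2bit`/`threshold` idea concrete, here is a pure-Python sketch of threshold-based quantization. This is an illustration of the concept only; the function name and the residual handling are assumptions, not MXNet's actual implementation:

```python
def two_bit_compress(grad, threshold=0.5):
    """Quantize each gradient to one of {+threshold, 0, -threshold}.

    Values at or above +threshold become +threshold, values at or
    below -threshold become -threshold, and the rest are zeroed;
    the quantization error is kept as a residual, which schemes like
    this typically carry over into the next step's gradient.
    """
    out, residual = [], []
    for g in grad:
        if g >= threshold:
            q = threshold
        elif g <= -threshold:
            q = -threshold
        else:
            q = 0.0
        out.append(q)
        residual.append(g - q)
    return out, residual

compressed, residual = two_bit_compress([0.7, -0.2, -0.9, 0.4])
print(compressed)  # -> [0.5, 0.0, -0.5, 0.0]
```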

### Configuration Details

12 changes: 6 additions & 6 deletions docs/faq/multi_devices.md
@@ -32,7 +32,7 @@ gradients are then summed over all GPUs before updating the model.

> To use GPUs, we need to compile MXNet with GPU support. For
> example, set `USE_CUDA=1` in `config.mk` before `make`. (see
-> [MXNet installation guide](http://mxnet.io/get_started/install.html) for more options).
+> [MXNet installation guide](http://mxnet.io/install/index.html) for more options).
If a machine has one or more GPU cards installed,
then each card is labeled by a number starting from 0.
@@ -57,17 +57,17 @@ If the available GPUs are not all equally powerful,
we can partition the workload accordingly.
For example, if GPU 0 is 3 times faster than GPU 2,
then we might use the workload option `work_load_list=[3, 1]`,
-see [Module](../api/python/module.html#mxnet.module.Module)
+see [Module](http://mxnet.io/api/python/module/module.html#mxnet.module.Module)
for more details.
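
The effect of `work_load_list` can be sketched with plain arithmetic. This is a conceptual illustration, not the MXNet API; `split_batch` is a hypothetical helper:

```python
def split_batch(batch_size, work_load_list):
    """Split a batch across devices in proportion to their weights."""
    total = sum(work_load_list)
    shares = [batch_size * w // total for w in work_load_list]
    shares[0] += batch_size - sum(shares)  # give any remainder to device 0
    return shares

# With weights [3, 1], GPU 0 receives three quarters of each batch.
print(split_batch(128, [3, 1]))  # -> [96, 32]
```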

Training with multiple GPUs should yield the same results
-as training on a single GPU if all other hyper-parameters are the same.
+as training on a single GPU if all other hyper-parameters are the same.f
In practice, the results may exhibit small differences,
owing to the randomness of I/O (random order or other augmentations),
weight initialization with different seeds, and CUDNN.

We can control on which devices the gradient is aggregated
-and on which device the model is updated via [`KVStore`](http://mxnet.io/api/python/kvstore.html),
+and on which device the model is updated via [`KVStore`](http://mxnet.io/api/python/kvstore/kvstore.html),
the _MXNet_ module that supports data communication.
One can either use `mx.kvstore.create(type)` to get an instance
or use the program flag `--kv-store type`.
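
Conceptually, a kvstore aggregates the gradients pushed from each device and serves the result back on pull. A toy pure-Python sketch of that role (an assumption-laden illustration, not the MXNet `KVStore` API):

```python
class ToyKVStore:
    """Minimal illustration of push/pull gradient aggregation."""

    def __init__(self):
        self._store = {}

    def push(self, key, values):
        # `values` holds one gradient vector per device; aggregate by
        # elementwise summation, as happens before a weight update.
        self._store[key] = [sum(col) for col in zip(*values)]

    def pull(self, key):
        return self._store[key]

kv = ToyKVStore()
kv.push("weight", [[1.0, 2.0], [3.0, 4.0]])  # gradients from two GPUs
print(kv.pull("weight"))  # -> [4.0, 6.0]
```
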
@@ -101,7 +101,7 @@ When using a large number of GPUs, e.g. >=4, we suggest using `device` for bette
### How to Launch a Job

> To use distributed training, we need to compile with `USE_DIST_KVSTORE=1`
-> (see [MXNet installation guide](http://mxnet.io/get_started/install.html) for more options).
+> (see [MXNet installation guide](http://mxnet.io/install/index.html) for more options).
Launching a distributed job is a bit different from running on a single
machine. MXNet provides
@@ -210,4 +210,4 @@ export PS_VERBOSE=1; python ../../tools/launch.py ...
### More

- See more launch options by `python ../../tools/launch.py -h`
-- See more options of [ps-lite](http://ps-lite.readthedocs.org/en/latest/how_to.html)
+- See more options of [ps-lite](http://ps-lite.readthedocs.org/en/latest/faq.html)
2 changes: 1 addition & 1 deletion docs/faq/nnpack.md
@@ -69,7 +69,7 @@ $ cd ~
* Set lib path of NNPACK as the environment variable, e.g. `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$YOUR_NNPACK_INSTALL_PATH/lib`
* Add the include file of NNPACK and its third-party to `ADD_CFLAGS` in config.mk, e.g. `ADD_CFLAGS = -I$(YOUR_NNPACK_INSTALL_PATH)/include/ -I$(YOUR_NNPACK_INSTALL_PATH)/third-party/pthreadpool/include/`
* Set `USE_NNPACK = 1` in config.mk.
-* Build MXNet from source following the [install guide](http://mxnet.io/get_started/install.html).
+* Build MXNet from source following the [install guide](http://mxnet.io/install/index.html).

### NNPACK Performance

4 changes: 2 additions & 2 deletions docs/faq/perf.md
@@ -191,7 +191,7 @@ where the batch size for Alexnet is increased by 8x.

If more than one GPU or machine are used, MXNet uses `kvstore` to communicate data.
It's critical to use the proper type of `kvstore` to get the best performance.
-Refer to [multi_device.md](http://mxnet.io/how_to/multi_devices.html) for more
+Refer to [multi_device.md](http://mxnet.io/faq/multi_devices.html) for more
details.

Besides, we can use [tools/bandwidth](https://github.com/dmlc/mxnet/tree/master/tools/bandwidth)
@@ -225,7 +225,7 @@ by summarizing at the operator level, instead of a function, kernel, or instruct

In order to be able to use the profiler, you must compile _MXNet_ with the `USE_PROFILER=1` flag in `config.mk`.

-The profiler can then be turned on with an [environment variable](http://mxnet.io/how_to/env_var.html#control-the-profiler)
+The profiler can then be turned on with an [environment variable](http://mxnet.io/faq/env_var.html#control-the-profiler)
for an entire program run, or programmatically for just part of a run.
See [example/profiler](https://github.com/dmlc/mxnet/tree/master/example/profiler)
for complete examples of how to use the profiler in code, but briefly, the Python code looks like:
2 changes: 1 addition & 1 deletion docs/faq/s3_integration.md
@@ -15,7 +15,7 @@ Following are detailed instructions on how to use data from S3 for training.

## Step 1: Build MXNet with S3 integration enabled

-Follow instructions [here](http://mxnet.io/get_started/install.html) to install MXNet from source with the following additional steps to enable S3 integration.
+Follow instructions [here](http://mxnet.io/install/index.html) to install MXNet from source with the following additional steps to enable S3 integration.

1. Install `libcurl4-openssl-dev` and `libssl-dev` before building MXNet. These packages are required to read/write from AWS S3.
2. Append `USE_S3=1` to `config.mk` before building MXNet.
4 changes: 2 additions & 2 deletions docs/faq/visualize_graph.md
@@ -11,12 +11,12 @@ from which the result can be read.
## Prerequisites
You need the [Jupyter Notebook](http://jupyter.readthedocs.io/en/latest/)
and [Graphviz](http://www.graphviz.org/) libraries to visualize the network.
-Please make sure you have followed [installation instructions](http://mxnet.io/get_started/install.html)
+Please make sure you have followed [installation instructions](http://mxnet.io/install/index.html)
in setting up above dependencies along with setting up MXNet.

## Visualize the sample Neural Network

-```mx.viz.plot_network``` takes [Symbol](http://mxnet.io/api/python/symbol.html), with your Network definition, and optional node_attrs, parameters for the shape of the node in the graph, as input and generates a computation graph.
+```mx.viz.plot_network``` takes [Symbol](http://mxnet.io/api/python/symbol/symbol.html), with your Network definition, and optional node_attrs, parameters for the shape of the node in the graph, as input and generates a computation graph.

We will now try to visualize a sample Neural Network for linear matrix factorization:
- Start Jupyter notebook server
4 changes: 2 additions & 2 deletions docs/install/amazonlinux_setup.md
@@ -1,8 +1,8 @@
<!-- This page should be deleted after sometime (Allowing search engines
to update links) -->
-<meta http-equiv="refresh" content="3; url=http://mxnet.io/get_started/install.html" />
+<meta http-equiv="refresh" content="3; url=http://mxnet.io/install/index.html" />
<!-- Just in case redirection does not work -->
<p>
-<a href="http://mxnet.io/get_started/install.html">
+<a href="http://mxnet.io/install/index.html">
This content is moved to a new MXNet install page. Redirecting... </a>
</p>
2 changes: 1 addition & 1 deletion docs/install/build_from_source.md
@@ -1,6 +1,6 @@
# Build MXNet from Source

-**NOTE:** For MXNet with Python installation, please refer to the [new install guide](http://mxnet.io/get_started/install.html).
+**NOTE:** For MXNet with Python installation, please refer to the [new install guide](http://mxnet.io/install/index.html).

This document explains how to build MXNet from sources. Building MXNet from sources is a 2 step process.

4 changes: 2 additions & 2 deletions docs/install/centos_setup.md
@@ -1,8 +1,8 @@
<!-- This page should be deleted after sometime (Allowing search engines
to update links) -->
-<meta http-equiv="refresh" content="3; url=http://mxnet.io/get_started/install.html" />
+<meta http-equiv="refresh" content="3; url=http://mxnet.io/install/index.html" />
<!-- Just in case redirection does not work -->
<p>
-<a href="http://mxnet.io/get_started/install.html">
+<a href="http://mxnet.io/install/index.html">
This content is moved to a new MXNet install page. Redirecting... </a>
</p>
4 changes: 2 additions & 2 deletions docs/install/osx_setup.md
@@ -1,6 +1,6 @@
# Installing MXNet froum source on OS X (Mac)

-**NOTE:** For prebuild MXNet with Python installation, please refer to the [new install guide](http://mxnet.io/get_started/install.html).
+**NOTE:** For prebuild MXNet with Python installation, please refer to the [new install guide](http://mxnet.io/install/index.html).

Installing MXNet is a two-step process:

@@ -217,5 +217,5 @@ After you build the shared library, run the following command from the MXNet sou
## Next Steps

* [Tutorials](http://mxnet.io/tutorials/index.html)
-* [How To](http://mxnet.io/how_to/index.html)
+* [How To](http://mxnet.io/faq/index.html)
* [Architecture](http://mxnet.io/architecture/index.html)
4 changes: 2 additions & 2 deletions docs/install/raspbian_setup.md
@@ -1,8 +1,8 @@
<!-- This page should be deleted after sometime (Allowing search engines
to update links) -->
-<meta http-equiv="refresh" content="3; url=http://mxnet.io/get_started/install.html" />
+<meta http-equiv="refresh" content="3; url=http://mxnet.io/install/index.html" />
<!-- Just in case redirection does not work -->
<p>
-<a href="http://mxnet.io/get_started/install.html">
+<a href="http://mxnet.io/install/index.html">
This content is moved to a new MXNet install page. Redirecting... </a>
</p>
4 changes: 2 additions & 2 deletions docs/install/tx2_setup.md
@@ -1,8 +1,8 @@
<!-- This page should be deleted after sometime (Allowing search engines
to update links) -->
-<meta http-equiv="refresh" content="3; url=http://mxnet.io/get_started/install.html" />
+<meta http-equiv="refresh" content="3; url=http://mxnet.io/install/index.html" />
<!-- Just in case redirection does not work -->
<p>
-<a href="http://mxnet.io/get_started/install.html">
+<a href="http://mxnet.io/install/index.html">
This content is moved to a new MXNet install page. Redirecting... </a>
</p>
4 changes: 2 additions & 2 deletions docs/install/ubuntu_setup.md
@@ -1,6 +1,6 @@
# Installing MXNet on Ubuntu

-**NOTE:** For MXNet with Python installation, please refer to the [new install guide](http://mxnet.io/get_started/install.html).
+**NOTE:** For MXNet with Python installation, please refer to the [new install guide](http://mxnet.io/install/index.html).

MXNet currently supports Python, R, Julia, Scala, and Perl. For users of R on Ubuntu operating systems, MXNet provides a set of Git Bash scripts that installs all of the required MXNet dependencies and the MXNet library.

@@ -262,5 +262,5 @@ Before you build MXNet for Perl from source code, you must complete [building th
## Next Steps

* [Tutorials](http://mxnet.io/tutorials/index.html)
-* [How To](http://mxnet.io/how_to/index.html)
+* [How To](http://mxnet.io/faq/index.html)
* [Architecture](http://mxnet.io/architecture/index.html)
2 changes: 1 addition & 1 deletion docs/install/windows_setup.md
@@ -296,5 +296,5 @@ To install the MXNet Scala package into your local Maven repository, run the fol
## Next Steps

* [Tutorials](http://mxnet.io/tutorials/index.html)
-* [How To](http://mxnet.io/how_to/index.html)
+* [How To](http://mxnet.io/faq/index.html)
* [Architecture](http://mxnet.io/architecture/index.html)