Bump the hard-wired version numbers to 0.11
elibarzilay committed Jan 25, 2018
1 parent 4bddc3e commit 528527a
Showing 3 changed files with 20 additions and 20 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -123,9 +123,9 @@ MMLSpark can be conveniently installed on existing Spark clusters via the
`--packages` option, examples:

```bash
-spark-shell --packages Azure:mmlspark:0.10
-pyspark --packages Azure:mmlspark:0.10
-spark-submit --packages Azure:mmlspark:0.10 MyApp.jar
+spark-shell --packages Azure:mmlspark:0.11
+pyspark --packages Azure:mmlspark:0.11
+spark-submit --packages Azure:mmlspark:0.11 MyApp.jar
```
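
This commit exists precisely because the version string is hard-wired in several places. A minimal sketch of how the coordinate could be built from a single constant instead — the helper name and layout here are hypothetical, only the `org:artifact:version` form is taken from the commands above:

```python
# Minimal sketch: build the Spark package coordinate from one version
# constant instead of hard-wiring it at every call site.
MMLSPARK_VERSION = "0.11"

def mmlspark_coordinate(version=MMLSPARK_VERSION):
    # Spark packages use the <org>:<artifact>:<version> form.
    return f"Azure:mmlspark:{version}"

print(mmlspark_coordinate())  # Azure:mmlspark:0.11
```

A bump like this one would then touch a single line rather than three files.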

This can be used in other Spark contexts too, for example, you can use
@@ -141,7 +141,7 @@ Spark installed via pip with `pip install pyspark`. You can then use
```
import pyspark
sp = pyspark.sql.SparkSession.builder.appName("MyApp") \
-            .config("spark.jars.packages", "Azure:mmlspark:0.10") \
+            .config("spark.jars.packages", "Azure:mmlspark:0.11") \
.getOrCreate()
import mmlspark
```
@@ -157,7 +157,7 @@ script actions, see [this
guide](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux#use-a-script-action-during-cluster-creation).

The script action url is:
-<https://mmlspark.azureedge.net/buildartifacts/0.10/install-mmlspark.sh>.
+<https://mmlspark.azureedge.net/buildartifacts/0.11/install-mmlspark.sh>.
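
All of the build-artifact links in this commit follow the same versioned URL pattern. A hedged sketch of a helper deriving them from one base — the pattern is inferred from the links themselves, not from any documented contract, and the function name is hypothetical:

```python
# Sketch: derive versioned build-artifact URLs from one base constant,
# so a version bump touches a single place. The URL pattern is inferred
# from the links in this document, not a documented API.
BASE = "https://mmlspark.azureedge.net/buildartifacts"

def artifact_url(name, version="0.11"):
    return f"{BASE}/{version}/{name}"

print(artifact_url("install-mmlspark.sh"))
# https://mmlspark.azureedge.net/buildartifacts/0.11/install-mmlspark.sh
```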

If you're using the Azure Portal to run the script action, go to `Script
actions` → `Submit new` in the `Overview` section of your cluster blade. In the
@@ -173,7 +173,7 @@ To install MMLSpark on the
[library from Maven coordinates](https://docs.databricks.com/user-guide/libraries.html#libraries-from-maven-pypi-or-spark-packages)
in your workspace.

-For the coordinates use: `com.microsoft.ml.spark:mmlspark:0.10`. Then, under
+For the coordinates use: `com.microsoft.ml.spark:mmlspark:0.11`. Then, under
Advanced Options, use `https://mmlspark.azureedge.net/maven` for the repository.
Ensure this library is attached to all clusters you create.

@@ -188,7 +188,7 @@ your `build.sbt`:

```scala
resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
-libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.10"
+libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.11"
```

### Building from source
8 changes: 4 additions & 4 deletions docs/docker.md
@@ -29,7 +29,7 @@ You can now select one of the sample notebooks and run it, or create your own.
In the above, `microsoft/mmlspark` specifies the project and image name that you
want to run. There is another component implicit here which is the *tag* (=
version) that you want to use — specifying it explicitly looks like
-`microsoft/mmlspark:0.10` for using the `0.10` tag.
+`microsoft/mmlspark:0.11` for the `0.11` tag.

Leaving `microsoft/mmlspark` by itself has an implicit `latest` tag, so it is
equivalent to `microsoft/mmlspark:latest`. The `latest` tag is identical to the
@@ -47,7 +47,7 @@ that you will probably want to use can look as follows:
-e ACCEPT_EULA=y \
-p 127.0.0.1:80:8888 \
-v ~/myfiles:/notebooks/myfiles \
-       microsoft/mmlspark:0.10
+       microsoft/mmlspark:0.11
```

In this example, backslashes are used to break things up for readability; you
@@ -59,7 +59,7 @@ path and line breaks looks a little different:
-e ACCEPT_EULA=y `
-p 127.0.0.1:80:8888 `
-v C:\myfiles:/notebooks/myfiles `
-       microsoft/mmlspark:0.10
+       microsoft/mmlspark:0.11
```

Let's break this command down and go over the meaning of each part:
@@ -143,7 +143,7 @@ Let's break this command and go over the meaning of each part:
model.write().overwrite().save('myfiles/myTrainedModel.mml')
```

-  * **`microsoft/mmlspark:0.10`**
+  * **`microsoft/mmlspark:0.11`**

Finally, this specifies an explicit version tag for the image that we want to
run.
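
The pieces broken down above can also be assembled programmatically. A hedged sketch in Python — the flag values mirror this section's example where shown, while `-it`, `--rm`, and the helper name itself are illustrative assumptions, not requirements of the image:

```python
# Sketch: assemble the `docker run` invocation from this section as an
# argument list. Flag values mirror the example above; the `-it` and
# `--rm` flags and the helper name are illustrative assumptions.
def docker_run_args(tag="0.11", host_port=80, files_dir="~/myfiles"):
    return [
        "docker", "run", "-it", "--rm",
        "-e", "ACCEPT_EULA=y",
        "-p", f"127.0.0.1:{host_port}:8888",
        "-v", f"{files_dir}:/notebooks/myfiles",
        f"microsoft/mmlspark:{tag}",
    ]

print(" ".join(docker_run_args()))
```

Such a helper makes it easy to launch, say, a second container for an older tag on another port without retyping the whole command.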
18 changes: 9 additions & 9 deletions docs/gpu-setup.md
@@ -26,7 +26,7 @@ to check availability in your data center.
MMLSpark provides an Azure Resource Manager (ARM) template to create a
default setup that includes an HDInsight cluster and a GPU machine for
training. The template can be found here:
-<https://mmlspark.azureedge.net/buildartifacts/0.10/deploy-main-template.json>.
+<https://mmlspark.azureedge.net/buildartifacts/0.11/deploy-main-template.json>.

It has the following parameters that configure the HDI Spark cluster and
the associated GPU VM:
@@ -48,16 +48,16 @@ the associated GPU VM:
- `gpuVirtualMachineSize`: The size of the GPU virtual machine to create

Two additional templates are used by this main template:
-- [`spark-cluster-template.json`](https://mmlspark.azureedge.net/buildartifacts/0.10/spark-cluster-template.json):
+- [`spark-cluster-template.json`](https://mmlspark.azureedge.net/buildartifacts/0.11/spark-cluster-template.json):
A template for creating an HDI Spark cluster within a VNet, including
MMLSpark and its dependencies. (This template installs MMLSpark using
the HDI script action:
-    [`install-mmlspark.sh`](https://mmlspark.azureedge.net/buildartifacts/0.10/install-mmlspark.sh).)
-- [`gpu-vm-template.json`](https://mmlspark.azureedge.net/buildartifacts/0.10/gpu-vm-template.json):
+    [`install-mmlspark.sh`](https://mmlspark.azureedge.net/buildartifacts/0.11/install-mmlspark.sh).)
+- [`gpu-vm-template.json`](https://mmlspark.azureedge.net/buildartifacts/0.11/gpu-vm-template.json):
A template for creating a GPU VM within an existing VNet, including
CNTK and other dependencies that MMLSpark needs for GPU training.
(This is done via a script action that runs
-    [`gpu-setup.sh`](https://mmlspark.azureedge.net/buildartifacts/0.10/gpu-setup.sh).)
+    [`gpu-setup.sh`](https://mmlspark.azureedge.net/buildartifacts/0.11/gpu-setup.sh).)

Note that these child templates can also be deployed independently, if
you don't need both parts of the installation. Particularly, to scale
@@ -69,7 +69,7 @@ GPU VM setup template at experimentation time.
### 1. Deploy an ARM template within the [Azure Portal](https://ms.portal.azure.com/)

[Click here to open the above main
-template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fmmlspark.azureedge.net%2Fbuildartifacts%2F0.10%2Fdeploy-main-template.json)
+template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fmmlspark.azureedge.net%2Fbuildartifacts%2F0.11%2Fdeploy-main-template.json)
in the Azure portal.

(If needed, you can click the **Edit template** button to view and edit the
@@ -87,11 +87,11 @@ We also provide a convenient shell script to create a deployment on the
command line:

* Download the [shell
-  script](https://mmlspark.azureedge.net/buildartifacts/0.10/deploy-arm.sh)
+  script](https://mmlspark.azureedge.net/buildartifacts/0.11/deploy-arm.sh)
and make a local copy of it

* Create a JSON parameter file by downloading [this template
-  file](https://mmlspark.azureedge.net/buildartifacts/0.10/deploy-parameters.template)
+  file](https://mmlspark.azureedge.net/buildartifacts/0.11/deploy-parameters.template)
and modify it according to your specification.

You can now run the script — it takes the following arguments:
@@ -124,7 +124,7 @@ you for all needed values.
### 3. Deploy an ARM template with the MMLSpark Azure PowerShell

MMLSpark also provides a [PowerShell
-script](https://mmlspark.azureedge.net/buildartifacts/0.10/deploy-arm.ps1)
+script](https://mmlspark.azureedge.net/buildartifacts/0.11/deploy-arm.ps1)
to deploy ARM templates, similar to the above bash script. Run it with
`-?` to see the usage instructions (or use `get-help`). If needed,
install the Azure PowerShell cmdlets using the instructions in the