Bump hard-wired version numbers for v0.10
elibarzilay committed Nov 15, 2017
1 parent 501da9d commit 36740ea
Showing 3 changed files with 19 additions and 19 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -103,9 +103,9 @@ MMLSpark can be conveniently installed on existing Spark clusters via the
`--packages` option, examples:

```bash
-spark-shell --packages Azure:mmlspark:0.9
-pyspark --packages Azure:mmlspark:0.9
-spark-submit --packages Azure:mmlspark:0.9 MyApp.jar
+spark-shell --packages Azure:mmlspark:0.10
+pyspark --packages Azure:mmlspark:0.10
+spark-submit --packages Azure:mmlspark:0.10 MyApp.jar
```

<img title="Script action submission" src="http://i.imgur.com/oQcS0R2.png" align="right" />
@@ -119,7 +119,7 @@ script actions, see [this
guide](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux#use-a-script-action-during-cluster-creation).

The script action url is:
-<https://mmlspark.azureedge.net/buildartifacts/0.9/install-mmlspark.sh>.
+<https://mmlspark.azureedge.net/buildartifacts/0.10/install-mmlspark.sh>.
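All of the build-artifact URLs bumped in this commit follow a single pattern: a fixed base URL, a version directory, and a file name. A minimal sketch of that pattern in shell (the helper function name is hypothetical, not part of MMLSpark):

```shell
# Hypothetical helper capturing the build-artifact URL pattern that this
# commit bumps from 0.9 to 0.10: base URL + version directory + file name.
mmlspark_artifact_url() {
  local version="$1" file="$2"
  printf 'https://mmlspark.azureedge.net/buildartifacts/%s/%s\n' "$version" "$file"
}

mmlspark_artifact_url 0.10 install-mmlspark.sh
# → https://mmlspark.azureedge.net/buildartifacts/0.10/install-mmlspark.sh
```

The same pattern covers every artifact touched below (`deploy-main-template.json`, `gpu-setup.sh`, `deploy-arm.ps1`, and so on), which is why a release bump edits so many hard-wired URLs at once.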

If you're using the Azure Portal to run the script action, go to `Script
actions` → `Submit new` in the `Overview` section of your cluster blade. In the
@@ -135,7 +135,7 @@ To install MMLSpark on the
[library from Maven coordinates](https://docs.databricks.com/user-guide/libraries.html#libraries-from-maven-pypi-or-spark-packages)
in your workspace.

-For the coordinates use: `com.microsoft.ml.spark:mmlspark:0.9`. Then, under
+For the coordinates use: `com.microsoft.ml.spark:mmlspark:0.10`. Then, under
Advanced Options, use `https://mmlspark.azureedge.net/maven` for the repository.
Ensure this library is attached to all clusters you create.

@@ -150,7 +150,7 @@ your `build.sbt`:

```scala
resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
-libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.9"
+libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.10"
```
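Outside of sbt, the same coordinate and repository pair can be passed straight to the Spark launchers via Spark's standard `--packages` and `--repositories` flags. A sketch that only assembles and prints the command line (nothing is launched here, and the variable names are illustrative):

```shell
# Sketch: the Maven coordinate and custom repository from the text,
# combined on a spark-shell command line via Spark's standard flags.
# The command is only assembled and printed, not executed.
MMLSPARK_COORD="com.microsoft.ml.spark:mmlspark:0.10"
MMLSPARK_REPO="https://mmlspark.azureedge.net/maven"
echo "spark-shell --packages $MMLSPARK_COORD --repositories $MMLSPARK_REPO"
```

Keeping the version in one variable like this is one way to avoid the scattered hard-wired version strings this commit has to bump by hand.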

### Building from source
8 changes: 4 additions & 4 deletions docs/docker.md
@@ -29,7 +29,7 @@ You can now select one of the sample notebooks and run it, or create your own.
In the above, `microsoft/mmlspark` specifies the project and image name that you
want to run. There is another component implicit here which is the *tag* (=
version) that you want to use — specifying it explicitly looks like
-`microsoft/mmlspark:0.9` for using the `0.9` tag.
+`microsoft/mmlspark:0.10` for using the `0.10` tag.

Leaving `microsoft/mmlspark` by itself has an implicit `latest` tag, so it is
equivalent to `microsoft/mmlspark:latest`. The `latest` tag is identical to the
@@ -47,7 +47,7 @@ that you will probably want to use can look as follows:
-e ACCEPT_EULA=y \
-p 127.0.0.1:80:8888 \
-v ~/myfiles:/notebooks/myfiles \
-  microsoft/mmlspark:0.9
+  microsoft/mmlspark:0.10
```

In this example, backslashes are used to break things up for readability; you
@@ -59,7 +59,7 @@ path and line breaks looks a little different:
-e ACCEPT_EULA=y `
-p 127.0.0.1:80:8888 `
-v C:\myfiles:/notebooks/myfiles `
-  microsoft/mmlspark:0.9
+  microsoft/mmlspark:0.10
```

Let's break this command and go over the meaning of each part:
@@ -143,7 +143,7 @@ Let's break this command and go over the meaning of each part:
model.write().overwrite().save('myfiles/myTrainedModel.mml')
```

-* **`microsoft/mmlspark:0.9`**
+* **`microsoft/mmlspark:0.10`**

Finally, this specifies an explicit version tag for the image that we want to
run.
18 changes: 9 additions & 9 deletions docs/gpu-setup.md
@@ -26,7 +26,7 @@ to check availability in your data center.
MMLSpark provides an Azure Resource Manager (ARM) template to create a setup
that includes an HDInsight cluster and/or a GPU machine for training. The
template can be found here:
-<https://mmlspark.azureedge.net/buildartifacts/0.9/deploy-main-template.json>.
+<https://mmlspark.azureedge.net/buildartifacts/0.10/deploy-main-template.json>.

It has the following parameters that configure the HDI Spark cluster and
the associated GPU VM:
@@ -48,16 +48,16 @@ the associated GPU VM:
- `gpuVirtualMachineSize`: The size of the GPU virtual machine to create

There are actually two additional templates that are used from this main template:
-- [`spark-cluster-template.json`](https://mmlspark.azureedge.net/buildartifacts/0.9/spark-cluster-template.json):
+- [`spark-cluster-template.json`](https://mmlspark.azureedge.net/buildartifacts/0.10/spark-cluster-template.json):
A template for creating an HDI Spark cluster within a VNet, including
MMLSpark and its dependencies. (This template installs MMLSpark using
the HDI script action:
-  [`install-mmlspark.sh`](https://mmlspark.azureedge.net/buildartifacts/0.9/install-mmlspark.sh).)
-- [`gpu-vm-template.json`](https://mmlspark.azureedge.net/buildartifacts/0.9/gpu-vm-template.json):
+  [`install-mmlspark.sh`](https://mmlspark.azureedge.net/buildartifacts/0.10/install-mmlspark.sh).)
+- [`gpu-vm-template.json`](https://mmlspark.azureedge.net/buildartifacts/0.10/gpu-vm-template.json):
A template for creating a GPU VM within an existing VNet, including
CNTK and other dependencies that MMLSpark needs for GPU training.
(This is done via a script action that runs
-  [`gpu-setup.sh`](https://mmlspark.azureedge.net/buildartifacts/0.9/gpu-setup.sh).)
+  [`gpu-setup.sh`](https://mmlspark.azureedge.net/buildartifacts/0.10/gpu-setup.sh).)
Note that these child templates can also be deployed independently, if
you don't need both parts of the installation.

@@ -66,7 +66,7 @@ you don't need both parts of the installation.
### 1. Deploy an ARM template within the [Azure Portal](https://ms.portal.azure.com/)

[Click here to open the above
-template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fmmlspark.azureedge.net%2Fbuildartifacts%2F0.9%2Fdeploy-main-template.json)
+template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fmmlspark.azureedge.net%2Fbuildartifacts%2F0.10%2Fdeploy-main-template.json)
in the Azure portal.

(If needed, you can click the **Edit template** button to view and edit the
@@ -84,11 +84,11 @@ We also provide a convenient shell script to create a deployment on the
command line:

* Download the [shell
-  script](https://mmlspark.azureedge.net/buildartifacts/0.9/deploy-arm.sh)
+  script](https://mmlspark.azureedge.net/buildartifacts/0.10/deploy-arm.sh)
and make a local copy of it

* Create a JSON parameter file by downloading [this template
-  file](https://mmlspark.azureedge.net/buildartifacts/0.9/deploy-parameters.template)
+  file](https://mmlspark.azureedge.net/buildartifacts/0.10/deploy-parameters.template)
and modify it according to your specification.

You can now run the script — it takes the following arguments:
@@ -121,7 +121,7 @@ you for all needed values.
### 3. Deploy an ARM template with the MMLSpark Azure PowerShell

MMLSpark also provides a [PowerShell
-  script](https://mmlspark.azureedge.net/buildartifacts/0.9/deploy-arm.ps1)
+  script](https://mmlspark.azureedge.net/buildartifacts/0.10/deploy-arm.ps1)
to deploy ARM templates, similar to the above bash script. Run it with
`-?` to see the usage instructions (or use `get-help`). If needed,
install the Azure PowerShell cmdlets using the instructions in the
