diff --git a/README.md b/README.md
index 3b1baa9ab3..8c51333e2a 100644
--- a/README.md
+++ b/README.md
@@ -155,9 +155,9 @@ MMLSpark can be conveniently installed on existing Spark clusters via the
`--packages` option, examples:
```bash
- spark-shell --packages Azure:mmlspark:0.16
- pyspark --packages Azure:mmlspark:0.16
- spark-submit --packages Azure:mmlspark:0.16 MyApp.jar
+ spark-shell --packages Azure:mmlspark:0.17
+ pyspark --packages Azure:mmlspark:0.17
+ spark-submit --packages Azure:mmlspark:0.17 MyApp.jar
```
This can be used in other Spark contexts too. For example, you can use MMLSpark
@@ -172,14 +172,14 @@ cloud](http://community.cloud.databricks.com), create a new [library from Maven
coordinates](https://docs.databricks.com/user-guide/libraries.html#libraries-from-maven-pypi-or-spark-packages)
in your workspace.
-For the coordinates use: `Azure:mmlspark:0.16`. Ensure this library is
+For the coordinates use: `Azure:mmlspark:0.17`. Ensure this library is
attached to all clusters you create.
Finally, ensure that your Spark cluster has at least Spark 2.1 and Scala 2.11.
You can use MMLSpark in both your Scala and PySpark notebooks. To get started with our example notebooks, import the following Databricks archive:
-```https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.16.dbc```
+```https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.17.dbc```
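Once the library is attached and the archive imported, a quick sanity check in a Python notebook cell might look like the following sketch (the `spark` session is provided by the Databricks runtime; the toy DataFrame is illustrative, not from this README):

```python
# Databricks notebook cell: confirm the Maven library attached to the cluster.
import mmlspark  # raises ImportError if the library is not attached

# `spark` is pre-created by the Databricks runtime; exercise it briefly.
df = spark.createDataFrame([("hello", 1), ("world", 2)], ["text", "value"])
df.show()
```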
### Docker
@@ -188,14 +188,14 @@ The easiest way to evaluate MMLSpark is via our pre-built Docker container. To
do so, run the following command:
```bash
- docker run -it -p 8888:8888 -e ACCEPT_EULA=yes microsoft/mmlspark
+ docker run -it -p 8888:8888 -e ACCEPT_EULA=yes mcr.microsoft.com/mmlspark/release
```
Navigate to <http://localhost:8888/> in your web browser to run the sample
notebooks. See the [documentation](docs/docker.md) for more on Docker use.
> To read the EULA for using the docker image, run \
-> `docker run -it -p 8888:8888 microsoft/mmlspark eula`
+> `docker run -it -p 8888:8888 mcr.microsoft.com/mmlspark/release eula`
### GPU VM Setup
@@ -212,7 +212,7 @@ the above example, or from python:
```python
import pyspark
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
- .config("spark.jars.packages", "Azure:mmlspark:0.16") \
+ .config("spark.jars.packages", "Azure:mmlspark:0.17") \
.getOrCreate()
import mmlspark
```
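After `import mmlspark` succeeds, the package's Spark ML stages are available on that session. A minimal end-to-end sketch under the same session (the toy columns and the `LogisticRegression` base learner here are illustrative assumptions, not part of this README):

```python
import pyspark
from pyspark.ml.classification import LogisticRegression
from mmlspark import TrainClassifier

spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
    .config("spark.jars.packages", "Azure:mmlspark:0.17") \
    .getOrCreate()

# Toy training data; TrainClassifier featurizes the non-label columns itself.
train = spark.createDataFrame(
    [(1.0, 2.0, 0), (2.0, 1.0, 1), (3.0, 4.0, 0), (4.0, 3.0, 1)],
    ["feat1", "feat2", "label"])

model = TrainClassifier(model=LogisticRegression(), labelCol="label").fit(train)
```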
@@ -228,7 +228,7 @@ running script actions, see [this
guide](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux#use-a-script-action-during-cluster-creation).
The script action url is:
-<https://mmlspark.azureedge.net/buildartifacts/0.16/install-mmlspark.sh>.
+<https://mmlspark.azureedge.net/buildartifacts/0.17/install-mmlspark.sh>.
If you're using the Azure Portal to run the script action, go to `Script
actions` → `Submit new` in the `Overview` section of your cluster blade. In
@@ -244,7 +244,7 @@ your `build.sbt`:
```scala
resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
- libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.16"
+ libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.17"
```
### Building from source
diff --git a/docs/R-setup.md b/docs/R-setup.md
index c8ba756567..4cba3e0156 100644
--- a/docs/R-setup.md
+++ b/docs/R-setup.md
@@ -10,7 +10,7 @@ To install the current MMLSpark package for R use:
```R
...
- devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.16.zip")
+ devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.17.zip")
...
```
@@ -23,7 +23,7 @@ It will take some time to install all dependencies. Then, run:
library(sparklyr)
library(dplyr)
config <- spark_config()
- config$sparklyr.defaultPackages <- "Azure:mmlspark:0.16"
+ config$sparklyr.defaultPackages <- "Azure:mmlspark:0.17"
sc <- spark_connect(master = "local", config = config)
...
```
@@ -83,7 +83,7 @@ and then use spark_connect with method = "databricks":
```R
install.packages("devtools")
- devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.16.zip")
+ devtools::install_url("https://mmlspark.azureedge.net/rrr/mmlspark-0.17.zip")
library(sparklyr)
library(dplyr)
sc <- spark_connect(method = "databricks")
diff --git a/docs/docker.md b/docs/docker.md
index 0eab5098ff..8d9fc7e405 100644
--- a/docs/docker.md
+++ b/docs/docker.md
@@ -6,7 +6,7 @@ Begin by installing [Docker for your OS][docker-products]. Then, to get the
MMLSpark image and run it, open a terminal (powershell/cmd on Windows) and run
```bash
- docker run -it -p 8888:8888 microsoft/mmlspark
+ docker run -it -p 8888:8888 mcr.microsoft.com/mmlspark/release
```
In your browser, go to <http://localhost:8888/> — you'll see the Docker image
@@ -14,7 +14,7 @@ EULA, and once you accept it, the Jupyter notebook interface will start. To
skip this step, add `-e ACCEPT_EULA=yes` to the Docker command:
```bash
- docker run -it -p 8888:8888 -e ACCEPT_EULA=y microsoft/mmlspark
+ docker run -it -p 8888:8888 -e ACCEPT_EULA=yes mcr.microsoft.com/mmlspark/release
```
You can now select one of the sample notebooks and run it, or create your own.
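For a first notebook of your own, a cell along these lines makes a reasonable smoke test (a sketch; it assumes the container's Jupyter kernels have PySpark on the path, and the toy data is illustrative):

```python
# Inside the container's Jupyter notebook: get a local session and verify
# that the image's mmlspark JARs and Python bindings are importable.
import pyspark
import mmlspark

spark = pyspark.sql.SparkSession.builder.appName("SmokeTest").getOrCreate()
spark.createDataFrame([(1, "a"), (2, "b")], ["id", "tag"]).show()
```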
@@ -26,13 +26,13 @@ You can now select one of the sample notebooks and run it, or create your own.
## Running a specific version
-In the above, `microsoft/mmlspark` specifies the project and image name that you
+In the above, `mcr.microsoft.com/mmlspark/release` specifies the project and image name that you
want to run. There is another component implicit here, which is the *tag* (=
version) that you want to use — specifying it explicitly looks like
-`microsoft/mmlspark:0.16` for the `0.16` tag.
+`mcr.microsoft.com/mmlspark/release:0.17` for the `0.17` tag.
-Leaving `microsoft/mmlspark` by itself has an implicit `latest` tag, so it is
-equivalent to `microsoft/mmlspark:latest`. The `latest` tag is identical to the
+Leaving `mcr.microsoft.com/mmlspark/release` by itself has an implicit `latest` tag, so it is
+equivalent to `mcr.microsoft.com/mmlspark/release:latest`. The `latest` tag is identical to the
most recent stable MMLSpark version. You can see the current [mmlspark tags] on
our [Docker Hub repository][mmlspark-dockerhub].
@@ -47,7 +47,7 @@ that you will probably want to use can look as follows:
-e ACCEPT_EULA=y \
-p 127.0.0.1:80:8888 \
-v ~/myfiles:/notebooks/myfiles \
- microsoft/mmlspark:0.16
+ mcr.microsoft.com/mmlspark/release:0.17
```
In this example, backslashes are used to break things up for readability; you
@@ -59,7 +59,7 @@ path and line breaks looks a little different:
-e ACCEPT_EULA=y `
-p 127.0.0.1:80:8888 `
-v C:\myfiles:/notebooks/myfiles `
- microsoft/mmlspark:0.16
+ mcr.microsoft.com/mmlspark/release:0.17
```
Let's break this command down and go over the meaning of each part:
@@ -143,7 +143,7 @@ Let's break this command and go over the meaning of each part:
model.write().overwrite().save('myfiles/myTrainedModel.mml')
```
-* **`microsoft/mmlspark:0.16`**
+* **`mcr.microsoft.com/mmlspark/release:0.17`**
Finally, this specifies an explicit version tag for the image that we want to
run.
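Because the `-v` mount keeps `myfiles/` on the host, a model saved as above outlives the container. A later run can reload it; a sketch, assuming the saved object follows Spark ML's generic persistence (the concrete loader class depends on what was actually saved):

```python
# A later container run with the same -v mount can reload the persisted model.
# Assumption: swap in the actual class that was saved (e.g. a PipelineModel).
from pyspark.ml import PipelineModel

model = PipelineModel.load('myfiles/myTrainedModel.mml')
```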
@@ -157,7 +157,7 @@ additional flag that is useful for this is `--name` that gives a convenient
label to the running image:
```bash
- docker run -d --name my-mmlspark ...flags... microsoft/mmlspark
+ docker run -d --name my-mmlspark ...flags... mcr.microsoft.com/mmlspark/release
```
When running in this mode, you can use
@@ -212,7 +212,7 @@ fires up the Jupyter notebook server. This makes it possible to use the Spark
environment directly in the container if you start it as:
```bash
- docker run -it ...flags... microsoft/mmlspark bash
+ docker run -it ...flags... mcr.microsoft.com/mmlspark/release bash
```
This starts the container with bash instead of Jupyter. This environment has
@@ -241,21 +241,21 @@ This means that you need to explicitly tell Docker to check for a new version
and pull it if one exists. You do this with the `pull` command:
```bash
- docker pull microsoft/mmlspark
+ docker pull mcr.microsoft.com/mmlspark/release
```
Since we didn't specify an explicit tag here, `docker` adds the implied
-`:latest` tag, and checks the available `microsoft/mmlspark` image with this tag
+`:latest` tag, and checks the available `mcr.microsoft.com/mmlspark/release` image with this tag
on Docker Hub. When it finds a different image with this tag, it will fetch a
copy to your machine, changing the image that an unqualified
-`microsoft/mmlspark` refers to.
+`mcr.microsoft.com/mmlspark/release` refers to.
Docker normally knows only about the tags that it fetched, so if you've always
-used `microsoft/mmlspark` to refer to the image without an explicit version tag,
+used `mcr.microsoft.com/mmlspark/release` to refer to the image without an explicit version tag,
then you wouldn't have the version-tagged image too. Once the tag is updated,
the previous version will still be in your system, only without any tag. Using
`docker images` to list the images in your system will now show you two images
-for `microsoft/mmlspark`, one with a tag of `latest` and one with no tag, shown
+for `mcr.microsoft.com/mmlspark/release`, one with a tag of `latest` and one with no tag, shown
as `<none>`. Assuming that you don't have active containers (including detached
ones), `docker system prune` will remove this untagged image, reclaiming the
used space.
diff --git a/docs/gpu-setup.md b/docs/gpu-setup.md
index ee3d5f9e2f..9402e9885e 100644
--- a/docs/gpu-setup.md
+++ b/docs/gpu-setup.md
@@ -26,7 +26,7 @@ to check availability in your data center.
MMLSpark provides an Azure Resource Manager (ARM) template to create a
default setup that includes an HDInsight cluster and a GPU machine for
training. The template can be found here:
-<https://mmlspark.azureedge.net/buildartifacts/0.16/deploy-main-template.json>.
+<https://mmlspark.azureedge.net/buildartifacts/0.17/deploy-main-template.json>.
It has the following parameters that configure the HDI Spark cluster and
the associated GPU VM:
@@ -69,7 +69,7 @@ GPU VM setup template at experimentation time.
### 1. Deploy an ARM template within the [Azure Portal](https://ms.portal.azure.com/)
[Click here to open the above main
-template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fmmlspark.azureedge.net%2Fbuildartifacts%2F0.16%2Fdeploy-main-template.json)
+template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fmmlspark.azureedge.net%2Fbuildartifacts%2F0.17%2Fdeploy-main-template.json)
in the Azure portal.
(If needed, you can click the **Edit template** button to view and edit the
diff --git a/src/io/http/src/main/scala/services/CognitiveServiceBase.scala b/src/io/http/src/main/scala/services/CognitiveServiceBase.scala
index 26b5e1ef14..01c4f5a3c1 100644
--- a/src/io/http/src/main/scala/services/CognitiveServiceBase.scala
+++ b/src/io/http/src/main/scala/services/CognitiveServiceBase.scala
@@ -177,7 +177,7 @@ object URLEncodingUtils {
object CognitiveServiceUtils {
def setUA(req: HttpRequestBase): Unit = {
- req.setHeader("User-Agent", "mmlspark/0.16")
+ req.setHeader("User-Agent", "mmlspark/0.17")
}
}