From 34cc5d75dc2a1f18229ba944d1c7fe6b971b0b8d Mon Sep 17 00:00:00 2001
From: Deyu Fu
Date: Mon, 13 Feb 2023 13:25:40 +0800
Subject: [PATCH] update README and version

---
 README.md   | 4 +++-
 version.txt | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 3038cbe..a75d6db 100644
--- a/README.md
+++ b/README.md
@@ -5,6 +5,8 @@ distributed-embeddings is a library for building large embedding based (e.g. recommender) models
 in Tensorflow 2. It provides a scalable model parallel wrapper that automatically distribute
 embedding tables to multiple GPUs, as well as efficient embedding operations that cover and extend
 Tensorflow's embedding functionalities.
+Refer to [NVIDIA Developer blog](https://developer.nvidia.com/blog/fast-terabyte-scale-recommender-training-made-easy-with-nvidia-merlin-distributed-embeddings/) about Terabyte-scale Recommender Training for more details.
+
 ## Features
 
 ### Distributed model parallel wrapper
@@ -22,7 +24,7 @@ Python 3, CUDA 11 or newer, TensorFlow 2
 ### Containers ###
 You can build inside 22.03 or later NGC TF2 [image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow):
 ```bash
-docker pull nvcr.io/nvidia/tensorflow:22.10-tf2-py3
+docker pull nvcr.io/nvidia/tensorflow:23.01-tf2-py3
 ```
 
 ### Build from source
diff --git a/version.txt b/version.txt
index 0ea3a94..0d91a54 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-0.2.0
+0.3.0
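The patch above bumps the pinned NGC TF2 container tag to 23.01. Below is a minimal sketch of how that image would typically be used to build the library inside a container; the `docker pull`/`docker run` commands are standard NGC usage, while the mount path and the `pip install .` step are illustrative assumptions, since the README's actual "Build from source" section is not shown in this diff.

```bash
# Pull the NGC TF2 image that the updated README pins (23.01 release).
docker pull nvcr.io/nvidia/tensorflow:23.01-tf2-py3

# One-shot build: mount the source tree and install it inside the container.
# The mount path and the `pip install .` step are illustrative assumptions;
# the authoritative steps are in the README's "Build from source" section.
docker run --gpus all --rm \
  -v "$PWD":/workspace/distributed-embeddings \
  -w /workspace/distributed-embeddings \
  nvcr.io/nvidia/tensorflow:23.01-tf2-py3 \
  pip install .
```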