fix: fixing the call to train.py directly in python
else we have issues with the versions
bcm-at-zama committed Sep 27, 2024
1 parent bfcc6f0 commit b71b8f5
Showing 6 changed files with 66 additions and 12 deletions.
54 changes: 47 additions & 7 deletions use_case_examples/deployment/breast_cancer/README.md
@@ -8,14 +8,54 @@ To run this example on AWS you will also need to have the AWS CLI properly set up
To do so, please refer to the [AWS documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html).
You can also run this example locally, either with Docker or by running the scripts directly.

-1. To train your model you can use `train.py`, or `train_with_docker.sh` to use Docker (recommended way).
+#### On the developer machine:

+1. To train your model you can either:
+   - use `train_with_docker.sh` to train inside Docker (the recommended way),
+   - or, only if you know what you are doing and can keep the package versions on both machines in sync yourself, run `python train.py` directly.

This will train a model and [serialize the FHE circuit](../../../docs/guides/client_server.md) in a new folder called `./dev`.
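For reference, here is a minimal, hypothetical sketch of what such a training script boils down to. The `fit`, `compile` and `save(via_mlir=True)` calls mirror the `train.py` diff further down; the dataset loading and the choice of a built-in `LogisticRegression` are assumptions:

```
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from concrete.ml.deployment import FHEModelDev
from concrete.ml.sklearn import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression()
model.fit(X_train, y_train)  # training happens in the clear
model.compile(X_train)       # compile the quantized model to an FHE circuit

# Serialize the circuit and the client/server artifacts into ./dev
FHEModelDev("./dev", model).save(via_mlir=True)
```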
-1. Once that's done you can use the script provided in Concrete ML in `use_case_examples/deployment/server/`, use `deploy_to_docker.py`.

-   - `python use_case_examples/deployment/server/deploy_to_docker.py --path-to-model ./dev`
+#### On the server machine:

+1. Copy the `./dev` directory from the developer machine.
+1. Launch the server via:

+   ```
+   python use_case_examples/deployment/server/deploy_to_docker.py --path-to-model ./dev
+   ```

+This will start a server in Docker, reachable on host port 8888 (mapped to port 5000 inside the container, where the server process listens).
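Under the hood, the Docker image wraps Concrete ML's deployment API. A rough sketch of the server-side logic, assuming the artifacts were saved in `./dev` (the actual web server lives in `use_case_examples/deployment/server/`):

```
from concrete.ml.deployment import FHEModelServer

server = FHEModelServer(path_dir="./dev")
server.load()  # load the serialized FHE circuit

def handle_request(serialized_encrypted_data: bytes, serialized_evaluation_keys: bytes) -> bytes:
    # Run the circuit on the encrypted input with the client's evaluation
    # keys; the result stays encrypted and only the client can decrypt it
    return server.run(serialized_encrypted_data, serialized_evaluation_keys)
```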

+#### On the client machine:

+##### If you use Docker on the client side:

+1. Launch `build_docker_client_image.py` to build a client Docker image.
+1. Run the client with the `client.sh` script. This will run the container in interactive mode.
+1. Then, inside this Docker container, you can launch the client script to interact with the server:

+   ```
+   URL="<my_url>" python client.py
+   ```

+where `<my_url>` is the content of the `url.txt` file written by the deployment script (if you do not set `URL`, `client.py` defaults to `http://localhost:8888`, which matches a server running in Docker on the same machine).
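For reference, `client.py` resolves the server address with `os.environ.get`, so the `URL` environment variable simply overrides the default (see the `client.py` diff further down):

```
import os

# client.py reads the URL the same way; the environment variable wins,
# otherwise the default matches the port exposed by deploy_to_docker.py
URL = os.environ.get("URL", "http://localhost:8888")
print(URL)
```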

+##### If you run the client side directly in Python:

+1. Prepare the client side:

+   ```
+   python3.8 -m venv .venvclient
+   source .venvclient/bin/activate
+   pip install -r client_requirements.txt
+   ```
+1. Run the client script:

-3. Once that's done you can launch the `build_docker_client_image.py` script to build a client Docker image.
-1. You can then run the client by using the `client.sh` script. This will run the container in interactive mode.
-   To interact with the server you can launch the `client.py` script using `URL="<my_url>" python client.py` where `<my_url>` is the content of the `url.txt` file (default is `0.0.0.0`, ip to use when running server in Docker on localhost).
+   ```
+   URL="http://localhost:8888" python client.py
+   ```

-And here it is you deployed a Concrete ML model and ran an inference using Fully Homormophic Encryption.
+And here it is! Whether you use Docker or Python for the client side, you have deployed a Concrete ML model and run an inference using Fully Homomorphic Encryption. In particular, the client script checks that the FHE predictions match the expected results.
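To make the client-side flow concrete, here is a hedged sketch of what `client.py` does with Concrete ML's `FHEModelClient`. The method names are the deployment API that the `client.py` diff below also uses; `send_to_server` and the paths are hypothetical stand-ins for the HTTP calls and folders the real script uses:

```
import os

from concrete.ml.deployment import FHEModelClient

URL = os.environ.get("URL", "http://localhost:8888")

# Hypothetical paths; the real script keeps them next to client.py
client = FHEModelClient(path_dir="./client", key_dir="./keys")
serialized_evaluation_keys = client.get_serialized_evaluation_keys()

def predict_one(x, send_to_server):
    # Quantize and encrypt one input, ship it with the evaluation keys,
    # then decrypt and dequantize the encrypted server response
    encrypted_input = client.quantize_encrypt_serialize(x)
    encrypted_result = send_to_server(URL, encrypted_input, serialized_evaluation_keys)
    return client.deserialize_decrypt_dequantize(encrypted_result)[0]
```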
13 changes: 11 additions & 2 deletions use_case_examples/deployment/breast_cancer/client.py
@@ -21,7 +21,7 @@

from concrete.ml.deployment import FHEModelClient

-URL = os.environ.get("URL", f"http://localhost:5000")
+URL = os.environ.get("URL", "http://localhost:8888")
STATUS_OK = 200
ROOT = Path(__file__).parent / "client"
ROOT.mkdir(exist_ok=True)
@@ -105,4 +105,13 @@
encrypted_result = result.content
decrypted_prediction = client.deserialize_decrypt_dequantize(encrypted_result)[0]
decrypted_predictions.append(decrypted_prediction)
-print(decrypted_predictions)
+print(f"Decrypted predictions are: {decrypted_predictions}")

+decrypted_predictions_classes = numpy.array(decrypted_predictions).argmax(axis=1)
+print(f"Decrypted prediction classes are: {decrypted_predictions_classes}")
+
+# Check the FHE predictions against the expected labels
+clear_prediction_classes = y[0:10]
+accuracy = (clear_prediction_classes == decrypted_predictions_classes).mean()
+print(f"Accuracy between FHE prediction and expected results is: {accuracy*100:.0f}%")

use_case_examples/deployment/breast_cancer/client.sh: file mode changed 100644 → 100755 (made executable); file content unchanged.
3 changes: 3 additions & 0 deletions use_case_examples/deployment/breast_cancer/client_requirements.txt
@@ -1,3 +1,6 @@
grequests
requests
tqdm
+numpy
+scikit-learn
+concrete-ml
2 changes: 1 addition & 1 deletion use_case_examples/deployment/breast_cancer/train.py
@@ -20,4 +20,4 @@
model.fit(X_train, y_train)
model.compile(X_train)
dev = FHEModelDev("./dev", model)
-dev.save()
+dev.save(via_mlir=True)
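This one-line change is the heart of the commit: serializing the circuit via MLIR keeps the saved artifacts usable when the client and server environments are not built from exactly the same versions, which is what the commit message refers to. A quick, hypothetical sanity check that the artifacts were written (the exact file names in `./dev` are an assumption):

```
from pathlib import Path

# Hypothetical sanity check: list what train.py wrote into ./dev
# (artifact names such as client.zip / server.zip are an assumption)
print(sorted(p.name for p in Path("./dev").iterdir()))
```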
6 changes: 4 additions & 2 deletions use_case_examples/deployment/server/deploy_to_docker.py
@@ -97,11 +97,13 @@ def main(path_to_model: Path, image_name: str):
    if args.only_build:
        return

+    PORT_TO_CHOOSE = 8888
+
    # Run newly created Docker server
    try:
        with open("./url.txt", mode="w", encoding="utf-8") as file:
-            file.write("http://localhost:5000")
-        subprocess.check_output(f"docker run -p 5000:5000 {image_name}", shell=True)
+            file.write(f"http://localhost:{PORT_TO_CHOOSE}")
+        subprocess.check_output(f"docker run -p {PORT_TO_CHOOSE}:5000 {image_name}", shell=True)
    except KeyboardInterrupt:
        message = "Terminate container? (y/n) "
        shutdown_instance = input(message).lower()
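The `-p {PORT_TO_CHOOSE}:5000` mapping exposes the container's port 5000 (where the server listens) on host port 8888, which is why `url.txt` and `client.py` now both point at `localhost:8888`. A quick reachability check once the container is up, assuming the default host port:

```
import socket

# Succeeds only if the container is running and the port mapping is in place
with socket.create_connection(("localhost", 8888), timeout=5):
    print("Server reachable on host port 8888")
```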
