Model servers
Note: it's best to make the model tests pass (see Building your own model) before running the server
To run the server, use the `serve` command, like so:

```shell
$ catwalk serve --model-path /path/to/your/model --debug
```
The `--debug` flag runs a Flask development server, avoiding the production-ready nginx and Gunicorn stack (see below).
By default, `catwalk` will run the model and server tests before starting the server itself. You can disable this with the `--no-run-tests` option.
The server can be hit with requests using curl (or any other REST tool).
E.g. for the RNG example:
```shell
$ catwalk serve --model-path example_models/rng --debug
$ curl -H "Content-Type: application/json" \
    -d '{"input": {"seed": 0, "seed_version": 2, "mu": 0.0, "sigma": 1.0}}' \
    http://localhost:9090/predict
```
Note how the response includes the input and output data, along with the model that was run and a generated `correlation_id` (UUID). In production, the `correlation_id` helps with linking API calls together.
You can optionally specify your own `correlation_id` in a request.
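A client can sanity-check a server-generated `correlation_id` before using it for tracing. A minimal sketch, assuming the generated id is a standard UUID string (the helper name is hypothetical, not part of catwalk):

```python
import uuid

def is_valid_correlation_id(cid):
    """Return True if cid parses as a UUID.

    Applies to server-generated ids; client-supplied ids (e.g. "1A")
    need not be UUIDs.
    """
    try:
        uuid.UUID(cid)
        return True
    except (ValueError, TypeError, AttributeError):
        return False
```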
E.g. for the RNG example:
```shell
$ curl -H "Content-Type: application/json" \
    -d '{"correlation_id": "1A", "input": {"seed": 0, "seed_version": 2, "mu": 0.0, "sigma": 1.0}}' \
    http://localhost:9090/predict
```
You can optionally specify the model to run in the request. The server will return a 404 if that specific model and version is not loaded. E.g. for the RNG example:
```shell
$ curl -H "Content-Type: application/json" \
    -d '{"model": {"name": "RNGModel", "version": "0.0.1"}, "input": {"seed": 0, "seed_version": 2, "mu": 0.0, "sigma": 1.0}}' \
    http://localhost:9090/predict
```
Model servers also have `/status` and `/info` endpoints. Both are GET requests: `/status` returns a `200` if the server is healthy, while `/info` returns the information contained in `model.yml`.
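A client-side health probe against `/status` might look like the following sketch, using only the standard library (the helper name and defaults are assumptions, not part of catwalk):

```python
import urllib.request

def server_is_healthy(base_url="http://localhost:9090", timeout=5):
    """Probe the /status endpoint; a 200 response means the server is healthy."""
    try:
        with urllib.request.urlopen(f"{base_url}/status", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False
```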
Sometimes in a stateless API chain you would like to send some additional data along with the request. You can do this with an `extra_data` key in the JSON. Whatever is in this key is not validated or touched by the server, and is returned in the result.
E.g. for the RNG example:
```shell
$ curl -H "Content-Type: application/json" \
    -d '{"extra_data": {"foo": "bar"}, "input": {"seed": 0, "seed_version": 2, "mu": 0.0, "sigma": 1.0}}' \
    http://localhost:9090/predict
```
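Taken together, the examples above show that the request body accepts optional `correlation_id`, `model` and `extra_data` keys alongside the required `input`. A minimal sketch of a payload builder (a hypothetical helper, not part of catwalk):

```python
import json

def build_predict_payload(input_data, correlation_id=None, model=None, extra_data=None):
    """Assemble the JSON body for a /predict request; only 'input' is required."""
    payload = {"input": input_data}
    if correlation_id is not None:
        payload["correlation_id"] = correlation_id
    if model is not None:
        payload["model"] = model  # e.g. {"name": "RNGModel", "version": "0.0.1"}
    if extra_data is not None:
        payload["extra_data"] = extra_data  # echoed back untouched by the server
    return json.dumps(payload)
```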
The default port for `catwalk` is 9090. You can, of course, set this to whatever you wish using the `--port` option:

```shell
$ catwalk serve --debug --port 9091
```
The server can load a YAML configuration file, which controls various parameters such as logging level and SSL. Once created, the server configuration should be set via the `--server-config` option:

```shell
$ catwalk serve --debug --server-config /path/to/config.yml
```
The REST API is served over HTTP by default. To enable HTTPS, add the following to the config file:
```yaml
server:
  ssl:
    enabled: true
    cert: /path/to/cert.pem
    key: /path/to/key.pem
```
Where `/path/to/cert.pem` (CA cert) and `/path/to/key.pem` (private key) are replaced with the correct locations.
When running the dockerized server (e.g. in a deployment), the `certs` folder should be mounted on the container at the path specified in the config file.
The server is implemented with Flask, but Flask's own development server is neither safe nor scalable for production. Following the examples from Amazon SageMaker and Ansible, Catwalk uses nginx and Gunicorn to productionise the model server. This creates a fast, concurrent WSGI server.
If you have a local installation of nginx, you can run the server in production mode:

```shell
$ catwalk serve --model-path /path/to/model
```

Note that this command is the same as for the development server, but without the `--debug` flag.
`catwalk serve` can use environment variables to set arguments such as the model path, config path and server port.

```shell
$ export MODEL_PATH=/path/to/env/model
$ export SERVER_CONFIG=/path/to/env/conf/application.yml
$ export SERVER_PORT=<some port number>
$ export RUN_TESTS=false
$ catwalk serve
```
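The variables above might be resolved along these lines. This is a sketch only: apart from the documented default port of 9090 and the `RUN_TESTS=false` convention, the fallback behaviour here is an assumption, not catwalk's actual implementation:

```python
import os

def resolve_server_args(env=None):
    """Sketch of resolving catwalk serve settings from environment variables."""
    env = os.environ if env is None else env
    return {
        "model_path": env.get("MODEL_PATH"),          # path to the model directory
        "server_config": env.get("SERVER_CONFIG"),    # optional YAML config path
        "port": int(env.get("SERVER_PORT", "9090")),  # documented default port
        "run_tests": env.get("RUN_TESTS", "true").lower() != "false",
    }
```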
Copyright 2020 Leap Beyond Emerging Technologies B.V. (CC BY 4.0 )