Integration Spike of Tensorflow #26
Installed TensorFlow on the server.
Need to deploy the Sentence Encoder there and serve an endpoint based on that model.
We can make REST calls to our own servable endpoint at:
Example:
API:
Docs: https://www.tensorflow.org/tfx/serving/api_rest
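A quick sketch of what such a REST call looks like, per the api_rest docs linked above. The host, port, and model name here are assumptions (TF Serving's REST endpoint defaults to port 8501, and predict requests go to `POST /v1/models/<model>:predict` with an `{"instances": [...]}` body):

```python
import json

# Assumed host/port and model name -- substitute the real servable endpoint.
HOST = "localhost"
PORT = 8501
MODEL_NAME = "universal_sentence_encoder"  # hypothetical servable name


def build_predict_request(sentences):
    """Build the URL and JSON body for a TF Serving REST predict call."""
    url = f"http://{HOST}:{PORT}/v1/models/{MODEL_NAME}:predict"
    body = json.dumps({"instances": sentences})
    return url, body


if __name__ == "__main__":
    url, body = build_predict_request(["hello world"])
    print(url)
    print(body)
    # Sending it would be e.g.:
    #   import requests
    #   resp = requests.post(url, data=body)
    #   predictions = resp.json()["predictions"]
```

The `instances` format sends one entry per input sentence; the docs also describe a columnar `inputs` format for named-tensor models.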
There is no documented way to check the REST API of saved_models at this moment. We can extract API documentation using, for example:

```
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
signature_def['serving_default']:
Concrete Functions:
  Function Name: '_default_save_signature'
  Function Name: 'call_and_return_all_conditional_losses'
```
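Output like the above comes from `saved_model_cli` (shipped with TensorFlow). A minimal sketch of invoking it, with the model directory an assumed path:

```python
import subprocess


def saved_model_cli_args(model_dir):
    # argv for `saved_model_cli show --dir <path> --all`, which prints the
    # MetaGraphDef tag-sets, their SignatureDefs, and the concrete functions.
    return ["saved_model_cli", "show", "--dir", model_dir, "--all"]


if __name__ == "__main__":
    # Hypothetical path; running this requires TensorFlow on the machine.
    args = saved_model_cli_args("/models/universal_sentence_encoder/1")
    subprocess.run(args, check=True)
```

The `serving_default` signature is the one the REST `:predict` endpoint uses, so its inputs/outputs tell you what the JSON request and response will contain.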
Spike implemented with the Multilingual Universal Sentence Encoder. API available at:
Steps:
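The steps themselves were not preserved in this comment, but the shape of the result is: once the encoder is served, each sentence sent to the predict endpoint comes back as one embedding vector. A sketch of pulling those vectors out of the response, assuming the default serving signature returns a plain `predictions` list:

```python
def extract_embeddings(response_json):
    """Pull embedding vectors out of a TF Serving predict response.

    The REST API wraps the model output in a "predictions" list with one
    entry per input instance (sentence), each a list of floats.
    """
    return response_json["predictions"]


if __name__ == "__main__":
    # Made-up response shape for two sentences; the real encoder's vectors
    # are much longer (512-dimensional).
    fake = {"predictions": [[0.1, 0.2], [0.3, 0.4]]}
    vectors = extract_embeddings(fake)
    print(len(vectors))  # one vector per input sentence
```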
Integration Spike completed. See the previous comment for the current way of deploying saved models. Work to be continued in #33.