TensorFlow Serving signature #27
Comments
@nidhikamath91 sorry, I'm not very familiar with TensorFlow Serving. You'd be better off posting this on their issue tracker. That said, this looks like a CLI argument error.
Thanks. But could you help me with how to create placeholders for the input data and use them in data.py, e.g. a placeholder for the input query?
@nidhikamath91 according to what I could gather from the MNIST example on TF Serving, I think your tensor_info_y needs to be the score outputs, or in this case the softmax prediction defined here: https://github.com/tensorlayer/seq2seq-chatbot/blob/master/main.py#L97
...
y = tf.nn.softmax(net.outputs)
...
...
tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
And what would be x? If it's the input query, how do I represent that?
Regards,
Nidhi
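To make the x/y question above concrete, here is a minimal sketch of how the two tensors could be wrapped into a predict signature, in the spirit of the MNIST TF Serving example (the placeholder shape and the 'query'/'scores' key names are assumptions; net.outputs comes from the repo's main.py):
x = tf.placeholder(dtype=tf.int64, shape=[1, None], name="encode_seqs")  # input query as word ids
y = tf.nn.softmax(net.outputs)                                           # softmax scores over the vocabulary
tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'query': tensor_info_x},
    outputs={'scores': tensor_info_y},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)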
@nidhikamath91 your def of x looks fine to me. However, I'm not sure this is going to work, since we're feeding the encoder state to the decoder during inference and then feeding the decoded sequence ids from the previous time steps one by one to the decoder until it outputs the end_id, so AFAIK you'll need to find a workaround for that. You should take a look at the inference method here and see what works for you: https://github.com/tensorlayer/seq2seq-chatbot/blob/master/main.py#L115. Best of luck!
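For context, the step-by-step decoding described above is roughly the following loop (a minimal sketch; net_rnn, encode_seqs2, decode_seqs2, y, start_id, end_id, idx2word and seed_id follow the repo's main.py and this thread, and np.argmax stands in for the repo's top-k sampling):
import numpy as np

# encode the query once and keep the final encoder state
state = sess.run(net_rnn.final_state_encode, {encode_seqs2: [seed_id]})

# decode one id at a time, feeding each prediction back in until end_id
word_id, sentence = start_id, []
for _ in range(30):  # cap the response length
    out, state = sess.run([y, net_rnn.final_state_decode],
                          {net_rnn.initial_state_decode: state,
                           decode_seqs2: [[word_id]]})
    word_id = int(np.argmax(out[0]))  # greedy pick; the repo samples from the top-k instead
    if word_id == end_id:
        break
    sentence.append(idx2word[word_id])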
In the part below:
input_seq = input('Enter Query: ')
sentence = inference(input_seq)
aren't you feeding the input query to the inference method? How can I take that input sequence as my x?
@nidhikamath91 we're manually converting the input query into a sequence of word ids before feeding it to the encoder.
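That manual conversion is just a vocabulary lookup done in plain Python before anything touches the graph; a minimal sketch of the idea (word2idx and unk_id are assumed to come from the repo's data pipeline, and the sample query is made up):
def query_to_ids(query, word2idx, unk_id):
    # lowercase, split on whitespace, and map each token to its id (unknown words map to unk_id)
    return [word2idx.get(token, unk_id) for token in query.lower().split()]

seed_id = query_to_ids('how do i reset my password', word2idx, unk_id)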
@nidhikamath91 apparently TF Serving doesn't support stateful models: tensorflow/serving#724
Interesting. Could you send me a link to where you found this?
https://stackoverflow.com/questions/49471395/adding-support-for-stateful-rnn-models-within-the-tf-serving-api
Can I convert it to stateless in any way?
@nidhikamath91 I don't think that's possible, since it's an autoencoding RNN application.
Hello, so I was thinking of the solution below; tell me what you think about it. I will create an inference graph with issue and solution placeholders and then serve the model.
encode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="encode_seqs")
issue = tf.placeholder(dtype=tf.string, shape=[1, None], name="issue")
table = tf.contrib.lookup.index_table_from_file(vocabulary_file=str(word2idx.keys()), num_oov_buckets=0)
state = sess.run(net_rnn.final_state_encode, {encode_seqs2: seed_id})
But I am getting the error below:
TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles. For reference, the tensor object was Tensor("hash_table_Lookup:0", shape=(1, ?), dtype=int64) which was passed to the feed with key Tensor("encode_seqs_1:0", shape=(1, ?), dtype=int64).
How do I proceed? @pskrunner14
@nidhikamath91 you can't feed tensors into input placeholders; just convert them to numpy arrays before doing so.
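Concretely, that means evaluating the lookup op first and feeding the resulting ndarray rather than the tensor itself; a minimal sketch of the idea (table, issue, encode_seqs2, net_rnn and sess are the names from the snippet above; input_query and the space-split tokenization are assumptions):
sess.run(tf.tables_initializer())                          # the lookup table must be initialized first
tokens = [input_query.split()]                             # shape (1, ?) batch of string tokens
seed_id = sess.run(table.lookup(issue), {issue: tokens})   # materialize the ids as a numpy ndarray
state = sess.run(net_rnn.final_state_encode, {encode_seqs2: seed_id})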
But converting them to numpy arrays is only possible at runtime, right? With eval()?
@nidhikamath91 yes, you could try constructing the graph in such a way that you only need to feed the seed id into the issue placeholder at runtime, which in turn passes the lookup tensor to the net_rnn directly, bypassing encode_seqs2 entirely.
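The suggestion, as I read it, is to keep the vocabulary lookup inside the exported graph so the serving signature only exposes the string input; a rough sketch of the wiring (the vocabulary file path is an assumption, and whether the repo's network can simply be rebuilt on top of the ids tensor is exactly the open question here):
issue = tf.placeholder(dtype=tf.string, shape=[1, None], name="issue")
table = tf.contrib.lookup.index_table_from_file(vocabulary_file="vocab.txt", num_oov_buckets=1)
ids = table.lookup(issue)  # int64 ids, shape (1, ?), produced inside the graph
# build (or rebuild) the encoder on top of `ids` instead of the encode_seqs2 placeholder,
# so a serving client only ever feeds `issue` and encode_seqs2 drops out of the signature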
Right now I was building an inference graph such that I pass the issue string to the issue placeholder, create the lookup tensor, and pass it to net_rnn. But how do I pass it to net_rnn?
I am trying to serve the model over TensorFlow Serving and I have created the signature below, but it doesn't seem to work. Please help me @pskrunner14.
# training data placeholders
encode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="encode_seqs")
decode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="decode_seqs")
# inference data placeholders
encode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="encode_seqs")
decode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="decode_seqs")
export_path_base = './export_base/'
export_path = os.path.join(
    tf.compat.as_bytes(export_path_base),
    tf.compat.as_bytes(str(1)))
print('Exporting trained model to', export_path)
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
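For completeness, the export step that usually follows the builder setup looks roughly like this (a sketch only; the signature keys mirror the predict_solution signature shown below, and since the graph uses index_table_from_file the table initializer has to be handed to the builder; everything beyond the names in the snippet above is an assumption):
prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'issue': tf.saved_model.utils.build_tensor_info(encode_seqs2)},
    # note: per the earlier comments, the output should really be the decoded/softmax
    # tensor produced by the model, not the decode_seqs2 placeholder
    outputs={'solution': tf.saved_model.utils.build_tensor_info(decode_seqs2)},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={'predict_solution': prediction_signature},
    legacy_init_op=tf.tables_initializer())  # initializes the vocabulary lookup table at load time
builder.save()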
I have the below signature,
C:\Users\d074437\PycharmProjects\seq2seq>saved_model_cli show --dir ./export_base/1 --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['predict_solution']:
The given SavedModel SignatureDef contains the following input(s):
inputs['issue'] tensor_info:
dtype: DT_INT64
shape: (1, -1)
name: encode_seqs_1:0
The given SavedModel SignatureDef contains the following output(s):
outputs['solution'] tensor_info:
dtype: DT_INT64
shape: (1, -1)
name: decode_seqs_1:0
Method name is: tensorflow/serving/predict
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_INT64
shape: (32, -1)
name: encode_seqs:0
The given SavedModel SignatureDef contains the following output(s):
outputs['classes'] tensor_info:
dtype: DT_INT64
shape: (32, -1)
name: decode_seqs:0
Method name is: tensorflow/serving/classify
But when I try to run it, I get the error below:
C:\Users\d074437\PycharmProjects\seq2seq>saved_model_cli run --dir ./export_base --tag_set serve --signature_def predict_solution --inputs='this is the text'
usage: saved_model_cli [-h] [-v] {show,run,scan} ...
saved_model_cli: error: unrecognized arguments: is the text'
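The error most likely comes from the argument format rather than the model: --inputs expects input_key=filename pairs (.npy/.npz files), and on Windows the single-quoted string with spaces gets split into extra arguments. Since predict_solution takes an int64 id tensor, one way to smoke-test the export is --input_exprs with already-converted ids, pointing --dir at the versioned directory as in the show command above (the id values here are made up):
saved_model_cli run --dir ./export_base/1 --tag_set serve --signature_def predict_solution --input_exprs "issue=[[4, 27, 913]]"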