Is the example supposed to panic if the input is too large? With ~2400 characters I get:
ggml_new_tensor_impl: not enough space in the context's memory pool (needed 271388624, available 260703040)
SIGSEGV: segmentation violation
PC=0x4d2645 m=0 sigcode=1
signal arrived during cgo execution
As a workaround I've added N *= 2; in bert_embeddings, and with that things work fine: if the input is too large, I get a "Too many tokens, maximum is 512" message printed to the output.
Obviously this isn't the right solution, but it suggests a size calculation is off somewhere.