```
$ WASMTIME_BACKTRACE_DETAILS=1 go test -v ./trealla
=== RUN   TestInterop/custom_function
2023/07/19 18:29:51 query: interop_test(X).
2023/07/19 18:29:51 query: X is 1 + 1.
=== NAME  TestInterop
    interop_test.go:34: trealla: query error: error while executing at wasm backtrace:
        0: <unknown>!pl_query
        1: 0x1fe1db - fn_sys_host_call_2
                      at /Users/guregu/code/trealla/fork/src/predicates.c:7692:12
        2: 0x234bc6 - start
                      at /Users/guregu/code/trealla/fork/src/query.c:1611:14
        3: 0x23b115 - execute
                      at /Users/guregu/code/trealla/fork/src/query.c:1823:9
        4: 0xf8327  - run
                      at /Users/guregu/code/trealla/fork/src/parser.c:3630:3
        5: 0x223e56 - pl_query
                      at /Users/guregu/code/trealla/fork/src/prolog.c:162:12
    Caused by:
        wasm trap: call stack exhausted
```
Works OK on macOS+arm64, so there's probably something platform-specific going on. This is running in a VM, though.
Hmm, looks like there's a 1MB (maybe 2MB?) stack size limit by default: bytecodealliance/wasmtime#900
This would explain a lot... the Trealla wasm build uses an 8MB stack.
There's no way to set this size in wasmtime-go yet, so I'll add that upstream and then fix this.
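For reference, a minimal sketch of how this could look from Go once the option lands. The `SetMaxWasmStack` name is an assumption here, mirroring Rust's `Config::max_wasm_stack`; it doesn't exist in wasmtime-go at the time of writing, and the module version suffix may differ.

```go
package main

import "github.com/bytecodealliance/wasmtime-go/v11" // version suffix may differ

func newEngine() *wasmtime.Engine {
	cfg := wasmtime.NewConfig()
	// Hypothetical option: raise the wasm stack limit to match Trealla's
	// 8MB build setting instead of wasmtime's ~1MB default.
	cfg.SetMaxWasmStack(8 << 20) // assumed name, mirroring Rust's max_wasm_stack
	return wasmtime.NewEngineWithConfig(cfg)
}
```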
Pretty surprising it doesn't read the stack size from the binary.
Hmm... it looks like it's more complex than I thought.
wasmtime divides the stack space every time a guest -> host -> guest (reentrant?) call happens.
Even with the larger stack, once there are more than two or so reentrant queries in flight, the stack each one gets is just too small.
This can be worked around by calling Clone() on the instance passed to the native predicate (sketched below), but it incurs overhead.
It might be possible to avoid this with a more complex locking scheme, but it's probably not worth it.
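Rough sketch of the Clone() workaround. The Register/predicate signature and error handling here are assumptions about the trealla-go API, not a verbatim copy of it:

```go
// Hypothetical host predicate; names other than Clone() and QueryOnce()
// are illustrative assumptions.
pl.Register(ctx, "my_host_pred", 1, func(pl trealla.Prolog, subq trealla.Subquery, goal trealla.Term) trealla.Term {
	// Work around the stack split: run the nested query on a cloned
	// interpreter, which gets a fresh wasm instance with a full stack
	// instead of re-entering the caller's already-divided stack.
	child, err := pl.Clone()
	if err != nil {
		return trealla.Atom("false") // simplified error handling
	}
	if _, err := child.QueryOnce(ctx, "X is 1 + 1."); err != nil {
		return trealla.Atom("false")
	}
	return goal // succeed with the original goal
})
```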