The main interpreter loop here has grown very large. It uses an indirect-threaded VM and inlines decode and dispatch logic for all bytecodes. This approach made sense when the number of bytecodes was small. However, now that we've reached more than 400 bytecodes, having a gigantic function is unmaintainable.
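For anyone not familiar with the loop, here is a minimal sketch of the current shape (hypothetical names, not our actual opcodes), assuming the usual GCC/Clang computed-goto style of indirect threading: decode and every opcode body live inline in one function, and each body jumps straight to the next instruction's label.

```c
#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const uint8_t *code) {
    /* One label per opcode, all inside this single function. */
    static const void *dispatch[] = { &&op_push, &&op_add, &&op_print, &&op_halt };
    int64_t stack[64], *sp = stack;
    const uint8_t *pc = code;

#define NEXT() goto *dispatch[*pc++]   /* inline decode + dispatch */

    NEXT();
op_push:  *sp++ = *pc++;                        NEXT();
op_add:   sp--; sp[-1] += sp[0];                NEXT();
op_print: printf("%lld\n", (long long)*--sp);   NEXT();
op_halt:  return;
#undef NEXT
}

int main(void) {
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(prog);   /* prints 5 */
    return 0;
}
```

With four opcodes this is tidy; with 400+ it is one enormous function that is hard to navigate, review, and profile.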
We should consider extracting decode and bytecode logic into separate per-bytecode functions and dispatching via function calls rather than goto labels. The primary concern is the potential drop in performance. I propose we first measure this drop, if any, using a benchmark composed of compute-heavy (i.e., arithmetic, loops, and branching) and SQL-heavy programs.
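A minimal sketch of the proposed shape (again with hypothetical names): each opcode becomes its own handler function, and a small loop dispatches through a table of function pointers instead of jumping between labels. This is what we would be benchmarking against the computed-goto version above.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int64_t stack[64];
    int64_t *sp;
    const uint8_t *pc;
    int halted;
} vm_t;

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

/* One small, independently readable and testable function per opcode. */
static void op_push(vm_t *vm)  { *vm->sp++ = *vm->pc++; }
static void op_add(vm_t *vm)   { vm->sp--; vm->sp[-1] += vm->sp[0]; }
static void op_print(vm_t *vm) { printf("%lld\n", (long long)*--vm->sp); }
static void op_halt(vm_t *vm)  { vm->halted = 1; }

static void (*const handlers[])(vm_t *) = { op_push, op_add, op_print, op_halt };

static void run(const uint8_t *code) {
    vm_t vm = {0};
    vm.sp = vm.stack;
    vm.pc = code;
    while (!vm.halted)
        handlers[*vm.pc++](&vm);   /* decode, then call the handler */
}

int main(void) {
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(prog);   /* prints 5 */
    return 0;
}
```

The cost we need to quantify is the call/return and register-spill overhead per executed instruction, plus the loss of the per-opcode indirect branches that help the branch predictor in the threaded version.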
I suspect we'll identify categories of bytecodes that we can mark always-inline, and others whose invocation frequency and code size don't warrant it.
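Sketch of that split (hypothetical names, GCC/Clang attribute syntax): small, hot handlers get forced back inline into the dispatch loop, while large or infrequent handlers stay as ordinary out-of-line calls.

```c
#include <stdint.h>
#include <stdio.h>

/* Hot and tiny (arithmetic, comparisons, jumps): inlining avoids a call
 * per executed instruction on the hottest paths. */
__attribute__((always_inline)) static inline void op_add(int64_t **sp) {
    (*sp)--;
    (*sp)[-1] += (*sp)[0];
}

/* Cold and large (cursor/IO-style opcodes): call overhead is noise next
 * to the work the handler does, so keep it out of line and the loop small. */
__attribute__((noinline)) static void op_expensive(void) {
    puts("pretend this opens a cursor or sorts a result set");
}

int main(void) {
    int64_t stack[4] = { 2, 3 };
    int64_t *sp = stack + 2;
    op_add(&sp);
    printf("%lld\n", (long long)sp[-1]);   /* 5 */
    op_expensive();
    return 0;
}
```

The benchmark data should tell us where to draw that line rather than guessing up front.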