This repository has been archived by the owner on Dec 7, 2024. It is now read-only.

What features should be specified? #4

Open
tlively opened this issue Mar 2, 2022 · 4 comments

Comments

@tlively
Member

tlively commented Mar 2, 2022

In principle, features could be arbitrarily fine- or coarse-grained and could be retroactively applied to any instructions already in the spec. In practice, though, the only useful features will be those corresponding to toolchain-level target features that contain targeted, performance-sensitive instructions without a short path to widespread adoption.

I propose that the only feature we define to start out is "simd128", corresponding to the merged SIMD proposal. (If relaxed-simd ships before feature detection, it should have a feature as well.)
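For context, this is the status quo that in-binary feature detection would improve on: today, JavaScript embedders probe for simd128 by asking the engine to validate a tiny module whose only function uses an instruction from the final merged SIMD proposal. The bytes below are hand-assembled for illustration and are not part of this proposal:

```javascript
// Runtime probe for the merged SIMD proposal ("simd128"): validate a
// minimal module containing i8x16.popcnt, an instruction added late in
// the SIMD proposal, so older partial implementations also report false.
// Hand-assembled Wasm binary; illustrative sketch only.
const simdTestModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary version 1
  // Type section: one function type () -> v128
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7b,
  // Function section: one function with type index 0
  0x03, 0x02, 0x01, 0x00,
  // Code section: one body, no locals
  0x0a, 0x0a, 0x01, 0x08, 0x00,
  0x41, 0x00, // i32.const 0
  0xfd, 0x0f, // i8x16.splat
  0xfd, 0x62, // i8x16.popcnt
  0x0b,       // end
]);

const hasSimd = WebAssembly.validate(simdTestModule);
console.log('simd128 supported:', hasSimd);
```

The drawback this proposal addresses is that the probe lives outside the module, so toolchains must ship separate binaries (or JS glue) rather than a single module with conditionally validated sections.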

Here's why we might not want to define separate features for other proposals:

  • non-trapping float-to-int conversions, sign-extension operations, and bulk memory operations: These instructions are already widely supported, so the feature would not be useful.
  • exception handling, tail calls: It would be difficult to use these instructions conditionally while preserving program semantics without changing the entire compilation scheme, which would lead to code bloat, so these features would not be useful in practice.
  • threads and atomics: Properly using this feature conditionally would require changes to the memory section, which this feature detection proposal does not support. Embedders not wishing to support multithreading should implement this proposal but not provide any facilities to create new threads.

Are there any other features it would make sense to define for a feature detection MVP?

@rossberg
Member

rossberg commented Mar 7, 2022

Before discussing specifics, I would like to understand the long-term vision better. Right now it seems like SIMD will be a never-ending story. Is the idea that we'd introduce an ever-growing set of SIMD-related feature bits? At what granularity? With what frequency? And how do we evaluate these choices?

@tlively
Member Author

tlively commented Mar 7, 2022

It's hard to predict what future proposals will look like, so the mechanism this proposal introduces is meant to accommodate arbitrarily fine- or coarse-grained features. As a policy, though, I agree that it would be better to tend toward more coarse-grained features to reduce the number of features users have to learn and reason about.

Regarding the number and frequency of new SIMD proposals, I would expect that eventually we will have picked the low-hanging fruit and the marginal benefit of additional proposals (as opposed to just adding some new codegen optimizations in the engine) will not be worth it. Personally, I would guess with high confidence that we will end up with fewer than a dozen SIMD proposals over the next decade (but that's still just a guess, and it hasn't been run by anyone else). Both the number and frequency of proposals will ultimately be limited by the capacity and willingness of the CG to do work on them, so I wouldn't expect either to be overwhelming, almost by definition.

@dtig
Member

dtig commented Mar 8, 2022

> • threads and atomics: Properly using this feature conditionally would require changes to the memory section, which this feature detection proposal does not support. Embedders not wishing to support multithreading should implement this proposal but not provide any facilities to create new threads.

Can you elaborate on how invasive the changes to the memory section would be? While I agree with the reasoning for the first two points, I think it would be valuable for Threads to be included in the set of proposals feature detection targets.

> Before discussing specifics, I would like to understand the long-term vision better. Right now it seems like SIMD will be a never-ending story. Is the idea that we'd introduce an ever-growing set of SIMD-related feature bits? At what granularity? With what frequency? And how do we evaluate these choices?

I don't expect that we would introduce an ever-growing set of SIMD proposals. The evaluation of any new proposal should follow the same mechanism as previous SIMD proposals, i.e. the proposal is solving a concrete pain point, can be scoped to some logical subset of operations, and has proven performance benefits. This is not codified specifically anywhere, but it is my impression of how the CG has evaluated previous proposals. Regarding granularity and frequency: there is significant overhead in scoping a proposal, driving consensus in the CG, prototyping and obtaining performance data, and doing the actual standardization work. That establishes a high threshold for a proposal to be merged into the standard, especially with CG input and feedback at every phase, so I don't think we should expect a significantly large number of SIMD proposals.

@tlively
Member Author

tlively commented Mar 8, 2022

> • threads and atomics: Properly using this feature conditionally would require changes to the memory section, which this feature detection proposal does not support. Embedders not wishing to support multithreading should implement this proposal but not provide any facilities to create new threads.

> Can you elaborate on how invasive the changes to the memory section would be? While I agree with the reasoning for the first two points, I think it would be valuable for Threads to be included in the set of proposals feature detection targets.

It's not a matter of how invasive a change is; it's about whether the change is outside the code section. This proposal only supports conditional validation inside the code section, so it does not provide a way to conditionally declare a shared memory. Conditionally validated atomic operations could only be used in modules that used MVP unshared memories, in which case the atomics are not useful, or on engines that support shared memory declarations, in which case the engine probably supports atomic operations as well, so they wouldn't need to be conditional.
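To make the shared-memory point concrete: because the declaration lives in the memory section, support can only be probed at the module level today, not gated inside a code section. A hand-assembled, illustrative sketch of such a probe:

```javascript
// A shared memory is declared in the memory section, outside the code
// section, so conditional validation of the code section cannot gate it.
// Hand-assembled, illustrative module equivalent to: (memory 1 1 shared)
const sharedMemModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary version 1
  // Memory section: one memory, limits flag 0x03 = shared + has-max,
  // min 1 page, max 1 page (shared memories must declare a max)
  0x05, 0x04, 0x01, 0x03, 0x01, 0x01,
]);

const hasSharedMemory = WebAssembly.validate(sharedMemModule);
console.log('shared memory supported:', hasSharedMemory);
```

An engine without threads support rejects this module during decoding/validation, before any code-section machinery runs, which is why the proposal cannot help here.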

Similarly, this proposal will only be of limited help for proposals that introduce new types (e.g. a hypothetical v256). If those types appear in the type section or the global section or anywhere except the code section, engines would still have to support decoding the type, even if they don't choose to support the instructions that use it. A toolchain-level workaround for running on older engines would be to ensure that the new type is only ever used in the code section.
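To illustrate the point about new types: an engine that doesn't understand a value type cannot even decode a module that mentions it outside the code section. In this hand-assembled sketch, the byte 0x7a stands in for a hypothetical future type encoding (it is unassigned in the current spec), used as a function result in the type section:

```javascript
// A module whose type section references an unassigned value type
// encoding (0x7a here, standing in for a hypothetical future type like
// v256) fails to decode, even though no instruction ever uses the type.
const unknownTypeModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary version 1
  // Type section: one function type () -> <0x7a>, an unassigned valtype
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7a,
]);

const decodes = WebAssembly.validate(unknownTypeModule);
console.log('decodes in current engines:', decodes); // false
```

This is why the toolchain-level workaround mentioned above, keeping the new type confined to the code section, is the only way such a module could load on older engines.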
