Do the benefits of signals as a language feature outweigh the costs of solidification into a standard? #220
Comments
@devmachiine @mlanza You might be surprised to hear that there is a lot of precedent elsewhere, even including the name "signal" (almost always as an analogy to physical electrical signals, if such a physical signal isn't being processed directly). I detailed an extremely incomplete list in #222 (comment), in an issue where I'm suggesting a small variation of the usual low-level idiom for signal/interrupt change detection. It's been slow making its way to the Web, but if you squint a bit, this is only a minor variation of a model of reactivity informally used before even transistors were invented in 1926. Hardware description languages are likewise necessarily based on a similar model. And this kind of logic paradigm is everywhere in control-oriented applications, from robotics to space satellites, almost out of necessity. Here's a simple behavioral model of an 8-bit single-operand adder-subtractor in Verilog, to show the similarity to hardware. And yes, this is fully synthesizable.

// Adder-subtractor with 4 opcodes:
// 0 = no-op (no request)
// 1 = stored <= in
// 2 = out <= stored + in
// 3 = out <= stored - in
module adder_subtractor(clk, rst, op, in, out);
  input clk, rst;
  input [1:0] op;
  input [7:0] in;
  output reg [7:0] out;
  reg [7:0] stored = 0;

  always @ (posedge clk) begin
    if (rst)
      stored <= 0;
    else case (op)
      2'b00 : begin
        // do nothing
      end
      2'b01 : begin
        stored <= in;
      end
      2'b10 : begin
        out <= stored + in;
      end
      2'b11 : begin
        out <= stored - in;
      end
    endcase
  end
endmodule

Here's an idiomatic translation to JS signals, using methods instead of opcodes:

class AdderSubtractor {
  #stored = new Signal.State(0)
  reset() {
    this.#stored.set(0)
  }
  nop() {}
  set(value) {
    this.#stored.set(value & 0xFF)
  }
  add(value) {
    return (this.#stored.get() + value) & 0xFF
  }
  subtract(value) {
    return (this.#stored.get() - value) & 0xFF
  }
}

To allow for external monitoring in physical circuits, you'll need two pins:
Then, consuming circuits can detect the output's rising edge and handle it accordingly. This idiom is very common in hardware and embedded systems. And these aren't always one-to-one connections. Here are a few big ones that come to mind:

It's not as common as you might think inside a single integrated circuit, though, since you can usually achieve what you want through simple boolean logic and (optionally) an internal clock output pin. It's between circuits where it's most useful.

Haskell has had something similar for well over a decade, though (as a pure functional language) it obviously does signal composition differently: https://wiki.haskell.org/Functional_Reactive_Programming

And keyboard and similar events are much easier to manage performantly in interactive OpenGL/WebGL-based software like simple games if you convert keyboard events to persistent boolean "is pressed" states, save mouse position updates to dedicated fields to then handle the deltas next frame, and so on. In fact, this is a very common way to manage game state, and the popularity of just rendering every frame like this is also why Dear ImGui is so popular in native code-based games. For similar reasons, that library also has some traction in highly interactive, frequently-updating native apps that are still ultimately window- or box-based (like most web apps).

If anything, the bigger question is why it took so long for front-end JS to realize how to tweak this very mature signal/interrupt-based paradigm to suit its needs for more traditional web apps.
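The "is pressed" pattern described above can be sketched in a few lines. This is an illustrative userland class; the name InputState and its methods are hypothetical, not from any proposal or library:

```javascript
// Converts transient keydown/keyup events into persistent "is pressed"
// booleans that a game loop can poll once per frame, instead of reacting
// to each individual event.
class InputState {
  #pressed = new Set();
  handleKeyDown(event) { this.#pressed.add(event.code); }
  handleKeyUp(event) { this.#pressed.delete(event.code); }
  isPressed(code) { return this.#pressed.has(code); }
}

// In a browser you'd wire these up roughly like:
//   addEventListener("keydown", (e) => input.handleKeyDown(e));
//   addEventListener("keyup", (e) => input.handleKeyUp(e));
// and then read input.isPressed("KeyW") inside the frame callback.
```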
As for other questions/concerns:
A single shared library isn't on its own a reason to do that. And sometimes, that library idiom isn't even the right way to go.

Sometimes, it is truly one library, and the library has the best semantics:

Sometimes, it's a few libraries dueling it out, like Moment and date-fns. The very heavy (stage 3)

Sometimes, it's numerous libraries offering the same exact utility, like these (sketched here in CoffeeScript):

Object.keys = (o) ->
  (k for own k, v of o)

Object.values = (o) ->
  (v for own k, v of o)

Object.entries = (o) ->
  ([k, v] for own k, v of o)

Object.fromEntries = (entries) ->
  o = {}
  for [k, v] in entries
    o[k] = v
  o

Speaking of CoffeeScript, it's even inspired additions of its own. And there have been many cases of that and/or other JS dialects inspiring additions.
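All four of the utilities sketched above eventually landed in the language itself as standard built-ins, which is the point being made; a quick check of their behavior (nothing here is proposal-specific):

```javascript
// The now-standard built-ins corresponding to the CoffeeScript sketches above.
const o = { a: 1, b: 2 };
console.log(Object.keys(o));        // ["a", "b"]
console.log(Object.values(o));      // [1, 2]
console.log(Object.entries(o));     // [["a", 1], ["b", 2]]
console.log(Object.fromEntries([["a", 1], ["b", 2]])); // { a: 1, b: 2 }
```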
There are also other cases (not just
For context, I myself was hesitant to even support the idea of signals until I saw this repo and dug deeper into the model to understand what was truly going on. And yes, there's been some pushback. In fact, I myself have been pushing back on two major components of the current design:
I also pushed back against

I've also been pushing hard for the addition of a secondary tracked (and writable) "is pending" state to make async function-based signals definable in userland.
@mlanza Welcome to the world of the average new stage 1 proposal, where everything is wildly underspecified, somehow both hand-wavy and not, and very much in flux. 🙃 https://tc39.es/process-document/ should give an idea of what to expect at this stage. Stage "0" is the wildest dreams, and stage 1 is just the first attempt to bring a dose of reality into it. Stage 2 is where the rubber actually meets the road with most proposals; it's where the committee has solidified on a particular solution. Note that I'm not a TC39 member. I happen to be a former Mithril.js maintainer who's still somewhat active behind the scenes in that project. I have some specific interest in this as I've been investigating the model for a possible future version of Mithril.js.
Good point, I agree. I can see the similarity to game designers and gamers who propose balance changes which wouldn't benefit the game (or gamers) as a whole and/or would have unintended consequences.
Interesting. Especially the circuitry example! But "X exists in Y" isn't enough justification on its own to include X in Z. I don't think JavaScript is geared towards those use cases; they're more in the domain of C/Zig.
Good point! I found the same to be true with the

I reconsidered some of the pros of signals being baked into the language (or DOM API) which I stated
There were similar concerns regarding the conclusion of not moving forward with the Observable proposal. I think there will be a lot of repeat discussion:
I can appreciate the standardization of signals, but I'm not convinced that TC39 is the appropriate home for signals. The functionality can be provided via a library, which is much easier to extend and improve across contexts.
@devmachiine You won't likely see

But just as importantly, memory usage is a concern. If you have 50k signals in a complex monitoring app (say, 500 items, with 15 discrete visible text fields, 50 fields across 4 dropdowns, 20 error indicators, and 5 inputs), and you can shave off an average of about 20 bytes from each of those signals by simply removing a layer of indirection (2x4=8 bytes) and not allocating arrays for single-reference sets (2x32=64 bytes per impacted object, conservatively assuming about 20% are single-listener + single-parent), you could've shaved off around an entire megabyte of memory usage. And that could be noticeable.
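The single-reference optimization mentioned above can be illustrated in userland. This is only a sketch with a hypothetical name (TinySignal); a real engine would apply the same idea at the object-representation level. The trick: store a lone subscriber directly in a field, and only allocate an array once a second one appears.

```javascript
class TinySignal {
  #value;
  #subs = null; // null → no subscribers, function → exactly one, array → many
  constructor(value) { this.#value = value; }
  subscribe(fn) {
    if (this.#subs === null) this.#subs = fn;   // common case: no array allocated
    else if (typeof this.#subs === "function") this.#subs = [this.#subs, fn];
    else this.#subs.push(fn);
  }
  get() { return this.#value; }
  set(value) {
    this.#value = value;
    const subs = this.#subs;
    if (subs === null) return;
    if (typeof subs === "function") subs(value); // single-listener fast path
    else for (const fn of subs) fn(value);
  }
}
```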
To me, the biggest advantage of this being part of the language is interoperability. If you want multiple libraries and UI components to interoperate by being able to watch signals in a single way, a standard feature is the only way to accomplish that. It's infeasible to have every library use the same core signals library, and eliminate all sources of duplication (from package managers, CDNs, etc) which would bifurcate the signal graph.
Well, I also think that the built-in APIs should be as ergonomic as reasonably possible. We shouldn't require that a library be used to get decent DX. I personally think the current API is near a sweet spot because it makes the common things easy (state and computed signals), and the complex things possible (watchers). Needing utility code for watchers makes sense, but IMO basic signal creation and dependency tracking should be usable with the raw APIs.
I struggle to think of what lower-level primitives could be useful. You need centralized state for dependency tracking. Maybe you could separate a signal's local state from its tracked state - say, State doesn't contain its own value - but you still need objects of some sort to store dependency and dirtiness data. I don't know what a lower-level API would even get you over the current State and Computed.
@mlanza To be concrete, what would your Atom implementation look like under the current API, vs an alternative that may be lower-level? Is State a hindrance?
I mean, is the class
I'm just trying to get concrete here. Are these primitives in the current proposal minimal and complete? What would be lower level? You seemed to be saying that the current API could be built from lower-level primitives. So I'm asking: what are those lower-level primitives? And what does your library look like implemented with the current proposed API vs those lower-level primitives?
If we were talking about significant performance improvements I'd have a significantly different opinion on this, but we aren't. I don't see any of the examples given here as convincing of the need for baking a concept into the language as a feature. Sure, some languages have signals - many don't. Many that don't have their own messaging/queue systems. When I look at proposals to any language, framework, etc., there are two criteria that I consider:
This proposal fails on both criteria, IMO. Signals can already be implemented, as they have been in so many languages, each with its own pros and cons. Sure, libraries may adopt native Signals internally, but then they have to work around limitations that are not intrinsic to the language to provide the same experience, and their value-add is no longer signals but /their/ view of signals, which countless people may already agree is optimal. So if/when they get added, the JavaScriptCore, SpiderMonkey and V8 teams will have to implement them in a compatible way. The Chromium, Safari and Firefox teams will have to validate and ensure their functionality within their browser environments. The Bun, Deno and NodeJS teams will have to validate and ensure their functionality within their non-browser environments. If even one of these gets something wrong, it leads to fragmentation, which leads to polyfills, which leads to additional complexity for something that already exists today and can be used with relative ease even without third-party libraries.

Performance is quite frankly the only argument here, and it's quite weak (again, just my opinion). A better approach would be to identify the specific issues that signals libraries face and try to solve for those - which often leads to resolving issues outside the topic at hand while keeping ECMAScript from becoming a bloated mess of hundreds of reserved keywords and features that leave the last 5 lines of code looking like a completely different language than the next 5.
The thing that's missing for me in those two criteria is interop. Many languages' standard-library features could be done in userland, but gain a lot of ecosystem value when packaged with the language. Maps can be implemented in libraries of course, but if you have a standard interface then lots of code can easily interact via the standard Map. Signals have an even bigger interop benefit from being built in than utility classes do, because of their shared global state. It would be difficult to bridge multiple userland signal graphs correctly. By being built in, code can use standard signals and be useful to anyone working in any library or framework that uses standard signals. You can see this already with the

Additionally, on the web, a very big benefit of being a standard is being able to be relied on by other standards. There's a lot of potential in having the DOM support signals directly in some way, but that's only possible if signals are either built into JS or built into the DOM.
@justinfagnani then where do you draw the line? JSX? Type checking? JavaScript shouldn't be an "ecosystem"; we shouldn't be looking to add libraries as core language features unless there are significant challenges that can't otherwise be solved, and that simply isn't the case here. Map solves significant limitations in the language that can't be solved in userland without significant drawbacks. The memory cost of a userland approach alone justifies its existence as a language feature.
Memory, speed, and interop are the three huge benefits I'm expecting with built-in signals.
Memory can only be optimized so far in a generalized system (hence my call to reconsider signals as a whole, as opposed to adding features that better facilitate signals). Speed can only be optimized by making certain assumptions, and frameworks can often make better assumptions. Interoperability isn't particularly an issue, and I've yet to see good examples of how this could provide better interoperability.
Reactive data interoperability has been a huge issue, in my experience. Unfortunately, I can't give the details around most of that due to NDAs. But it's a very serious issue. Standard signals would be worth it to me and my stakeholders even if the only thing it delivered was interoperability and none of the memory or performance hopes were realized.
@EisenbergEffect signals aren't synonymous with reactivity. Signals enable reactivity, but reactivity isn't everything that signals encompass, and many proposals were IMO far better approaches to solving this issue than built-in signals. Signals are reactivity, messaging, concurrency, data management, and so on. Using signals just to address reactive processes is overkill.
I think there are a lot of people who would disagree with that.
@dead-claudia this still doesn't address the key point, and if anything only makes things worse for signals. Signals - the concept - can use concurrency. Any person implementing this spec could choose to use concurrency at the engine level. Some signals libraries use concurrency. My point is that even in the proposal, native Signals aim to implement only a small fraction of what many libraries offer, and the idea that "It's not intended for end users but library developers" is a massive red flag. The point everyone keeps landing on is messaging - and yes, we do need better messaging. Many proposals mentioned were designed to address this but were abandoned. Adding signals is not how we get there IMO. As for making DOM APIs exposed to signals, that is a massive and drastic reworking of much of the API that can have some serious repercussions. Making various APIs return Signals would most certainly have performance penalties. If a library wants to do this, that's fine; the person using the library has accepted the ramifications. Most of these can easily - with minimal code - be handled with event listeners. I've yet to see a compelling example of built-in signals being anything more than a QOL feature for some people and nothing more.
@jlandrum Replying in multiple parts.
In theory, yes. In practice, the spec and the entire core data structure preclude implementation-level concurrency. In fact, the very in-practice lack of concurrency also makes
About the only thing this doesn't natively offer that 1. frameworks commonly do and 2. isn't deeply specific to the application domain (like HTML rendering) is effects, and that's by far one of the biggest feature requests in the whole repo. But even that is very straightforward:

function effect(fn) {
  const tracker = new Signal.Computed(() => { fn() })
  // Run this part in your parent computed
  return function track() {
    queueMicrotask(() => tracker.get())
  }
}
It's not that signals are not intended for use by end users. End user usage is being considered, just not as a sole concern. Syntax is being considered in the abstract to help simplify usage by end users. It's just been deferred to a follow-on proposal, like how
Keep in mind, signals aren't a general-purpose reactivity abstraction. You can't process streams with them, for one. (That's what async generators and callbacks are for.) They're specifically designed to simplify work with singular values that can change over time, and only singular values.
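That distinction can be made concrete with a toy comparison (illustrative userland code; Cell is a hypothetical stand-in for a state signal): a signal-like cell only ever exposes the latest value, while a stream retains every emission for processing.

```javascript
// A signal-like cell: holds exactly one current value, overwritten on set.
class Cell {
  #value;
  set(v) { this.#value = v; }
  get() { return this.#value; }
}

const cell = new Cell();
const stream = []; // stand-in for an async generator's buffered emissions
for (const v of [1, 2, 3]) {
  cell.set(v);     // overwrites: intermediate values become unobservable
  stream.push(v);  // retains: every value stays available to consumers
}
console.log(cell.get()); // 3
console.log(stream);     // [1, 2, 3]
```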
This is incorrect. Browsers can avoid a lot of work:
And much of the stuff I listed before can uniquely be accelerated further:
The input's value can be read directly from the element, and a second "scheduled value" and a third "external input" value can be used to track the actual external state.
Combined with
Pointer and mouse button/etc. state could be read directly, using double buffering. Unless coalesced values are needed, this is just a simple ref-counted object.
Near identical situation to inputs, just the source of updates (navigation rather than user action) is different.
Near identical situation to inputs, just the source of updates (navigation rather than user action) is different, and it only needs to double-buffer a 1-byte enum primitive rather than allocate a whole structure.
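Double buffering a small value like that is a simple pattern. Here's a userland sketch with hypothetical names (the real mechanism would live inside the browser): writers touch a back slot, and a swap at the update boundary publishes it, so readers see one stable value for the whole frame.

```javascript
class DoubleBuffered {
  #front; #back;
  constructor(initial) { this.#front = initial; this.#back = initial; }
  write(value) { this.#back = value; }  // producer side (e.g. navigation event)
  swap() { this.#front = this.#back; }  // run once at the update boundary
  read() { return this.#front; }        // consumer side (e.g. a signal read)
}

const state = new DoubleBuffered("idle");
state.write("loading");
state.read(); // still "idle": readers are isolated until the boundary
state.swap();
state.read(); // now "loading"
```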
@dead-claudia perhaps I need a more concrete example of what this might look like for these examples. Will it replace existing properties? If so, this would potentially break code not looking to use signals, if it tries to get a property of a DOM element as a string and instead gets a Signal. Will they be additional properties? If so, we already have a lot of property bloat to handle legacy code. Will the properties be somehow convertible to Signals? If so, what about mixed-use code bases? And how do they internally know that a non-reactive property needs to update on change? What's being described sounds like a layer on top of the current ecosystem too, in which case it most certainly would increase resource usage, not reduce it, especially in many cases where a simple object with a getter/setter/subscription model would suffice while also offering the flexibility of asynchronous updates. I greatly appreciate the write-up and explanations, but this still sounds like something that promises one thing but will in reality be something completely different.
@jlandrum First and foremost, these would never replace existing properties. New properties would necessarily need to be used in browsers. There's not much property bloat in JS, but the ship for treating the property-bloat problem sailed before we even had the problem in the first place. There have been very few things removed from browsers after once being broadly available, and only because of one of a few reasons:
Mouse location is currently not directly queryable. A hypothetical

Such a signal would simplify and accelerate a couple of needs that currently rely extensively on highly advanced use cases for pointer events:
@dead-claudia new properties would be such a massive undertaking, given that when such events would need to be triggered varies so much. Plus, they would absolutely need to be lazy to avoid the overhead caused by unsubscribed signals. As for mouse position, there's no need to poll for it; the event gets fired every time it changes and doesn't get fired if it doesn't change. Yes, the event system is very inefficient given the number of Event instances that get created, but that's a separate issue entirely. And you can't just make it a signal; there would be a reasonable amount of pushback against having to call get() to read this information without using signals.
The subscription would obviously be lazy, and signals as proposed today already have such semantics. The
Evidence? My experience differs on this one.
@dead-claudia Why would WebGPU have influenced off-web APIs? They're completely different use cases and environments. Is it possible individuals have applied WebGPU semantics to their projects? Absolutely, but this furthers my point that signals are best made possible through changes to ECMAScript as opposed to putting them in directly; it should be up to those implementing their flavor of signals to implement them. The Reporting API isn't something that changes how developers would interact with things, so I'm not sure what the point of mentioning it is. Mutation observers are a questionable feature that were a product of their time; they likely wouldn't exist as they do if we'd had many of the alternatives we could have with modern language capabilities. WebSQL wasn't standardized for a reason, but it too lives on its own and doesn't exist within the DOM or other APIs unless explicitly used.
@jlandrum Responding in parts.
The API surface is small, but it involves a lot of work on browsers' part. Similarly, a theoretical
Mutation observers serve important roles in page auditing, and they enable userscripts and extensions to perform types of page augmentations they wouldn't be able to do otherwise.
But IndexedDB was. And my point for bringing that (and WebGPU) up is to raise awareness that browsers don't shy away from highly complex APIs as long as they actually bring something to the table. Anyways, this is starting to get off-topic, and I'm starting to get the impression confirmation bias may be getting in the way of meaningful discussion here. So consider me checked out of this particular discussion. 🙂
Gentle reminder that all participants are expected to abide by the TC39 Code of Conduct. In summary:
I think it's great that there is a drive towards standardization of signals, but that it is too specialized to be standardized here (please let us know in an issue - that's what this issue is for).

Signals are an interesting way to approach state management, but they are a much more complicated concept than something like a PriorityQueue/Heap/BST etc., which I think would be more generally useful as part of JavaScript itself.
What problem domains besides some UI frameworks would benefit from it? Are there examples of signals as a language feature in other programming languages?
What would be the benefit of having signals baked-in as a language feature over having a library for doing it?
When something is part of a standard, it's more work & time involved to make changes/additions, than if it was a stand-alone library. For signals to be a part of javascript, I think there would have to be a big advantage over a library.
I can imagine some pros being
Benefits of being part of JavaScript, the same being true if it is part of the DOM API instead
I can imagine some cons being
I think at the very least a prerequisite for this as a language feature should be that almost all of the web frameworks use a shared library for their core model of Signals; then it would be proven that there is a use case for signals as a standard, and it would be much easier to use that shared library as an API reference for a standard implementation.

If anyone could please elaborate more on why signals should be a language feature instead of a library, this issue could serve as a reference for motivation to include it in JavaScript. 🙃