Add first-class support for differential script loading #4432
I'm -1 on this idea for the following reasons:
As to whether this is a problem worth solving, it depends on what you mean. I agree it's a worthwhile thing to do for authors to serve browsers code based on the syntax and capabilities they support. I think that problem is already solvable with today's technology though. |
I like the general idea of differential loading but I don't think this solution is the right one. My main problem is surrounding how these yearly feature sets will be defined. I think it would be difficult to gain consensus on what is included. I can also see a scenario where a Popular Website uses

I don't have a firm alternative, but I feel like some combination of import maps and top-level-await provide the primitives needed for differential loading. I could see a future feature of import maps that makes it a bit cleaner to do. |
Some initial responses:
It may be “solvable” through UA sniffing and differential serving, but in practice this approach somehow hasn’t gotten much traction. We commonly see websites shipping megabytes of unnecessary JavaScript. To apply the technique you describe, currently developers have to implement and maintain:
If instead, we could somehow standardize on some idea of “feature sets”, then browsers and tooling could align around that, and reduce this friction altogether. Developers could then perform a one-off change to their build configuration and reap the benefits.
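To make the friction concrete, the status-quo approach amounts to maintaining something like this on the server (a rough sketch only — the version cutoffs, regexes, and bundle names are illustrative, not a real compat table):

```javascript
// Sketch of server-side differential serving via User-Agent sniffing.
// The version cutoffs and bundle names below are made up for
// illustration — a real implementation needs a maintained UA-parsing
// library and an up-to-date feature/version table.
function bundleForUserAgent(ua) {
  const chrome = ua.match(/Chrome\/(\d+)/);
  if (chrome && Number(chrome[1]) >= 71) return "app.2018.js";
  const firefox = ua.match(/Firefox\/(\d+)/);
  if (firefox && Number(firefox[1]) >= 64) return "app.2018.js";
  // Everything unrecognized gets the lowest-common-denominator build.
  return "app.legacy.js";
}
```

Every new browser release and every new syntax feature means revisiting this table, which is exactly the maintenance burden a standardized "feature set" label would remove.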
Which division are you seeing? There’s no reason npm and Node.js couldn’t adopt the same “feature sets” we standardize on.
Why do other entry points such as dynamic
It would depend on the chosen process. We can make this as complicated or as simple as we want. It could be as simple as just picking a date. The date then maps to a list of latest versions of stable browsers at that point in time. That list of browsers then maps to a set of features that are fully supported (by some heuristic, e.g. 100% Test262 pass rate for ECMAScript-specific features). There’s no point in arguing about which features should be included if we can just look at browser reality and figure it out from there. |
I don't think this alignment necessitates new browser features.
The proposal includes language features, but not web platform features.
Because they are other ways of loading scripts, and if the problem statement is differential script loading, then you need to ensure those also allow differential script loading.
As I tried to point out, it is not that simple. A concept such as "latest versions of stable browsers" is itself super-fraught. |
There’s no reason it cannot include web platform features. |
Given all of about 15 minutes' worth of thought I am a little hesitant to share anything like a 'real' opinion here, but my gut reaction was kind of similar to what @domenic said except that I fall way short of
That's not to say "it does" either, just that I also fully accept @mathiasbynens' general premise: what "can" technically be done doesn't seem to have caught on and is probably more challenging than it should be - but I don't know how to fix that either. |
FYI: In the node modules working group, we're currently exploring extending the existing import map alternatives pattern to support this kind of environment matching: jkrems/proposal-pkg-exports#29 |
The User-Agent is usable for many scenarios to provide a varied document or script response, but not all scenarios. For instance, within a Signed HTTP exchange, how would an author vary the response for either a document or subresource script resource based on the user-agent header? When hosting a simple static document, how would the document author vary a script source based on user-agent? Additionally, User-Agent requires document authors to correctly parse and leverage the information within. There are efforts to reduce the complexity of this burden, but it's still not clear if they will happen. Allowing the User-Agent to provide a clear signal (via This proposal attempts to provide a similar mechanism as
This is an interesting point, the intention is the
This proposal doesn't attempt to reduce transpilation to zero for specific
A goal of this proposal is to reduce the complexity in safely shipping differential JavaScript. This would require browser vendors working with one another to establish the items included in each yearly revision. However, I and other Web Developers would hope this is achievable... the goal is to make documents use more of the code they were authored with. If a User-Agent doesn't pass the defined set of tests for a yearly revision, they should not report that version in the
All of the above items are addressable with support added to Would specifying the behaviour for these items independently (as done with
Not intentional. This proposal starts with a smaller target than the entire web platform, but no division is intended. |
There's more than one way to get at this sort of information; I wonder what you'd recommend. I like the idea of making the decision on the client side, as import maps does. I've heard it can be impractical to deploy UA testing in some scenarios. If inefficient JavaScript is being served today, I'm wondering why. Is it not efficient enough to do the tests? Are tool authors unaware of the technique? Is it impractical to deploy for some reason? I bet framework and bundler authors would have some relevant experience. |
that seems too ambitious of a solution... (agreeing on which features are in which "group" seems to be way too hard... and it can differ vastly depending on which technology you are using)

```js
if (this and that feature is supported) {
  load(this);
} else if (other and more is supported) {
  load(other);
} else if (...) {}
```

imho it's about having a way of getting these checks auto-generated into your |
The
It's impossible to solve this with a lowest-common-denominator approach. So you can either ship (and maintain!) multiple highly-specialized builds to each browser, or you can ship and maintain LCD builds. Having just a yearly LCD build seems like an excellent middle ground compared to compile-everything-to-es5 or every-browser-version-gets-its-own-build.
I agree. This is the most hand-wavey part of the design, and will probably make it more difficult for devs to determine what needs to be done to generate an LCD build. But what if we change it a bit? Instead of making the browser vendors (or any standards body) determine what needs to be natively supported for Chrome to include "2018" support, we make it the build year. Every Chrome/Safari/Firefox/X built in 2018 advertises "2018". The community can then decide what 2018 means in an LCD build.

Eg, Chrome XX shipped in 2018 and advertises "2018". Firefox YY shipped in 2018 and advertises "2018". We know YY supports some feature (say, Private Fields) that XX doesn't. So, we know that if we want to ship a 2018 build that all 2018 browsers can understand, we need to transpile Private Fields. If Chrome adds support for Private Fields in 2018, the transpile is still necessary, because the 2018 LCD doesn't support it. By the time 2019 rolls around, everything supports Private Fields, and we know we no longer need to transpile it in the 2019 LCD.
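The build-year LCD described above can be sketched as a set intersection over per-browser feature lists (the support table below is invented purely for illustration — it is not real compat data):

```javascript
// Hypothetical feature support for browsers built in 2018.
// This table is illustrative only, not real compatibility data.
const support2018 = {
  chrome: ["object-rest-spread", "async-generators"],
  firefox: ["object-rest-spread", "async-generators", "private-fields"],
  safari: ["object-rest-spread"],
};

// The "2018" LCD is the intersection: features every 2018 browser
// supports. Anything outside it still needs transpiling in the
// community-defined 2018 build.
function lcd(table) {
  return Object.values(table)
    .reduce((acc, features) => acc.filter((f) => features.includes(f)));
}
```

With this table, `lcd(support2018)` keeps only `object-rest-spread`, so a "2018" bundle would still transpile async generators and private fields until every listed browser catches up.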
The 2018 build should be responsible for only loading 2018 build files. The 2017 build should be responsible for only loading 2017 build files. What's needed is the way to load the build's entry point, not the way for the build to load other files from the same build. |
I very much like the idea at a conceptual level. In a way it is feature grouping. I believe that in today's browser landscape, most developers would conceptually divide the browsers they support into 2 levels, 3 at best.

I share the concern of others on how you would possibly define these levels in a democratic and neutral way, but I'm not pessimistic about it. For the simple reason that if it were skewed to any particular interest or highly opinionated, it still does not necessarily harm the ecosystem, as you could just not use it and keep using low-level feature detection. So it seems a progressive enhancement to me.

I would imagine it as feature group detection, not just at the JS module level, but also at the CSS level and inline JS level. So anywhere in the code base you would be able to test for it (so also via @supports). This idea is wider in scope than the proposal, and would only work if all browsers have support for this type of testing, which may be a showstopper, I realise.

If feature grouping were a thing, organisations could simply decide to support a "year" (or 2, 3) instead of the infinite matrix of possible browsers and the individual features they do or do not support. It could get rid of a whole lot of looking up what is supported, and a whole lot of low-level feature detection. It would greatly simplify feature detection code and it would be far simpler to retire a level of support. Test for 3 things, not 60, to sum it up.

Another benefit as a side-effect: perhaps it would streamline coordination of feature delivery across browsers. Meaning, if browser 1 ships A yet browser 2 prioritizes B, feature A is not usable by developers without a lot of pain. A great example of coordinated delivery is of course CSS Grid support.

Whilst being dreamy, I might as well go completely off-track: being able to define such a feature group yourself, to bypass the problem of trying to define one for the world. It's inherently an opinionated thing. Don't take this one too seriously though, I haven't considered implementation at all. |
The problem might technically be solvable currently, but feature detection based on user agent strings runs counter to well-established best practices. It also puts the implementation burden on application developers rather than browser vendors. @kristoferbaxter already raised this, but I think it's worth reiterating — a lot of sites are entirely static, and if anything frameworks are encouraging more sites to be built this way. That rules out differential loading based on user agent as a general solution. So without speaking to the merits of this particular proposal, which others are better qualified to judge, it does address a real problem that doesn't currently have a solution. |
Conceptually and at a general level, a feature such as this will most definitely be valuable as the ECMAScript specification advances. However, the use of the

As a further consideration, The

On the topic of the feature sets, the years already have a well-defined meaning (i.e., they map directly to the ECMAScript specification). Creating a parallel definition will most likely lead to developer confusion and broken applications, as the distinction would not be obvious. Unfortunately, using a different categorization system (levels, for instance) would essentially have the effect of creating an alternate specification. This could also lead to confusion and potential bifurcation of the standards process. Strict adherence to the specification versions may be the only long-term viable and supportable option.

I think the main draw of a feature such as this would be to leverage more advanced syntactical capabilities which would provide greater potential for reduced code size and increased performance. At a minimum, allowing for declarative feature detection of capabilities such as dynamic import or async iterators would be a boon. |
@clydin I agree Subresource Integrity should be supported somehow, eventually. I don't think lack of SRI support should block an initial version of this proposal to land (just like it didn't block |
A note on naming: could we call this Differentiated script loading rather than Differential? The latter initially made me think this involved sending script patches over the wire. |
This requires the script being external, correct? What about inline scripts?

```html
<script type="module">
  // What syntax am I?
  // What syntax is this worker?
  new Worker('./worker.js');
</script>
```
|
To expand on @daKmoR's point (#4432 (comment)): what if we target features instead of years? Just like CSS does with

This might look like this:

```html
<script
  src="app.bundled-transpiled.js"
  support="
    (async, dynamic-import) app.modern.js,
    (async, not dynamic-import) app.bundled.js
  "
></script>
```

Pros:

Cons:
|
While I agree on the utility of this feature and that getting it in the hands of developers sooner rather than later would be useful, I don't think it is prudent to make security-related concerns an afterthought for a design that changes the semantics of code loading and execution. The integrity attribute is also one of multiple current and future attributes that would potentially need to be added to the

As an alternative, what about a markup-based solution? (naming/element usage for illustrative purposes):
Allows full reuse of the existing script element with semantics similar to |
I don't think this is a problem worth solving. On one hand I think it is easy enough to solve this for people who want to today; which I imagine to be a tiny fraction of developers; I imagine most folks will continue to use Babel as a compilation step, and a huge portion of these folks will only output one target (probably whatever

```js
if (featureDetectEs2018()) {
  import('./index.2018.js')
} else if (featureDetectEs2017()) {
  import('./index.2017.js')
}
```

Perhaps effort would be better put into a

My second point, which coincides with a few commenters here, is that there really is no way of knowing what something like
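The `featureDetect*` helpers in that snippet are left undefined; a minimal sketch (assuming syntax probing via the `Function` constructor is acceptable, and that the chosen samples are representative of each year) might look like:

```javascript
// Probe syntax support by attempting to compile (not run) a sample.
function compiles(sample) {
  try {
    new Function(sample);
    return true;
  } catch (e) {
    return false;
  }
}

// ES2017: async/await
function featureDetectEs2017() {
  return compiles("async function f() { await 0; }");
}

// ES2018: async generators, object rest/spread, RegExp lookbehind
function featureDetectEs2018() {
  return featureDetectEs2017() &&
    compiles("async function* g() {} var { a, ...rest } = {};") &&
    compiles("/(?<=a)b/");
}
```

Note that `new Function` requires an `unsafe-eval`-permitting CSP, which is one practical reason syntax probing hasn't caught on everywhere.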
Issues like the above Edge bug lead me to my next major concern with this: what happens if bugs are discovered after the browser begins shipping support for this year's syntax? What recourse do I have if Edge begins optimistically fetching my |
Quite a good point. The value for supported |
If folks are interested in more granular feature testing, in the style of |
I also have this concern. For this reason, I think that Using |
If scripts would be in different files (file name patterns) - it would be a pain to

If scripts would be in different directories - that would solve some problems -

```html
<script type="module"
        srcset="2018/index.js 2018, 2019/index.js 2019"
        src="2017/index.js"></script>
<script nomodule src="index.js"></script>
```

^ it's the same

Plus - it's much easier to generate these directories - just create the first bundle, keeping all language features, and then transpile a whole directory to a lower syntax, which could be done much faster and safer. Here is a proof of concept. |
I think CSS

And even if we make it less granular (

Tying this to a specific set-in-stone supports list is the wrong way to approach this. Instead, we need a way to easily group browsers into a category, and let the community decide what is supported by that category. The category should be granular enough that we can get reasonable "clean breaks" in feature support (eg, how module implies a ton of ES6 support), but not so granular that it is valuable for fingerprinting.

That's why I think a browser's build year is an excellent compromise. Having a full year in a single category means there's not much information to fingerprint (even Safari's privacy stance is allowing the OS version in the

Plus, it'll be soooo damn easy for Babel to spit out "2018", "2019", "2020" builds using |
I'd definitely support Babel or other parts of the tooling ecosystem working on producing babel-preset-env configurations based on browser release year. Then someone could invest in the server-side tooling to find the release year in the UA string and serve up the appropriate babel output. That makes sense as the sort of drop-in configuration change being proposed here, and best yet, it works with existing browser code so you can use it today in all situations. |
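As a sketch of what such a drop-in configuration could look like (assuming @babel/preset-env plus browserslist's date-based `since <year>` query as a stand-in for "browsers released since that year"):

```javascript
// Hypothetical babel.config.js for one per-year bundle. The
// "since 2018" browserslist query (browser versions released since
// January 2018) approximates a build-year LCD; swap the year per
// output target to produce the other bundles.
const babelConfig = {
  presets: [
    ["@babel/preset-env", {
      targets: "since 2018", // browserslist date query
      bugfixes: true,
      modules: false, // keep ES modules for <script type="module">
    }],
  ],
};

// Guarded so the snippet also loads in non-CommonJS environments.
if (typeof module !== "undefined") {
  module.exports = babelConfig;
}
```

The server-side half would then map the UA's release date to one of the generated bundles, which is the part that still requires UA parsing today.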
Sorry, thought I was done with this thread, but dinner gave me more to think about. Let me assume the principle behind this proposal is sound (after all, @jridgewell certainly has sound, convincing arguments); are there better solutions based on this premise?

Case #1: FF releases in January 2019, without a feature that would raise the lowest common denominator. In Mar 2019, they release a feature that does raise the LCD. Because the 1/19 version of Firefox will load a "2019" bundle, all browsers miss out on untranspiling that feature until 2020, when it is finally unambiguously supported by browsers released in that year. Wouldn't YYYY-MM-DD format be preferred?

Case #2: a new-fangled browser enters the market, and doesn't support the current LCD of 2019. Do we expect tool authors to downlevel the meaning of 2019? Or do we pressure the new-fangled browser to advertise a lower release year? It seems there is still a use case in which a browser might want to lie about support to stifle competition? |
That does not make sense. So - it's 2020 New Year's Eve, and just after the fireworks... I should go and reconfigure and redeploy all the sites? And test the result, for sure. What will I get from it?
So:
Looks like it's still just two bundles to ship - IE11 and ES modules targets. Fine tuning for a specific language/browser version is a cool feature, but is it actually needed? Would it help ship smaller bundles? Meaningfully smaller? Would it make the code faster? Meaningfully faster? That is the question to answer - would fine-grained tuning and polyfill nit-picking solve anything? Or it might be a bit more coarse. We are not fighting here for kilobytes; we have to fight megabytes. |
How many years have we been trying to use feature detection, and avoid versions? A year or ECMA edition is just another version number. If feature detection is needed for scripts, then it should be modelled upon the media attribute, the CSS equivalent for this, e.g. I would hope we don't introduce yet another mini-language for detecting script features. Also note that Firefox used to have a similar

(Aside: maybe the CSS media query would be useful for script loading too - one major distinction in our bundles is small screen. If I could bundle mouse and touch support separately, I probably would) |
Declarative feature detection is partially a dead end. Combinatorial explosion: there is no way you will pre-create the 100500 bundles you might decide to load; every new condition doubles the existing bundle count. So, client-side feature detection? A big problem with feature detection is the location of such detection - you have to ship the feature-detection code to the browser first; only then can you load the code you need.
The problem: the actual code load is deferred by the whole time needed to load, parse and execute an initial bundle. For me in Australia, with servers usually located in the USA (plus some extra latency from a mobile network), that would be at least 500ms.
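That round-trip penalty is why some authors inline the detection so it runs before any application-code request starts; a rough sketch (the bundle URLs and probe are hypothetical):

```javascript
// Tiny inline loader, intended to be embedded directly in the HTML
// <head>. It picks a bundle via a syntax probe before any app code is
// fetched, so the only extra cost is a few milliseconds of probing —
// not a full network round trip for a loader bundle.
function supportsSyntax(sample) {
  try { new Function(sample); return true; } catch (e) { return false; }
}

var bundle = supportsSyntax(
  "async function* f() { for await (const x of []) {} }"
) ? "/app.2018.js" : "/app.legacy.js";

// In a browser, inject the chosen bundle; guarded so the snippet can
// also run in non-DOM environments.
if (typeof document !== "undefined") {
  var s = document.createElement("script");
  s.type = "module";
  s.src = bundle;
  document.head.appendChild(s);
}
```

The trade-off remains: an injected script is invisible to the preload scanner, so the request still starts later than a plain `<script src>` would.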
|
There's nothing stopping user agents from downloading and even starting to parse files that might never be needed (and in fact preloading is already similar in this respect). If use is contingent upon JavaScript evaluation, as I believe makes sense, then they can make an educated guess and confirm once the page catches up (which in the case of thoughtful authors using simple tests delivered inline and preceding other downloads would be practically immediately). |
Lots of fascinating discussion above. I'm not sure what the right combination is between coarse presets, fine-grained declarative feature tests, imperative feature tests in JavaScript, or UA sniffing, but I think we can figure this out. At a high level, I'm wondering, should the discussion about this proposal and about import maps somehow be connected? These are both about feature testing and deciding which code to load. What is it about built-in modules that make them make more sense to check individually, whereas JavaScript syntax features would be checked by year of browser publication? |
my addition to the proposal: Add a (dynamically calculated when the page is served) hash of the source file as a

The browser should calculate this hash for all cached files when it saves them. When the cached hash matches with

This would in practice merge the performance of all CDNs worldwide. |
@andraz That seems like a separate, orthogonal proposal. |
@andraz Unfortunately, sharing caches between origins leads to privacy issues, letting one origin get information about which things have been hit by another origin (through measuring timing). I'm not sure where to find a good link for this, but it's why some browsers use a so-called double-keyed cache. |
That's one possibility, but the more granular we get, the more valuable it is as a fingerprint. Yearly just seemed arbitrarily good enough.
The 10ms was just two syntax tests to see if they'd throw or not. It rose to 18ms when inspecting

I can definitely see delivering over-transpiled code as a negative. But this code is parsed and compiled off-thread, and so it won't block the initial paint. I'd personally prioritize minimizing the first paint with over-transpiled code rather than delaying first paint to decide on the perfect bundle. I can't back this up, but maybe even time to first interactive will be faster, since a declarative approach won't block the start of the request (the trade-off being start of request vs smaller parse).
Yes, this is the biggest trade-off we'll have to make. But I see this as being worth it for the chance to ship any new syntax at all. Right now, the easiest break between old and new code is just the module/nomodule detection. (And to make it clear, I would still feel this way even if Firefox shipped that new feature in February, after not having it in a January release)
Thinking about this, I'd equate it to "what if a new browser shipped with module/nomodule, but without any other ES6 syntax". I'm not sure I would start transpiling my module build down to ES5+module imports. As the years progress, the current LCD becomes par for the course. If a new browser doesn't meet that, they risk developers choosing not to support them.
Wouldn't this double/triple/quadruple the amount of JS downloaded? Taking FB as an example, its bundle is already 140kb. Even taking into account the smaller bundle sizes, I'd imagine we'd be downloading 400kb, 200-300kb of which would be inert? That seems bad, especially for users with low bandwidth.
I feel like import maps is such a generic underlying tech that it could do both yearly-LCD and feature-tests relatively easily. 😃 I would be fine if we didn't add |
Another thought I had is about the entropy of this. Lower entropy translates pretty easily into higher cacheability. One of the explicit reasons I can't use the But, something simple like a |
Using YYYY-MM-DD is less fingerprintable than UA. In fact, it’s a derived attribute of UA... all you need is a server side map of UA to release date. Re: cache hit rate, YYYY-MM-DD would be equally cacheable to YYYY, depending on what you specify in the script tag. This is the first time you mentioned My point about YYYY-MM-DD was that you’d say |
For now, but I imagine that might change. Safari originally intended on permanently freezing the UA string. They allowed it to change based on the OS's version in https://bugs.webkit.org/show_bug.cgi?id=182629#c6, mainly to allow this exact "ship newer JS" feature. If a less granular option to accomplish that were made, they may reconsider a permanent freeze.
I originally mentioned it in #4432 (comment), on why server-side varying based on
I think both of these ways has merit. If we decided YYYY-MM-DD, I'd be perfectly happy. |
You omitted the second part of that comment: "…which in the case of thoughtful authors using simple tests delivered inline and preceding other downloads would be practically immediately". User agents are in an ideal position to make the best tradeoff between delaying progress vs. downloading too much, but neither of those are necessary at all unless they guess wrong, and that will only happen when page content mucks with the environment or employs complex tests (both of which authors are disincentivized to do). What browser would download the wrong file from a block like this?

```html
<scriptchoice>
  <script when="[].flat" type="module" src="2019.mjs"></script>
  <script when="Object.entries" type="module" src="2018.mjs"></script>
  <script type="module" src="2017.mjs"></script>
</scriptchoice>
<script nochoice src="legacy.js"></script>
```

We could even specify evaluation of each condition in its own realm, guaranteeing default primordials but at the expense of bindings available in the outer realm—which honestly might be worthwhile even if it would result in ugliness like

```html
<!-- As of version 12, Safari supports ES2018 except regular expression lookbehind. -->
<script when="/(?<=a)a/" type="module" src="2018.mjs"></script>
<script when="/./s" type="module" src="2018-nolookbehind.mjs"></script>
```
|
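For comparison, a userland approximation of the hypothetical `when` attribute above could evaluate each condition expression and treat any throw as "unsupported" (`whenMatches` is an assumed helper name, not part of any proposal text):

```javascript
// Evaluate a when="…"-style condition: a truthy result means the
// feature test passed; any SyntaxError or ReferenceError during
// compilation or evaluation means "not supported".
function whenMatches(expr) {
  try {
    return Boolean(new Function("return (" + expr + ");")());
  } catch (e) {
    return false;
  }
}

whenMatches("[].flat");   // true where Array.prototype.flat exists
whenMatches("/(?<=a)a/"); // true only with lookbehind support
```

Unlike a native attribute, this userland version sees whatever primordials the page has already mutated, which is exactly the problem the per-condition-realm idea above is meant to solve.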
The Angular team is watching capabilities and discussions like this closely; we'd love to continue our work to ship only the JavaScript each user needs. If this becomes a standard, we'd love to implement it. This is awesome! |
Hi, this seems like versioning the Web, and browsers strongly care about avoiding that.
Personally, it seems better to use feature-based detection for each spec, not a version number.
I don't think the browser bug issue matters hugely given that the only way around that is to:
Unless you're gonna ship the whole

With that in mind I actually think a combination of extremely granular features and instead sending a request with just those capabilities that are not supported might work. As a concrete example, I include on the

```html
<script type="module" features="asyncIteration regexpLookbehind generators asyncGenerators" src="script.mjs"></script>
```

Now when the browser sees this script it looks at the features it doesn't support and sends that as a list with the request. When the server receives this list it can perform any logic it wants to determine the best thing to send back. This would work relatively well in a world of constantly updating browsers, as the set of features between your best case and worst case is likely to be small and fluctuating rather than ever-growing. For really old browsers you may just want to use |
In what situations do people rely on google's caching?
I don't quite understand this, since AMP doesn't allow you to load arbitrary scripts. |
Well, I work for Google. I'm interested in solutions that will work for everyone, including my employer. But focusing specifically on my Google example is missing the point. The UA header has extremely high entropy. If you Vary on it, you're effectively making the response un-cacheable. That's not Google-specific.
Something has to serve |
@jridgewell gotcha. Your post makes a lot more sense now that I know you work for Google. |
The `type=module`/`nomodule` pattern gave developers a “clean break” to ship small, modern JavaScript bundles (with minimal polyfilling + transpilation) vs. legacy bundles (with lots of polyfills + transpiled code), which is great not just for module adoption but also for web performance. However, as more features are added to the JavaScript language, more polyfilling and transpilation becomes necessary even for these “modern” `type=module` script bundles.

@kristoferbaxter, @philipwalton, and I have been thinking about ways to address this problem in a future-facing way, and have explored several potential solutions. One way we could introduce a new “clean break” once a year is by adding a new attribute to `<script type="module">`, perhaps `syntax` or `srcset`:

(Note that this is just an example of what a solution could look like.) The `2018` and `2019` descriptors would then refer to feature sets that browsers recognize (in particular, they do NOT refer to ECMAScript version numbers or anything like that). For more details, read our exploration doc.

At this stage we’d like to get feedback on whether others agree this is a problem worth solving. Feedback on any particular solution (such as `<script type="module" srcset>` vs. `<script type="module" syntax>` vs. something else) is also welcome, but less important at this time.