
brainstorming #1

Open
mattdesl opened this issue May 21, 2015 · 32 comments

Comments

@mattdesl
Owner

Right now this repo is just brainstorming ideas about creating a new WebGL framework.

This might be my new pet project, or maybe something more collaborative, or maybe nothing will happen and we'll just continue getting shit done with ThreeJS. 😄

what's wrong with ThreeJS?

It's awesome! But my main gripes:

  • really bloated
  • the size/breadth makes it difficult to contribute to, prone to bugs
  • no official support for package managers
  • no real versioning; difficult/impossible to build features on top of it that survive more than a single version
  • no clearly defined scope; aims to do everything under the sun
  • custom shaders are clunky to write, lots of magic under the hood
  • targets a mostly non-WebGL audience, so a lot of low-level stuff is hidden from the user (e.g. dynamic texture atlasing would be difficult/impossible in ThreeJS)
  • gamma handling is currently broken [see here]

what about StackGL?

Also awesome, and not mutually exclusive to a new framework. Tons of modules will be useful, especially glslify. And many parts of the framework can be modularized and "given back" to stackgl ecosystem. However:

  • immediate mode does not scale well for a high-performance game engine
  • there is a lot of redundancy in state switches and calls to gl.getParameter
  • for the sake of modularity, there are some things an opinionated framework can do without, e.g. the caching in gl-shader [see here]
  • right now there is some bloat/complexity due to Buffer, ndarray-ops, etc., which may not be needed
  • in a multi pass G-buffer pipeline there ends up being a lot of coupling; hard to stay totally modular and you end up jumping through a lot of hoops instead of getting real work done
  • it can be hard to know how to glue all the modules together
  • in some ways it is more focused on research/etc than gamedev/VR
  • the "Open" nature of stackgl means you end up with a lot of different styles & programming backgrounds, and not as clear a vision/direction for the project as a whole
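On the redundant-state-switches point: an opinionated framework can bake in a thin caching layer in front of the context. A minimal sketch with a mock context; `createStateCache` and the method set are hypothetical names, not a stackgl module:

```javascript
// A sketch (not a real stackgl API) of swallowing redundant state switches.
function createStateCache (gl) {
  const enabled = new Map()
  let boundProgram = null
  return {
    enable (cap) {
      if (enabled.get(cap) !== true) { // only forward the first enable per capability
        gl.enable(cap)
        enabled.set(cap, true)
      }
    },
    useProgram (program) {
      if (boundProgram !== program) { // skip redundant program binds
        gl.useProgram(program)
        boundProgram = program
      }
    }
  }
}

// A mock context that just counts raw GL calls:
let rawCalls = 0
const mockGL = {
  enable: function () { rawCalls++ },
  useProgram: function () { rawCalls++ }
}
const state = createStateCache(mockGL)
state.enable('DEPTH_TEST')
state.enable('DEPTH_TEST') // redundant: swallowed by the cache
state.useProgram('phong')
state.useProgram('phong') // redundant: swallowed by the cache
// rawCalls is now 2, not 4
```

The same trick applies to caching gl.getParameter reads, which is the other redundancy called out above.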

so what about this imaginary new framework?

Let's call it glo for now (name pending) since light is going to be important. Nothing is clear but here are a few ideas:

  • focused on G-Buffer / deferred render pipeline / physically-based rendering
  • play nicely with WebVR
  • gamma/linear should be handled correctly through entire pipeline
  • uses glslify to keep shaders modular
  • target audience should have a decent understanding of WebGL/GLSL
    • do not try to hide or dumb-down GL things
  • optimize for perf & memory during rendering
    • e.g. don't hold onto unrolled vertices like in THREE.Geometry
    • but also don't fear concise and functional code, especially during initial development
  • look into WebGL2 features
  • explore modern practices
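One item from the list, correct gamma/linear handling through the whole pipeline, comes down to doing lighting math in linear space and converting with the standard sRGB transfer curve only at the edges. A framework-independent sketch:

```javascript
// Standard sRGB transfer curve, written out explicitly; a sketch of the
// "handle gamma/linear correctly" point, not glo/ThreeJS code.
function srgbToLinear (c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
}

function linearToSrgb (c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055
}

// Lighting math belongs in linear space; encode only once, at the display end:
const albedo = srgbToLinear(0.5)      // decode an sRGB texture/color value
const lit = Math.min(albedo * 1.5, 1) // pretend lighting computation, in linear
const pixel = linearToSrgb(lit)       // final encode for an sRGB framebuffer
```

In a real pipeline the decode/encode happens in shaders (or via sRGB texture formats), but the discipline is the same: never light in gamma space.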

few notes on implementation / structure

  • use feross/standard for style
  • unit tests with smokestack 🎉
  • follow SemVer; stay at 0.0.x for a while until things stabilize
  • provides some abstraction for "Geometry", "Mesh", "Light" like in ThreeJS
  • should not make sweeping assumptions about attribute types (i.e. should be more like THREE.BufferGeometry than THREE.Geometry)
  • keep it slim; encourage growth in npm rather than the framework, e.g.
    • keep scope limited; I don't have time to manage 1000s of issues
    • instead of a built-in OBJ parser, user needs to use parse-wavefront-obj
    • same for triangulation, camera abstractions, torus-mesh, etc
    • debug/prototype stuff that never makes it into production should not be bundled into framework (like THREE.ArrowHelper)

big questions

Well.. there's a lot of things to figure out. But let's get the obvious questions out of the way:

  • Vector types: should they be arrays or objects? [discussion]
  • How wide is the scope of this framework?
  • How high-level does the light abstraction go? Do we have hemisphere, area, directional, etc like in ThreeJS? Or do we just provide a thin interface and encourage growth in npm / userland?
  • Same with materials: include Phong/Lambert/Glass/etc? Or should they be separate modules?
  • What is the airspeed of an unladen swallow?
  • What if the user doesn't want G-Buffer, just a simple pipeline?
@mattdesl mattdesl changed the title what is this? brainstorming May 21, 2015
@hughsk

hughsk commented May 21, 2015

This all sounds awesome. Agree with your points re: stackgl too, would especially like to run through some of the modules to reduce state switches/checks where possible.

If you'd like any input/assistance let me know :)

@stevekane

I'm chiming in to say that our desires here are very aligned. I adore building webgl tech stacks for various use cases. I have done a lot of experimentation with the low-level APIs and, like the stackgl crew, have acquired a taste for mild sugar with heavy doses of composition.

I would recommend we begin the effort by laying out a canonical application spec that this "framework" will service. This gives a clearer target, and it's also more fun, since there's a bit more of a ...product at completion.

@nickdesaulniers

Oculus' recent decision to only support Windows will be a serious blow to WebVR. A lot of people at Mozilla aren't too enthused; we don't like to ship APIs that only work on one platform. But there are other devices.

stackGL is great, but I'm not sure there's enough focus on docs/evangelism. Maybe there's lower-level documentation on a per-module basis, but I feel it's not clear what the common patterns are. If I were to start a project today, it wouldn't be clear which patterns or modules I should reach for; maybe some are very common or always used, while others are more niche. More integrated examples would show people how to get started.

Meanwhile, Three.js and Babylon have a lower barrier to entry IMO, so you get more people talking about them. There's no reason why stackGL can't both host research explorations and be beginner-friendly. I think stackGL's modularity gives it the advantage here; smaller, well-written modules allow for small research explorations, and allow higher-level abstractions to be built on top.

If three.js' API could be implemented in stackGL, what would the benefit be?

I'm curious whether preprocessing objects that follow a certain pattern or convention might prove worthwhile. We can profile calls against the GL context at runtime; I wonder if there's a way to close the feedback loop? What about having the user hint at objects that don't have dynamic properties or stay on screen, and packing them into an interleaved array of values, rather than objects that each have their own buffers and thus require more draw calls?
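The hinting/packing idea above could look roughly like this; the `static` flag and the 6-float interleaved layout are assumptions for illustration, not an existing API:

```javascript
// Objects flagged `static` get packed into one interleaved position/color
// buffer, making them candidates for a single draw call.
function packStatic (objects) {
  const statics = objects.filter(function (o) { return o.static })
  const floatsPerVertex = 6 // x, y, z, r, g, b
  const vertexCount = statics.reduce(function (n, o) {
    return n + o.positions.length / 3
  }, 0)
  const out = new Float32Array(vertexCount * floatsPerVertex)
  let offset = 0
  statics.forEach(function (obj) {
    for (let i = 0; i < obj.positions.length; i += 3) {
      out.set(obj.positions.slice(i, i + 3), offset) // vertex position
      out.set(obj.color, offset + 3)                 // per-object color, repeated
      offset += floatsPerVertex
    }
  })
  return out
}

const merged = packStatic([
  { static: true, color: [1, 0, 0], positions: [0, 0, 0, 1, 0, 0, 0, 1, 0] },
  { static: false, color: [0, 1, 0], positions: [2, 2, 2] } // stays dynamic
])
// merged holds 3 vertices * 6 floats = 18 values
```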

Just a brain dump.

@marklundin

Agree with many of the points above.

One of the reasons I think three.js is so well known is View Source though.

@benaadams

custom shaders are clunky to write, lots of magic under the hood

Do you use THREE.RawShaderMaterial?

@mrpeu

mrpeu commented May 21, 2015

@marklundin

One of the reasons I think three.js is so well known is View Source though.

What do you mean? That the sources are easy to read?

@marklundin

Sorry, that was a bit vague.

If we were to use transforms to transpile certain syntactical features does that not put an extra barrier up for someone wanting to view source on a transpiled demo?

@mrpeu

mrpeu commented May 21, 2015

That's where JS in general is going anyway. Even THREE productions are minified etc. Sharing JavaScript source is transitioning from being passive to active.
I see another problem though. It means that tool adapters would be needed, or at least strongly advised, to use glo, with all the hassle (special tooling dev/updates) and problems (keeping tools adapted to the framework, etc.) that implies.

@gregtatum

Count me in. I'd like to see something where in a few lines I can plop some geometry into a scene, light it, and bam, I've got a result. I think it's important to include the base lighting and shading models, and a simple data-structure-driven geometry model. The more everything focuses on simple data with functional interfaces, the more appealing a framework like this would be for me.

I love the idea of being able to drill the abstraction down to the raw GL. I'd like to be able to load some model data, set up some lights, and add a shading model. Then when I realize I want to do something more custom in the shader, easily be able to break down the abstraction and write some custom shader code to do what I want.

All of my more involved three.js work means mostly ignoring the existing lighting abstraction whenever I'm doing my own custom stuff.

So for me the scope would ideally include basic Phong/Lambert/etc. shading models, a scene abstraction, and lighting models. I would splurge on scope and include more lighting rather than less, or at least a focused, separate core module that could work with the base framework.

@mattdesl
Owner Author

Glad there is a bit of interest in this.

@nickdesaulniers It's a pity for WebVR and will probably slow down its pace/interest, at least until a cross-platform solution comes around, or until OSX/Linux steps up their GPU game.

If three.js' API could be implemented in stackGL, what would the benefit be?

I don't think it would be possible without a very highly coupled and "frameworky" set of APIs, which is antithetical to stackgl.

@benaadams

Do you use THREE.RawShaderMaterial?

Yup, it is horribly clunky (sorry) trying to build, for example, a custom phong shader. You basically end up copy-pasting ShaderChunks without rhyme or reason, and turning on/off defines and flags (like lights and USE_MAP) until all the attributes and uniforms fall into place. See this for an example in practice.

Worst of all, the next version of ThreeJS breaks your custom shader, so you need to start over again.

Compare to this phong shader with glslify, which could be modularized further and is not lock-stepped to any framework version.

@marklundin

If we were to use transforms to transpile certain syntactical features does that not put an extra barrier up for someone wanting to view source on a transpiled demo?

I'm still not 100% sold on using Babel/etc. to author it. But I agree with @mrpeu; JS is often transpiled/bundled/compressed in modern workflows. Also, I'm not trying to create another ThreeJS, and this kind of framework (with so much emphasis on npm) isn't very useful without Browserify/Webpack/etc.

@gregtatum

Also, I'm exploring a trie-like scene graph with a memoized update model. It creates a more functional approach to updating the state of the scene. More of a personal plug for what I'm into right now, but it might be interesting to explore.

It stops you from having to use dirty-flag checking and all of the complexity that goes along with it. All you're doing is a quick === at the base of your trie structure to see if a node in your graph has changed, and then recursively re-processing it if it has. The code then reads like you're calculating everything from scratch each time, without the if statements, but only recomputes things as needed because of the memoization. So the interface would look like this:

updateShadingModel(getCurrentScene(), mesh, {
  type: Phong,
  color: [1, 0, 0]
})

Then during update:

previousScene === getCurrentScene()
>> false

So it starts walking up the graph and === checking all the nodes.

Probably the biggest potential issue with this is how much garbage this generates in a real-time application, but most of the application state changes are going to be mutating individual matrices and arrays, which wouldn't need to change the trie. It would only be for adding geometry or lights, or for changing shading models or shader setup.
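The memoized, ===-checked update described above can be sketched in plain JS; the node shape and `processNode` are illustrative, not the actual trie code:

```javascript
// Scene nodes are replaced, never mutated, so reference equality is enough to
// detect an unchanged subtree.
let recomputed = 0

function processNode (node, cache) {
  const hit = cache.get(node)
  if (hit) return hit // unchanged subtree: reuse the memoized result
  recomputed++
  const result = {
    children: (node.children || []).map(function (c) {
      return processNode(c, cache)
    })
  }
  cache.set(node, result)
  return result
}

const cache = new Map()
const meshA = { children: [] }
const meshB = { children: [] }
const scene = { children: [meshA, meshB] }
processNode(scene, cache) // first pass: all 3 nodes computed

// "Changing" meshB means building a new node and a new root:
const nextScene = { children: [meshA, { children: [] }] }
recomputed = 0
processNode(nextScene, cache)
// recomputed === 2: only the new root and the new mesh; meshA hit the cache
```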

@mattdesl
Owner Author

@mrpeu

I see another problem though. It means that tool adapters would be needed, or at least strongly advised, to use glo, with all the hassle (special tooling dev/updates) and problems (keeping tools adapted to the framework, etc.) that implies.

The user would need to install Browserify or Webpack for any real use of the framework. Not just because it is mainly intended to be consumed through npm (so users can receive versions), but also because it encourages growth in npm and doesn't come bundled with things like primitives or OBJ parsers.

This hypothetical framework holds a lot of opinions, and will probably shun some JS devs because of that. But hopefully it also contributes back to npm in the form of isolated modules, so that the next time somebody says "I need to build a new 3D engine," they will have a lot of ground to stand on, and there will be a lot more shared code between frameworks.

@marklundin

@mattdesl @mrpeu It's a fair point; tooling is an integral part of the modern workflow, and personally, the idea of swizzling and even operator overloading would be awesome as some form of transform.

There are a few upcoming proposals that might prove relevant for WebGL and should definitely be considered when designing a lib:

SIMD
Immutable Data Structures
Value Types, which might make operator overloading feasible

@benaadams

Sorry, this is a bit of an aside:
@mattdesl

Compare to this phong shader with glslify, which could be modularized further and is not lock-stepped to any framework version.

Worst of all, the next version of ThreeJS breaks your custom shader, so you need to start over again.

glslify is a build-time preprocess, is it not, if you want to use something like glslify-optimize as part of it? Why not make your shaders with glslify rather than ShaderChunks? If you can convert the output into a JavaScript string, that's all THREE.RawShaderMaterial cares about. The only real constraint in the shader code is that you need to name the matrices viewMatrix, modelViewMatrix, projectionMatrix, normalMatrix and the camera position cameraPosition if you want the auto link-up. (Assuming you are already using BufferGeometry.)

e.g. https://gist.github.com/benaadams/6804d29753ff58f6f4f8

@mattdesl
Owner Author

@benaadams I actually didn't realize there was a difference between RawShaderMaterial and ShaderMaterial, thanks for pointing that out. Does it also include light uniforms etc? Or are you on your own?

@benaadams

@mattdesl you are on your own with completely empty shaders; however, if you add any of the standard uniforms to the shader source it will wire them up, so either stay away from the standard uniforms or use them, depending on what you want. It only auto-wires the global types (matrices, fog and lights, for example), and only if you include them in the source.

Also, you do know you can change what's included in three.js by running build.js as part of your own build and altering common.json and extras.json?

@silviopaganini

Absolutely agree, mainly on the shaders / textures, which I think could be separate modules imported whenever needed.

ThreeJS is great because it's human-readable, but because of that, it's bloated.

@nickdesaulniers

One of the reasons I think three.js is so well known is View Source though.
ThreeJS is great because it's human-readable...

Both Three.js and stackGL can either be obfuscated, or not. Can someone show an example where Three.js is more readable than stackGL, or at least where stackGL becomes unreadable? This would help with the design of glo.

@stevekane

I would like to add that I would greatly appreciate a serious focus on speed over newbie-friendliness. I think elegant solutions tend to be more instructive to new-comers anyway as it teaches them the real patterns of performant system design more-so than a slick high-level interface.

ps I'm sorry so many hyphens crept into this post...

@marklundin

I think the argument was not whether to build glo using webpack/browserify, but whether to force end users/developers into a specific toolchain or not.

A transform would allow syntactic sugar like vector component access and swizzling, but still keep raw data underneath, which should help with SIMD.

Personally I think any transform should be a secondary discussion.

@mattdesl
Owner Author

Yup @marklundin it definitely should not be forced on the end-user. Made #3 for further discussion on that

@nickdesaulniers

I'm more curious whether transformation can be employed as a sort of ahead-of-time optimization: if we have lots of static meshes in separate modules, and we recognize they have the same material-like shading, can we combine them ahead of time (combining buffers via degenerate triangles, for example) so that we can draw multiple geometries with one call?

For instance, there are quite a few classical compiler optimizations done for ahead-of-time-compiled languages. I'm curious whether there are opportunities for us to do the same, but with geometry merging instead of loop-invariant code motion and friends.
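The degenerate-triangle merge mentioned here can be sketched for two triangle strips; the flat vertex layout and function name are made up for illustration, and winding-order padding is glossed over:

```javascript
// Stitch two triangle strips into one vertex buffer with degenerate
// (zero-area) triangles, so both render in a single gl.drawArrays call.
function mergeStrips (a, b) {
  const lastOfA = a.slice(a.length - 3) // repeat the last vertex of strip A
  const firstOfB = b.slice(0, 3)        // repeat the first vertex of strip B
  // The repeated vertices produce triangles with zero area, which the GPU
  // rasterizes as nothing: they just bridge the two strips.
  return a.concat(lastOfA, firstOfB, b)
}

const stripA = [0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0] // 4 vertices
const stripB = [5, 0, 0, 6, 0, 0, 5, 1, 0, 6, 1, 0] // 4 vertices
const merged = mergeStrips(stripA, stripB)
// 4 + 1 + 1 + 4 = 10 vertices, i.e. 30 floats, one draw call
```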

@marklundin

Potentially yes, and definitely interesting, but I suspect these sorts of optimisations would be better suited further up the asset pipeline.

Personally, I think the dev should have full control of the gl state. The lib should not try to cover for common mistakes.

@vorg

vorg commented May 22, 2015

I've developed http://vorg.github.io/pex/ for similar reasons as above, so I think it would add to the discussion to say what went good / bad, as it stands between THREE (a monolithic framework) and stackgl (micromodules).

Why:

  • cross environment : http://plask.org, browsers, ejecta
  • to learn and experiment
  • low level gl access
  • modularity
  • bloated three.js (not really an issue at the beginning; started around the same time, 3+ years ago)

Good:

  • couple of years of use in production
  • it works, fun to work with, managed to implement Tiled Deferred Rendering, custom PBR for several projects etc
  • ok core modules - glu, geom, sys, materials, fx/postprocessing
  • lots of goodies built in, super fast to kickstart projects
  • moving to npm 1 year ago vastly improved reusability of code, no doubt about that
  • npm for the win: triangulation, obj parsing,
  • browserify for the win
  • plask for the win (fullscreen installations, high-res prints, all the Node.js goodies: fs, database libs, big-file streaming, etc.)
  • big core modules help to find things
  • npm versioning works

Bad:

  • core modules grew too big (octree in core? maybe shouldn't be; arcball camera in glu, same; materials)
  • big modules = always delays with docs
  • big modules = "should this go into core or not?" -> constant dilemma (reason why PBR is scattered across several projects and still not published)
  • automagic (delayed uniform setting, global gl context in a singleton, automatic geometry buffer creation and update, etc.) is super convenient but introduces ugly bugs, makes it difficult to reason about performance, fragile gl state
  • automagic is the biggest obstacle for other people trying to use the lib
  • custom Vec3 and others: again convenient, but lots of GC issues and marshalling for array-driven 3rd-party modules
  • unfamiliar class names: RenderTarget (like in Unity) should be called FBO
  • neither low level nor a scene graph (e.g. has a mesh with transform and material, but no child/parent relationships)
  • not much functional programming
  • still below 1.0.0, unstable, partially because of big/medium core modules

Next:

  • still cross plask/browser
  • solid low-level wrappers with gl state exposed: Texture, VBO, FBO, Program, borrowing as many goodies from WebGL2 as possible
  • no automagic
  • shaders with glslify (already started), a way to give back to npm
  • performance-oriented architecture with immutable state objects, draw-call commands, a renderer (command executor), etc., inspired by Vulkan/Metal but not so hardcore (Cesium seems to have a nice arch)
  • vector math in arrays (just a wish now; object vectors are soooo convenient, but if I kill object-array-based geometries in favor of flat VBOs then maybe it will be easier to swallow and say goodbye to position.x)
  • convenience abstractions as modules (immediate mode with a stack, scene graph)
  • pbr / deferred renderer as a module
  • computational geometry as separate module(s): subdivisions, spatial trees, etc.

Hard things and questions to glo:

  • vec3 vs arrays
  • materials (programs + state, part of scene graph or pbr module ecosystem?)
  • conventions (eg uniform naming, combining postprocessing effects)
  • global gl state unless using one centralized renderer
  • module granularity
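The "vector math in arrays" wish above might look like this: operations addressed by offset into shared typed arrays, with no per-vector objects. The signature is illustrative and follows no particular library:

```javascript
// vec3 add that reads and writes flat typed-array storage by offset;
// no position.x, no per-call allocation.
function addVec3 (out, o, a, ai, b, bi) {
  out[o] = a[ai] + b[bi]
  out[o + 1] = a[ai + 1] + b[bi + 1]
  out[o + 2] = a[ai + 2] + b[bi + 2]
  return out
}

// Two positions packed into one buffer; GC-free math on offsets:
const positions = new Float32Array([1, 2, 3, 10, 20, 30])
const sum = new Float32Array(3)
addVec3(sum, 0, positions, 0, positions, 3)
// sum is [11, 22, 33]
```

The offset arguments make the calls noisier than object vectors, which is exactly the convenience trade-off being weighed.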

@mattdesl
Owner Author

@vorg thanks for your thoughts. Plask sounds awesome (could it theoretically support any desktop OpenGL features?). I'd love to use it for installations/prints so I'd be happy to support it as a target (gotta figure out how to set up Plask first).

I agree with solid low-level GL wrappers for shader, texture, cube, FBO, etc., and that's where I'm going to start. I also think those are the easiest to modularize, so Pex might benefit from the work I'm doing. See #4 for some early discussion on that.

I suspect my first iteration of all this will be pretty rough around the edges, and maybe not even usable for a real production. But hopefully in the process some crisp modules/shaders will come out of it that can benefit stackgl, pex, pixi, ThreeJS as well as any subsequent iterations of glo.

@mikolalysenko

I think this is good. One thing I've been meaning to do is kill off gl-shader eventually and switch over to a command-buffer-based interface (sort of like how Vulkan does things). This is kind of what I was getting at with the commutative rendering note I wrote a while back.

General things that I would like to see come out of this:

  • A coherent approach to physically based rendering in glslify. Ideally this stuff would be in the form of various shader modules where you plug in material properties and get some lighting value out. This could be used in shadertoy like experiments as well.
  • A general solution for shadows and multipass rendering. This stuff is so hard to do without making some assumptions about how your geometry is set up.
  • Maybe some solution for transparent materials. Again really hard to do in WebGL 1, though with WebGL 2 might be more tractable due to standardized multiple render targets.

There are also some things that I think would be good to avoid:

  • Collision detection, ray casting and physics: Please leave this up to libraries that call the rendering engine!
  • Hierarchical scene graph: I'm not sure these are a great idea. It might be better to have the engine simply maintain some structure full of objects that it does a pass over and renders, rather than trying to store some weird, over-engineered, octree-like thing. Relative/recursive positioning and transformation is easy enough to do outside the engine and doesn't really do much for the rendering itself.
  • Fancy geometry generators/asset importers. Again, let's use npm to handle this problem
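The "structure full of objects that it just does a pass over" idea can be sketched as a flat render list; all names here are invented for illustration:

```javascript
// No hierarchy inside the engine: just a list it walks once per frame.
function createRenderList () {
  const items = []
  return {
    add: function (item) { items.push(item) },
    render: function (drawFn) {
      // a single flat pass; any parent/child transforms were resolved by
      // the caller before the item was added
      items.forEach(function (item) { drawFn(item) })
      return items.length
    }
  }
}

const list = createRenderList()
list.add({ name: 'floor' })
list.add({ name: 'teapot' })
const drawn = []
const count = list.render(function (item) { drawn.push(item.name) })
// count === 2; drawn === ['floor', 'teapot']
```

A userland scene graph can still exist; it just flattens into this list before the engine ever sees it.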

@mattdesl
Owner Author

I haven't looked into Vulkan so I'm pretty green on how a command buffer interface would look in practice or how it would make gl-shader (or something similar) obsolete.

I'm in agreement though with all the stuff you listed. The main thing I want is a pipeline for multi-pass rendering, which includes lighting, shadows, and post-fx, and provides a clearer focus on gamedev and other artistic experiences.

I am also really hesitant on a full scene graph since there are so many ways of tackling it and it creates a lot of lock-in. They are great to prototype with, but I'd like to develop it independently of the render pipeline if such a thing is possible.

@backspaces

Any chance of being ES6-friendly? Stackgl is firmly in the browserify/require() camp, which is unnecessary workflow if you're already using import/export. Possibly jspm can converge the two worlds.

@hughsk

hughsk commented May 26, 2015

@backspaces fwiw, you can still use es6 imports with stackgl and babelify.

@mattdesl
Owner Author

It should be ES6 friendly if you are using babel and a bundler that supports npm. It will be authored in ES5 for the time being, see #3 for discussion.


@backspaces

Can someone post a Gist showing es6 (babel and/or traceur) and a module loader (preferably one by-passing browserify/npm but OK if not possible) using basic git and npm development?

I've seen Guy Bedford's posts saying that a module loader, possibly jspm, can import from git/npm etc., so I suspect it may be possible for stackgl to have a workflow without browserify, vastly simplifying development.

Is there something in using stackgl that requires more than simply importing it? I.e. does glslify require additional workflow fu?

I'm worried the project is painting itself into a corner. As wonderful as small modules are, and boy am I a believer, requiring complex workflow is a non-starter.

@mattdesl
Owner Author

Sadly, ES6 does not simplify development. The main reason is modularity.

One of the goals of this project is to produce new modules that are independent of glo. This way; whether or not the framework "succeeds," at least it will have contributed a lot of new features to npm that can be used in other projects (like ThreeJS, Pex, and unrelated fields). Since starting this project, dozens of modules have already been spawned and split off from its codebase:

For these tiny modules, transpiling adds a lot of overhead when testing, publishing and consuming the module. If the source of glo is written in ES5, it is easier to just split the code out and publish it immediately.

Also, bear in mind that users are expected to interact with npm and modules to build an application with glo. I am not planning on bringing ray intersection or OBJ model parsing into this framework, since those features can easily live independently on npm.

Also, most shaders are encouraged to be written with glslify (which needs a build step), to take advantage of shared GLSL components. Example

I don't have a gist, but most of my recent projects are in ES6 even though most of the modules I'm importing are ES5. The build step requires two lines and leads to a very fast development workflow. More info here.
