Roadmap: 2023-24 #1830
Regarding opening up for custom (native) renderers on top of leptos_reactive: I noticed that's what e.g. https://github.com/lapce/floem/tree/main/reactive is doing, and it would be cool to see the core of leptos flexible enough for a project like that to build on top of leptos with a custom renderer, rather than fork it and replace all the DOM-specific code, which appears to be the current situation.
Yep, I've seen it. They'd previously been using Leptos for reactivity but decided to go their own way for 0.5 because they were using Scope in a pretty important way, IIRC. But yeah, if you think of the framework stack as being split into 1) change detection/reactive system, 2) rendering library, 3) the actual renderer (DOM or native UI toolkit), then Leptos and Floem were sharing 1 with distinct 2 and 3. My goal here is actually to make the three completely modular, so that you can build Leptos web (Leptos reactivity/Tachys/DOM) or Leptos native (Leptos reactivity/Tachys/some native toolkit) or your own framework (X/Tachys/Y) with the shared "rendering library" layer quite loosely coupled.
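That three-layer split can be sketched with traits. Everything below is hypothetical — the names `Renderer`, `Render`, and `MockDom` are illustrative, not from Leptos, Tachys, or Floem — but it shows the key point: the rendering library (layer 2) only ever talks to an abstract backend, so the DOM, a native toolkit, or a test double can be swapped in.

```rust
// Hypothetical sketch: none of these names come from the real crates.

// Layer 3: the actual renderer (DOM, native toolkit, or a test double).
trait Renderer {
    type Node;
    fn create_text(&mut self, content: &str) -> Self::Node;
    fn set_text(&mut self, node: &mut Self::Node, content: &str);
}

// Layer 2: the rendering library, generic over any backend.
trait Render<R: Renderer> {
    type State;
    fn build(self, renderer: &mut R) -> Self::State;
}

// A test-double backend: "nodes" are just strings held in memory.
struct MockDom;
impl Renderer for MockDom {
    type Node = String;
    fn create_text(&mut self, content: &str) -> String {
        content.to_string()
    }
    fn set_text(&mut self, node: &mut String, content: &str) {
        *node = content.to_string();
    }
}

// Layer 1 (reactivity) would drive build/update calls; here the simplest
// possible "view" is a &str that renders as a text node.
impl<R: Renderer> Render<R> for &str {
    type State = R::Node;
    fn build(self, renderer: &mut R) -> R::Node {
        renderer.create_text(self)
    }
}

fn render_to_mock(view: &str) -> String {
    let mut dom = MockDom;
    view.build(&mut dom)
}
```

Because nothing above layer 3 mentions a concrete node type, the same `Render` implementations work against a browser DOM or a test backend.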
Roadmap looks amazing (as is all the work up to this point!) and I'm especially happy you're finding time to work in the spaces that give you energy.
Just to provide an update on the roadmap toward 0.6: this work has basically reached what I'd describe as the 80/20 point: I have reimplemented about 80% of Leptos 0.5, and the remaining 20% of the work will take about 80% of the time 😄

In many ways, the outcome looks basically identical to what you'd expect from a Leptos app right now. For example, this afternoon I just finished implementing the new form of the […]. There's nothing super surprising:

```rust
// dropping create_ prefixes in several places in line with Rust idiom
// compare `signal(0)` to `channel(0)` for this pattern
let (count, set_count) = signal(0);

// all looks the same
view! {
    <button
        on:click=move |_| spawn_local(async move {
            let new_count = server_fn_1(count.get()).await.expect("server fn failed");
            set_count.set(new_count);
        })
    >
        "JSON " {move || count.get()}
    </button>
}
```

I've seen pretty significant wins in terms of binary size, memory use, and performance, without having to break much in terms of APIs. For example, with comparable build settings the […].

The performance is also "smoothed out" quite a bit in the case of less-than-ideal user code. For example, in the current version

```rust
move || view! { <p>{count.get()}</p> }
```

will create a new […]

```rust
view! { <p>{move || count.get()}</p> }
```

And in fact this multiplies fairly well. So, for example, if you have multiple signals changing at once, then the "coarser-grained" approach actually performs slightly better than the finer-grained one:

```rust
// this used to be really bad, but now it's quite good!
move || view! { <p>{count1.get()} {count2.get()}</p> }
```

There are also some nice treats and wins, like typed attributes, meaning that you can't accidentally type […].

What's Left

Most of this work has been happening in other repos (tachys and server_fns) to avoid having to deal with massive merge conflicts and so on. On one level, the technical steps from here are fairly straightforward.
Other pieces require a bit more work, and some are just polish.

Open Questions in API Design

There are also a few open questions. I've been trying to lean more into some native Rust idioms—not for the sake of being more idiomatic, but because there are nice performance, simplicity, and maintainability benefits. For example, I've been reworking both async and error handling a bit. For async, we'll probably still provide Suspense/Transition/Await components, but you can also use:

```rust
let value = Resource::new(/* something */);
view! {
    <p>{
        async {
            value.await.unwrap_or_default()
        }
        // Suspense trait implemented on Futures provides .suspend()
        .suspend()
    }</p>
}
```

Likewise, I'm playing with a similar pattern for error handling:

```rust
move || {
    view! {
        <pre>
            "f32: " {value.get().parse::<f32>()} "\n"
            "u32: " {value.get().parse::<u32>()}
        </pre>
    }
    .catch(|err| {
        view! {
            <pre style="border: 1px solid red; color: red">
                "error"
                //{err.to_string()}
            </pre>
        }
    })
}
```

Again, this can have a nice component wrapper but doesn't require it. These are built on the idea that […]. I should probably write something up in a bit more detail about some of those decisions as we get closer. In any case, it's definitely possible to implement the same old […].

So: Yeah, 0.5 is in pretty good shape right now, and I think things are relatively stable on that level. I'm trying to take into account as many of the less-tractable 0.5 issues as possible in designing for the future, and the new release is making good progress. I wouldn't want to put a timeline on it, but I'm very excited for the future ahead.
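The "errors are just values you can handle with an extension trait" idea behind `.catch()` can be sketched in plain Rust. This is only an illustration of the pattern, with `String` standing in for a view type; the `Catch` trait and its signature are hypothetical, not the real Leptos API.

```rust
// Illustrative sketch of the extension-trait pattern behind `.catch`:
// the trait and names here are hypothetical, not the real Leptos API.
trait Catch<T> {
    /// Replace the error case with a fallback produced from the error.
    fn catch(self, fallback: impl FnOnce(String) -> T) -> T;
}

impl<T, E: std::fmt::Display> Catch<T> for Result<T, E> {
    fn catch(self, fallback: impl FnOnce(String) -> T) -> T {
        match self {
            Ok(view) => view,
            Err(err) => fallback(err.to_string()),
        }
    }
}

fn render_number(input: &str) -> String {
    // A "view" is just a String here; parse errors flow through `catch`.
    input
        .parse::<u32>()
        .map(|n| format!("<pre>u32: {n}</pre>"))
        .catch(|err| format!("<pre style=\"color: red\">{err}</pre>"))
}
```

The appeal of the pattern is that no special component is needed: `Result` is an ordinary value, and the fallback is an ordinary closure.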
I'll jump in here and talk a little bit about the server fn rewrite, which I am very excited about.

Current Server Fns

Currently, server fns are based around Serde's Serialize and Deserialize traits, and feature a fixed number of encoding types (Url, Json, Cbor) built into the macro. Inputs and outputs have been required to implement Serialize/Deserialize. This has a few limitations, which I'll talk about below.

New Server Fns

These will be based around the request and response types of each framework (http::Request and http::Response for most things, HttpRequest/HttpResponse for Actix), and consist of four different traits that are implemented for an encoding and for a data type: FromReq/IntoReq and FromRes/IntoRes. Here are the benefits of this approach: […]

Looks pretty similar to the old one, right?

```rust
#[server(endpoint = "/my_server_fn", input = GetUrl)]
pub async fn my_server_fn(value: i32) -> Result<i32, ServerFnError> {
    println!("on server");
    Ok(value * 2)
}
```
For example, here's the shape of the `GetUrl` input encoding and the `Json` output encoding:

```rust
/// Pass arguments as a URL-encoded query string of a `GET` request.
pub struct GetUrl;

/// Pass arguments as the URL-encoded body of a `POST` request.
pub struct PostUrl;

impl Encoding for GetUrl {
    const CONTENT_TYPE: &'static str = "application/x-www-form-urlencoded";
}

impl<T, Request> IntoReq<Request, GetUrl> for T
where
    Request: ClientReq,
    T: Serialize + Send,
{
    fn into_req(self, path: &str) -> Result<Request, ServerFnError> {
        let data =
            serde_qs::to_string(&self).map_err(|e| ServerFnError::Serialization(e.to_string()))?;
        Request::try_new_post(path, GetUrl::CONTENT_TYPE, data)
    }
}

impl<T, Request> FromReq<Request, GetUrl> for T
where
    Request: Req + Send + 'static,
    T: DeserializeOwned,
{
    async fn from_req(req: Request) -> Result<Self, ServerFnError> {
        let string_data = req.as_query().unwrap_or_default();
        let args = serde_qs::from_str::<Self>(string_data)
            .map_err(|e| ServerFnError::Args(e.to_string()))?;
        Ok(args)
    }
}

impl<T, Response> IntoRes<Response, Json> for T
where
    Response: Res,
    T: Serialize + Send,
{
    async fn into_res(self) -> Result<Response, ServerFnError> {
        let data = serde_json::to_string(&self)
            .map_err(|e| ServerFnError::Serialization(e.to_string()))?;
        Response::try_from_string(Json::CONTENT_TYPE, data)
    }
}

impl<T, Response> FromRes<Response, Json> for T
where
    Response: ClientRes + Send,
    T: DeserializeOwned + Send,
{
    async fn from_res(res: Response) -> Result<Self, ServerFnError> {
        let data = res.try_into_string().await?;
        serde_json::from_str(&data).map_err(|e| ServerFnError::Deserialization(e.to_string()))
    }
}
```
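To see how the four traits compose end to end, here's a toy, dependency-free round trip in the same shape. All the types here (`Req`, `Res`, `Args`, and the free functions) are simplified stand-ins for illustration; the real traits are async, generic over frameworks and encodings, and use serde.

```rust
// Toy stand-ins for the request/response plumbing, illustration only.
struct Req { content_type: &'static str, query: String }
struct Res { content_type: &'static str, body: String }

trait Encoding { const CONTENT_TYPE: &'static str; }
struct GetUrl;
impl Encoding for GetUrl {
    const CONTENT_TYPE: &'static str = "application/x-www-form-urlencoded";
}

#[derive(Debug, PartialEq)]
struct Args { value: i32 }

// Client side: serialize the arguments into a request (IntoReq's job).
fn into_req(args: &Args) -> Req {
    Req { content_type: GetUrl::CONTENT_TYPE, query: format!("value={}", args.value) }
}

// Server side: deserialize the arguments back out (FromReq's job).
fn from_req(req: &Req) -> Option<Args> {
    let value = req.query.strip_prefix("value=")?.parse().ok()?;
    Some(Args { value })
}

// Server side: serialize the return value (IntoRes's job).
fn into_res(output: i32) -> Res {
    Res { content_type: "application/json", body: output.to_string() }
}

// Client side: read the return value back (FromRes's job).
fn from_res(res: &Res) -> Option<i32> {
    res.body.parse().ok()
}

// The whole round trip the generated code performs for a server fn
// that computes `value * 2` on the server:
fn round_trip(value: i32) -> Option<i32> {
    let req = into_req(&Args { value });
    let args = from_req(&req)?;
    let res = into_res(args.value * 2);
    from_res(&res)
}
```

Splitting the pipeline into these four seams is what lets an encoding, a framework, or an argument type be swapped independently.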
Middleware can be attached to an individual server function with an attribute:

```rust
#[server(endpoint = "/my_server_fn", input = GetUrl)]
// Add a timeout middleware to the server function that will return an error
// if the function takes longer than 1 second to execute
#[middleware(tower_http::timeout::TimeoutLayer::new(std::time::Duration::from_secs(1)))]
pub async fn timeout() -> Result<(), ServerFnError> {
    tokio::time::sleep(std::time::Duration::from_secs(2)).await;
    Ok(())
}
```
And server fns can return custom error types:

```rust
// assumes a thiserror-style derive for the #[error] attribute
#[derive(thiserror::Error, Debug)]
pub enum MyAppError {
    #[error("An error occurred")]
    Errored,
}

impl From<MyAppError> for ServerFnError<MyAppError> {
    fn from(err: MyAppError) -> ServerFnError<MyAppError> {
        server_fn_error!(err)
    }
}

fn add(val1: i32, val2: i32) -> Result<i32, MyAppError> {
    Ok(val1 + val2)
}

#[server(endpoint = "/my_server_fn", input = GetUrl)]
pub async fn my_server_fn(val1: i32, val2: i32) -> Result<i32, ServerFnError<MyAppError>> {
    Ok(add(val1, val2)?)
}
```
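The shape of that error flow can be sketched with a toy generic wrapper. The enum below is modeled loosely on a `ServerFnError<E>` that can carry a custom error type; the variant names and impls are illustrative. The blanket `From` impl is the piece that lets `?` lift the app's own error into the wrapper automatically.

```rust
// Illustrative sketch of a generic server-fn error wrapping a custom type.
#[derive(Debug, PartialEq)]
enum ServerFnError<E> {
    /// An internal framework error (serialization, network, ...).
    ServerError(String),
    /// The application's own error type, carried through unchanged.
    WrappedServerError(E),
}

#[derive(Debug, PartialEq)]
enum MyAppError {
    Errored,
}

// This is what lets `?` convert MyAppError automatically. (It doesn't
// conflict with core's reflexive `From<T> for T` because
// `ServerFnError<E> = E` has no solution.)
impl<E> From<E> for ServerFnError<E> {
    fn from(err: E) -> ServerFnError<E> {
        ServerFnError::WrappedServerError(err)
    }
}

fn add(val1: i32, val2: i32) -> Result<i32, MyAppError> {
    val1.checked_add(val2).ok_or(MyAppError::Errored)
}

fn my_server_fn(val1: i32, val2: i32) -> Result<i32, ServerFnError<MyAppError>> {
    // `?` converts MyAppError -> ServerFnError<MyAppError> via `From`.
    Ok(add(val1, val2)?)
}
```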
@gbj Since you're tackling the split-up and refactoring of the new rendering system (which sounds amazing btw), I was wondering if you had thought about how/if functionality like HMR/HSR (Hot Module/State Reloading) would be an integrated part of the reactivity/rendering system, or if it's something that can be built on top? From reading how Perseus achieved HSR for their framework (based on Sycamore), it sounded like having it deeply embedded into the reactivity machinery (so it can seamlessly serialize/deserialize things, if that's the way one goes) could be beneficial. They described their approach here: https://framesurge.sh/perseus/en-US/docs/0.4.x/state/freezing-thawing/. I'm mostly interested to hear if you had any thoughts around where you think such functionality would best live in the leptos stack/ecosystem :) Thought it would be relevant to bring up in case it influenced any design choices while we are looking to replace things anyway.
@Tehnix This might be worth discussing at more length in a separate issue as well. Reading through the Perseus docs on this, my takeaways are the following:
The big benefit of HMR/state preservation in a JS world comes from the 0ms compile times of JS, which means you can update a page and immediately load the new data. Not so with Rust, so this is mostly "when my app reloads 10 seconds later after recompiling, it restores the same page state." I have tended toward a more primitive-oriented approach (i.e., building page state up through composing signals and components) rather than the page-level state approach of Perseus, which I think is similar to the NextJS pages directory approach. So this wouldn't work quite as well: i.e., we could save the state of which signals were created in which order, but we don't have an equivalent struct with named fields to serialize/deserialize, so it would likely glitch much more often (e.g., switching the order of two […]). It would certainly be possible to implement at a fairly low level. I'm not sure whether there are real benefits. If you want to discuss further, I'd say open a new issue and we can link to these comments as a starting point.
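The ordering glitch can be shown with a toy model (purely illustrative; this is not how Leptos stores signal state): if saved state is keyed only by signal-creation order, then swapping two creation sites between compiles silently crosses their restored values.

```rust
// Toy model of positional state restoration: saved values are keyed only
// by the order in which "signals" were created (illustrative only).
struct Saved(Vec<i32>);

/// v1 of a page: creates `count` first, then `scroll_pos`.
fn page_v1() -> (i32, i32) {
    let count = 3;
    let scroll_pos = 250;
    (count, scroll_pos)
}

/// Hot-reload restore for v2, where the two creation sites were swapped.
/// Restoration still hands out values in creation order, so `scroll_pos`
/// receives `count`'s old value and vice versa.
fn page_v2_restored(saved: &Saved) -> (i32, i32) {
    let mut in_creation_order = saved.0.iter().copied();
    let scroll_pos = in_creation_order.next().unwrap_or(0); // 1st created in v2
    let count = in_creation_order.next().unwrap_or(0);      // 2nd created in v2
    (count, scroll_pos)
}

fn demo() -> ((i32, i32), (i32, i32)) {
    let (count, scroll_pos) = page_v1();
    let saved = Saved(vec![count, scroll_pos]); // positional snapshot
    ((count, scroll_pos), page_v2_restored(&saved))
}
```

A named-field struct (the Perseus-style page state) wouldn't have this failure mode, which is the trade-off described above.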
Good idea, I've opened #2379 for this :)
Not sure if this would be the right place for this, but I can't open an issue in the leptos 0.7 preview playground. I saw that the […]. Usually, […]

To achieve this, just as an idea: […]

... and it would be the exact same every time, which would make scaling a lot better and more efficient.
@sebadob Thanks, I'll enable issues on that repo too as it does make sense as a place for this sort of discussion. Basically: The reason we use multiple inline scripts is to enable the streaming of resource values and Suspense fragments as part of the initial HTML response. I don't think it is possible to hash these at compile time, because by definition they are only known at runtime, if for example the resource is loading information from the DB. (With some additional tooling we could support hashing for the actual hydration script, but not these additional scripts that are part of the streaming response.) |
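A small illustration of that point, using `DefaultHasher` as a stand-in for the SHA-256 a real CSP hash would use: because a streamed inline script embeds runtime data (e.g., a resource value loaded from the DB), its text — and therefore its hash — differs from response to response, so it can't be pinned at compile time.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative only: a real CSP hash would be SHA-256 of the exact script
// text, but the argument is the same for any hash function.
fn script_hash(script: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    script.hash(&mut hasher);
    hasher.finish()
}

/// The inline script streamed for a resolved resource embeds runtime data,
/// so its text can't be known when the binary is compiled.
fn streamed_script(resource_id: u32, db_value: &str) -> String {
    format!("__resolve({resource_id}, {db_value:?});")
}

fn hashes_differ() -> bool {
    // Same page, two requests, different DB contents -> different hashes.
    let a = script_hash(&streamed_script(0, "row A"));
    let b = script_hash(&streamed_script(0, "row B"));
    a != b
}
```

The static hydration script is different: its text is fixed at build time, which is why hashing could work for it but not for the streamed fragments.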
@gbj Thanks! Yes, that makes sense. I'll think a bit more about how this could be solved more efficiently.
(This supersedes the roadmaps in #1147 and #501.)
Big Picture
Depending on how you count, Leptos is somewhere near its 1-year anniversary! I began serious work on it in July 2022, and the 0.0.1 release was October 7. It's incredible to see what's happened since then, with 183+ contributors and 60+ releases. Of course there are still some bugs and rough edges, but after one year, this community-driven, volunteer-built framework has achieved feature parity with the biggest names in the frontend ecosystem. Thank you all for everything you've done.
Releasing 0.5.0 was a huge effort for me, and the result of a ton of work by a lot of contributors. The process of rethinking and building it began in early April with #802, so it's taken ~6 months—half the public lifetime of Leptos—to land this successfully. And with the exception of some perennials ("make the reactive system `Send`"), the work laid out in #1147 is essentially done.

So: What's next?
Smaller Things
Polishing 0.5
This section probably goes without saying: we had a lot of beta testing for this release, but I also continued making changes throughout that process (lol), so there will be some bugs and rough edges to smooth out in 0.5.1, 0.5.2, etc.

- `Copy` impl and module-level docs for `Callback` (Missing docs and `Copy` impl for `Callback` #1818)
- `--hot-reload` support in `cargo-leptos`
There have also already been PRs adding nice new features!
- `fallback` optional for `Show`/`Suspense`/`Transition`! (made show fallback optional #1817)
- `Portal` component (Portal #1820)

And some cool ideas unlocked by ongoing development:

- […] (`create_resource`, but without explicit dependencies; I made a working demo in a few minutes)

Building the Ecosystem (Leptoberfest!)
Of course it will take a little time for our ecosystem libraries to settle down and adapt to 0.5. Many of them have already done this work—kudos! All of these libraries are maintained by volunteers, so please be patient and gracious to them.
The larger the ecosystem of libraries and apps grows, the harder it becomes to make semver-breaking changes, so expect the pace of change to be more sedate this year than it was last year.
The core is pretty stable at this point but there's lots that we can do to make Leptos better by contributing to these ecosystem libraries. To that end, I'm announcing Leptoberfest, a light-hearted community hackathon during the month of October (broadly defined) focusing on supporting our community ecosystem.
Good places to start:
If you make a contribution to close an issue (bug or feature request!) this fall, please fill out this form to let us know. I'll be publishing a list of all our Leptoberfest contributors at the end of the month (so, some time in early November/when I get to it!) @benwis and I also have a supply of Leptos stickers we can send out as rewards. If you have ideas for other fun rewards, let me know. (We don't have much cash, but we love you!)
On a personal note: As a maintainer, I have found that the feature requests never end. I welcome them, but they can become a distraction from deeper work! I will be slowing down a little in terms of how much time I can dedicate to adding incremental new features to the library. If you see an issue marked `feature request`, and it's a feature you'd like to see, please consider making a PR!

Bigger Things
Apart from bugs/support/etc., most of my Leptos work this year is going to be exploring two areas for the future. I am a little burnt out after the last year, and everything it's entailed. I have also found that the best antidote to burn-out, and the resentment that comes with it, is to do the deep, exploratory work that I really find exciting.
Continuing Islands Exploration
The `experimental-islands` feature included in 0.5 reflects work at the cutting edge of what frontend web frameworks are exploring right now. As it stands, our islands approach is very similar to Astro (before its recent View Transitions support): it allows you to build a traditional server-rendered, multi-page app and pretty seamlessly integrate islands of interactivity.

Incremental Feature Improvements
There are some small improvements that will be easy to add. For example, we can do something very much like Astro's View Transitions approach:

- […] (e.g., `persist:searchbar` on the component in the view), which can be copied over from the old to the new document without losing their current state

All of these can be done as opt-in incremental improvements to the current paradigm with no breaking changes.
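The persistent-island idea can be sketched with a toy document model (illustrative only; real code would move live DOM nodes rather than strings): on navigation, any island whose persist key also appears in the new page is carried over with its state intact.

```rust
use std::collections::HashMap;

// Toy model: an "island" is a keyed chunk of the page with runtime state.
#[derive(Clone, Debug, PartialEq)]
struct Island {
    key: String,   // e.g. the `persist:searchbar` marker
    state: String, // whatever the island has accumulated at runtime
}

/// Swap in the next page's islands, but reuse any old island whose persist
/// key also appears in the new page, keeping its current state.
fn navigate(old: &[Island], new_page_keys: &[&str]) -> Vec<Island> {
    let old_by_key: HashMap<&str, &Island> =
        old.iter().map(|i| (i.key.as_str(), i)).collect();
    new_page_keys
        .iter()
        .map(|key| match old_by_key.get(key) {
            Some(existing) => (*existing).clone(), // copied over, state kept
            None => Island { key: key.to_string(), state: String::new() },
        })
        .collect()
}
```

Everything not marked persistent is simply replaced by the new document, which is what keeps the scheme opt-in.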
Things I'm Not Sold On
There are other, larger/architectural improvements that can improve performance significantly, and remove the need for manually marking persistent islands. Specifically, the fact that we use a nested router means that, when navigating, we can actually go to the server and ask only for a certain chunk of the page, and swap that HTML out without needing to fetch or replace the rest of the page.
However, once you start thinking it through (which I have!) you start to realize this raises some real issues. For example, we currently support passing context through the route tree, including through an `Outlet`. But if a nested route can be refetched independently of the whole page, the parent route won't run. This is great from a performance perspective, but it means you can no longer pass context through the router on the server. And it turns out that basically the entire router is built on context...

This is essentially why React Server Components need to come with a whole cache layer: if you can't provide data via context on the server, you end up making the same API requests at multiple levels of the app, which means you really want to provide a request-local cache, etc., etc.
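A toy model of the context problem (hypothetical names, not the real router): the parent route provides context while it renders, so a partial refetch that skips the parent leaves the nested route without the data the parent would have shared.

```rust
use std::collections::HashMap;

type Context = HashMap<&'static str, String>;

// Toy router model: each route segment can provide context for its
// children while rendering (illustrative only).
fn parent_route(ctx: &mut Context) -> String {
    // The parent loads the user once and shares it via context.
    ctx.insert("user", "alice".to_string());
    format!("<nav>{}</nav>", ctx["user"])
}

fn child_route(ctx: &Context) -> String {
    match ctx.get("user") {
        Some(user) => format!("<main>{user}'s dashboard</main>"),
        None => "<main>missing context!</main>".to_string(),
    }
}

/// Full-page render: the parent runs first, so the child sees its context.
fn full_render() -> String {
    let mut ctx = Context::new();
    let nav = parent_route(&mut ctx);
    format!("{nav}{}", child_route(&ctx))
}

/// Partial refetch of only the nested route: the parent never runs,
/// so the context it would have provided simply isn't there.
fn partial_refetch() -> String {
    let ctx = Context::new();
    child_route(&ctx)
}
```

This is the gap a request-local cache papers over: without context, each level has to fetch the user for itself.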
Essentially we have the opportunity to marginally improve performance at the expense of some pretty big breaking changes to the whole mental model. I'm just not sure it's worth pushing that far in this direction. Needless to say I'll continue watching Solid's progress pretty closely.
I don't anticipate making any of these breaking changes without a lot more input, discussion, thought, and research.
The Most Exciting Thing: Rendering Exploration
Leptos has taken two distinct approaches to rendering.
0.0
The 0.0 renderer was built on pure HTML templating. The `view` macro then worked like the `template` macro does now, or like SolidJS, Vue Vapor (forthcoming), or Svelte 5 (forthcoming) work, compiling your view to three things:

- a `<template>` element that is created once, then cloned whenever you need to create the view
- a walk (via `.firstChild` and `.nextSibling`) over that cloned tree
- `create_effect` calls that set up the reactive system to update those nodes directly

In `ssr` mode, the `view` macro literally compiled your view to a `String`.

Pros

- […]

Cons

- […] `<div>` […]
- […] `view` macro, couldn't use a builder syntax or ordinary Rust code
- […] `Element` (one HTML element) or `Vec<Element>` (several of them!) without the nice flexibility of `-> impl IntoView`
macro.Apart from these issues, which probably could've been fixed incrementally, I've come to think there was a fundamental limitation with this approach. Not only did it mean writing much of the framework's rendering logic in a proc macro, which is really hard to debug; doing it in a proc macro meant that we needed to do all of that without access to type information, just by transforming a stream of tokens to other tokens.
For example, we had no way of knowing the type of a block included from outside the view:
The `view` macro has no idea whether `a` is a string, or an element, or a component invocation, or `()`. This caused many of the early bugs and meant that we needed to add additional comment markers and runtime mechanisms to prevent issues.

0.1-0.5

Our current renderer is largely the outstanding work of @jquesada2016 and me collaborating to address those issues. It replaced the old `view` macro with a more dynamic renderer. There's now a `View` type that's an enum of different possible views, type erasure by default with `-> impl IntoView`, much better support for fragments, etc. Most of our growth and success over the last six months was unlocked by this rewrite. The `view` macro expands to a runtime builder syntax for elements.

Pros

- […] `view` […]

Cons

- `#[cfg]` hell in `leptos_dom`, trying to support SSR, hydration, and CSR alongside one another in a transparent way, while maintaining performance
- `leptos_dom` is easier for us to debug and work with than the 0.0 `view` macro, but still pretty hard

This approach basically works! If I never changed it, things would be fine. It is kind of hard to maintain, there are some weird edge cases that I've sunk hours into to no avail, it's kind of chunky, the HTML is a little gross; but it's all fine.
So let's rewrite it.
The Future of Rendering
I have been extremely inspired by the work being done on Xilem, and by Raph Levien's talks on it. I'm also totally uninspired.
The Xilem architecture, in my reading, tries to address two pieces of the UI question. How do we build up a view tree to render stuff (natively or in the DOM)? And how do we drive changes to that view tree? It proposes a statically-typed view tree with changes driven by React-like components, in which event callbacks take a mutable reference to the state of that component.
The idea of a statically-typed view tree similar to SwiftUI is just so perfectly suited to Rust and the way its trait composition works that it blew my mind. Xilem is built on a trait with `build` and `rebuild` functions, and an associated `State` type that retains state. Here's an example: […]

If you don't get it, it's okay. You just haven't spent as much time mucking around in the renderer as I have. Building this all up through trait composition, and through the composition of smaller parts, is freaking amazing.
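Since the example itself didn't survive here, a minimal sketch of that shape, with hypothetical names rather than Xilem's actual API: a `View` trait with `build`/`rebuild` and an associated `State`, where composition (a tuple of views) falls out of trait composition.

```rust
// Minimal sketch of a statically typed view trait with build/rebuild and
// an associated State (names are illustrative, not Xilem's actual API).
trait View {
    type State;
    fn build(&self) -> Self::State;
    fn rebuild(&self, state: &mut Self::State);
}

/// A text view: its retained state is the rendered string.
struct Text(String);

impl View for Text {
    type State = String;
    fn build(&self) -> String {
        self.0.clone()
    }
    fn rebuild(&self, state: &mut String) {
        if *state != self.0 {
            *state = self.0.clone(); // only touch the "node" if it changed
        }
    }
}

/// A tuple of views is a view whose state is the tuple of child states:
/// this is the trait-composition trick that makes the whole tree static.
impl<A: View, B: View> View for (A, B) {
    type State = (A::State, B::State);
    fn build(&self) -> Self::State {
        (self.0.build(), self.1.build())
    }
    fn rebuild(&self, state: &mut Self::State) {
        self.0.rebuild(&mut state.0);
        self.1.rebuild(&mut state.1);
    }
}
```

Because the whole tree is a concrete type, `rebuild` knows statically which leaf it is patching — the property the proc-macro approach could never see.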
But the use of component-grained reactivity is just fundamentally uninteresting to me.
I played around a little with a Xilem-like renderer and finished with mixed results: I was able to create a really tiny framework with an Elm state architecture and very small WASM binary sizes. But when I tried to build something based on Leptos reactivity and the same model, I ended up with something very similar in binary size and performance to Leptos. Not worth a rewrite.
Then it hit me. The fundamental problem with 0.0 was that, when we were walking the DOM, we were generating that DOM walk in a proc macro: we didn't know anything about the types of the view. But if we use trait composition, we do. Using traits means that we can write the whole renderer in plain Rust, with almost nothing in the `view` macro except a very simple expansion. And we can drive the hydration walk by doing it at a time when we actually understand what types we're dealing with, only advancing the cursor when we hit actual text or an actual element!

Again, if it's not clear why this is amazing, no worries. Let me think about that. But it is really, really good.
It's also possible to make the view rendering library generic over the rendering layer, unlocking universal rendering or custom renderers (#1743) and allowing for the creation of a `TestingDom` implementation that allows you to run native `cargo test` tests of components without a headless browser, a future `leptos-native`, or whatever. And the rendering library can actually be totally detached from the reactivity layer: the renderer doesn't care what calls the `rebuild()` function, so the Leptos-specific functionality is very much separable from the rest.

Pros

- […] `Rc` everywhere.
- `leptos_dom` […]
- […] the `view` macro's logic/the server optimizations
- a `move ||` block goes from being "this is bad, recreates the whole DOM" to "not ideal but kind of fine, efficiently diffs with fewer allocations than a VDOM"

Cons

- […]
My goal is for this to be a drop-in replacement for most use cases; i.e., if you're just using the `view` macro to build components, this new renderer should not require you to rewrite your code. If you're doing fancier stuff with custom `IntoView` implementations and so on, there will of course be some changes.

As you can tell, I'm very excited about work in this direction. It has been giving me a sense of energy and excitement about the project and its future. None of it is even close to feature parity with current Leptos, let alone ready for production. But I have built enough of it that I'm convinced that it's the next step.
I really believe in building open source in the open, so I'm making all this exploration public at https://github.com/gbj/tachys. Feel free to follow along and check things out.