It'd also be useful for sharing a daemon between container and host OS.
But it may be a bit more tricky than I had thought. Some ideas:
- Remote dependencies can be sent or downloaded, but either way they shouldn't change much.
- Whichever mechanism is used to send source files to the daemon: does it also work for non-source material? Ideally only compile-time inputs would be sent, like files included by macros. Sending everything would be a big slowdown.
- Should intermediary artifacts be sent back, or just the final executable/library?
The problem with non-code resources is actually not limited to the daemon: the compiler also needs to know about such files, so it can rebuild when they change.
EDIT: The general problem, especially with non-source files, is that one either 1) sends everything in a batch up front, or 2) sends only what is needed, which means a lot of back-and-forth communication, because the needed set only becomes known gradually.
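The trade-off between the two strategies can be sketched roughly as follows. This is a toy model, not real daemon code; the function names, the fake project, and the cost measures (round trips, files sent) are all illustrative assumptions:

```python
# Hypothetical sketch contrasting the two file-transfer strategies.
# None of these names correspond to a real API.

def discover_deps(entry, deps):
    """Walk the dependency graph from `entry`; returns the files actually needed."""
    seen, order, stack = set(), [], [entry]
    while stack:
        f = stack.pop()
        if f in seen:
            continue
        seen.add(f)
        order.append(f)
        stack.extend(deps.get(f, []))
    return order

def compile_batch(project, deps, entry):
    """Strategy 1: ship every file up front -- one round trip, big payload."""
    payload = dict(project)            # everything, even unused files
    return 1, len(payload)             # (round trips, files sent)

def compile_on_demand(project, deps, entry):
    """Strategy 2: the daemon asks for each file as it discovers it --
    minimal payload, but one round trip per newly discovered file."""
    needed = discover_deps(entry, deps)
    return len(needed), len(needed)

project = {"main": "...", "util": "...", "unused": "...", "asset.png": "..."}
deps = {"main": ["util"]}              # only main -> util is actually needed

print(compile_batch(project, deps, "main"))      # (1, 4)
print(compile_on_demand(project, deps, "main"))  # (2, 2)
```

On a slow link the batch strategy wastes bandwidth on unused files; on a high-latency link the on-demand strategy pays a round trip per file, which is exactly the tension described above.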
It should not be very hard to make the daemon run on a server and have the CLI connect to it remotely.
This may be useful in a compiler-farm setup, or in a CI pipeline where a build-container keeps a continuous cache.
Some things that'd need to be addressed:
- Should the `run` subcommand run locally or remotely? (Probably locally.)

For now let's not include having multiple daemons behind a load balancer, though it might be possible one day.
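The CLI-to-remote-daemon part really is the simple piece: it is just a request/response over a socket. A minimal sketch, assuming a made-up single-JSON-message wire format (the real protocol, message fields, and `cli_build` helper are all hypothetical):

```python
import json
import socket
import threading

def daemon(server_sock):
    """Accept one request and answer with a (fake) build result."""
    conn, _ = server_sock.accept()
    with conn:
        req = json.loads(conn.recv(4096).decode())
        result = {"ok": True, "target": req["target"], "cached": True}
        conn.sendall(json.dumps(result).encode())

def cli_build(host, port, target):
    """What a build subcommand would do: connect, send a request, await the result."""
    with socket.create_connection((host, port)) as s:
        s.sendall(json.dumps({"cmd": "build", "target": target}).encode())
        return json.loads(s.recv(4096).decode())

# Run daemon and CLI in one process for demonstration purposes only.
server = socket.socket()
server.bind(("127.0.0.1", 0))          # ephemeral port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=daemon, args=(server,))
t.start()
print(cli_build("127.0.0.1", port, "app"))
t.join()
server.close()
```

Whether the daemon is in a sibling container, on a build server, or in a CI cache container only changes the address the CLI connects to; the hard part remains getting the right files to it, as discussed above.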