Feature/streamline build process #111
base: main
Conversation
* Docker image prints version on startup
* Build command now creates a self-contained archive
* Builds nightly (version with git hash) or prod
* Fixes prisma generate being run too late
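A minimal sketch of what the nightly-vs-prod versioning and archive step described above could look like (the directory layout, file names, and `nightly-` prefix here are illustrative assumptions, not the PR's actual scripts):

```shell
#!/bin/sh
# Illustrative sketch only: package a build output directory into a
# self-contained archive, versioned with the git hash for nightly builds.
# A prod build would use a release tag instead. Paths are hypothetical.
set -eu

BUILD_DIR=build-out
mkdir -p "$BUILD_DIR"
echo "app payload" > "$BUILD_DIR/app.js"   # stand-in for real build output

# Nightly: version from the current git hash; fall back when not in a repo.
GIT_HASH=$(git rev-parse --short HEAD 2>/dev/null || echo "nogit")
VERSION="nightly-$GIT_HASH"

mkdir -p dist
tar -czf "dist/app-$VERSION.tar.gz" -C "$BUILD_DIR" .
echo "created dist/app-$VERSION.tar.gz"
```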
Imo this is a very different use case than the current docker flow: the current docker flow ends up with a fully fledged image for use in running the application, while this seems to build a .tar.gz (which cannot then be orchestrated with a compose file or similar either)? This is probably useful for multiple purposes, but at least I (and I would guess many others) self-host this project by running a docker container rather than using a packaged .tar.gz, and would not want that workflow broken.
The build-image target has the same behaviour as before: it produces a docker image in your local repo and, in addition, an archived image. The build target produces a standalone archive. This gives people the option to either run it containerized or directly on their machine. For context, this submission is the first step for CI/CD. Next I'd like to set up a scheduled pipeline that will create a nightly build and release those artifacts. Then I'd like to add a pipeline that runs on tag creation to create a production release. (See this issue)
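A dry-run sketch of how a build-image-style target can yield both a local docker image and an archived copy of it (the commands are only printed, not executed, so this runs even without docker; the image name `spliit` and tag are assumptions):

```shell
#!/bin/sh
# Dry-run sketch: print the docker commands such a target might run.
# Swap printf for actual execution in an environment that has docker.
CMDS='docker build -t spliit:nightly .
docker save spliit:nightly | gzip > dist/spliit-nightly-image.tar.gz'
printf '%s\n' "$CMDS"
```

`docker save` is the standard way to turn a locally built image into a portable archive that can later be restored with `docker load`.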
Ah, sorry, my mistake. I tried building and running your new image (via the compose file) and these are my findings:
Overall I think the old flow should be kept (installs happen in the Dockerfile etc.) and could be kept as-is.
Are you sure you are running it in a container and not locally? But in any case, that's a good catch. When prisma generate is run, it expects the same ssl flavor to be present at run time. The env variables from the .env.example are needed only at run time.
The node_modules folder contains only js files, so it should not be OS dependent (in theory). I'll test that out on a windows machine though.
.env is only needed at run time (in the start container script). In any case, the official releases will be built in a controlled env in the pipeline, but people would still have the option to build locally. I assume some people would like to be able to build without docker. Likewise, some might want to run it on older PCs or a Raspberry Pi and may not want to pay the extra processing cost that docker has. Or some people might not even know how docker works!
We currently support node 21 and that's what official builds will be made with. But other people might want to build it with other versions (at their own risk, of course).
To get a CI/CD pipeline going it would be much easier if it's possible to do a local build. I can ensure the resulting image will be the same, as the env the pipeline runs in will be controlled. I started testing and it would be possible to still do the build in a local docker container and extract a portable app with docker cp, but that would result in a slower build (can't reuse files) and I have no idea how that would turn out in the pipeline (with docker in docker, I'd guess). I'll try to fix the issue with prisma complaining about the ssl version, but my recommendation would be to allow local builds, for flexibility.
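For reference, the docker cp extraction mentioned above could look roughly like this (dry-run sketch, commands printed rather than executed so it works without docker; the image, stage, and container names are hypothetical):

```shell
#!/bin/sh
# Dry-run sketch: extract a portable build from a docker image without
# running it, via a temporary container and `docker cp`.
CMDS='docker build -t spliit-build --target build .
docker create --name spliit-extract spliit-build
docker cp spliit-extract:/app/dist ./dist
docker rm spliit-extract'
printf '%s\n' "$CMDS"
```

`docker create` makes a stopped container, which is enough for `docker cp` to copy files out of its filesystem.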
Yep, I'm sure!
This is not true, the
You should still be able to build a docker container without having node installed locally at all, imo. Just like I do everything Java related with only docker, and don't have it installed on my machine at all. I'm not saying you shouldn't be able to build stuff locally either; you should probably build the releases without involving any sort of docker code. I'm just saying that the docker flow should not rely on having to build anything locally: the docker flow should handle everything on its own, and if you can't perform the build on a machine that doesn't have node installed then imo it's not working properly. Not reusing the files that created the release will result in a slower build, but that shouldn't matter, as it is the CI/CD that performs it and it is not time critical.
I agree with this. It seems the current model relies on users using the service provided at spliit.app rather than hosting it in their own home environments. You could attract a much larger audience if you resolve the docker flow. I think this is stopping a lot of people from spinning up an instance of this and running it locally.
`npm run build` doesn't produce a .next/ folder anymore. It is now located under the tmp/ folder and is cleaned up at the end of the build. The docker image and archive are now created under the dist/ folder. If the old build behaviour is still needed, I could create an additional target instead of overwriting it.
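The layout change described above could be sketched roughly like this (directory names from the comment; the payload file and archive name are stand-ins):

```shell
#!/bin/sh
# Sketch of the described layout: intermediate output lives under tmp/
# during the build, final artifacts land in dist/, and tmp/ is removed
# at the end. The archive name is a hypothetical example.
set -eu

mkdir -p tmp dist
echo "next build output" > tmp/build-artifact   # stand-in for .next/ contents

tar -czf dist/spliit.tar.gz -C tmp .            # archive the build output
rm -rf tmp                                      # tmp/ is cleaned up afterwards

ls dist
```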