
Feature/streamline build process #111

Draft · wants to merge 3 commits into base: main

Conversation

Max-TheCat (Contributor)

  • Adds automatic nightly and production builds
    • For nightly builds, short commit hash is appended to version number
    • Version number is updated in distributed package.json
    • Adds env variable for app name and version
    • Version number is displayed on container startup
  • Build now packages a standalone archive (tar.gz)
  • !!!!! This reverts a previous submission that ran all the build steps inside the container, because doing the build locally makes it easier to export the new package. !!!!!
  • !!!!! This changes the behaviour of npm run build: it no longer produces a .next/ folder. The build output is now located under the tmp/ folder and is cleaned up at the end of the build. The Docker image and archive are now created under the dist/ folder. If the old build behaviour is still needed, I could create an additional target instead of overwriting it. !!!!!
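The nightly versioning described above could be sketched roughly like this. This is a minimal sketch only: the variable names, the hard-coded values, and the sed-based stamping are assumptions for illustration, not taken from the PR's actual scripts.

```shell
# Hypothetical sketch of nightly version stamping -- not the PR's actual script.

BASE_VERSION="1.2.3"   # would normally be read from package.json
COMMIT_HASH="abc1234"  # would normally come from: git rev-parse --short HEAD
NIGHTLY_VERSION="${BASE_VERSION}-${COMMIT_HASH}"

# Create a stand-in package.json, then stamp the computed version into it,
# mimicking "version number is updated in distributed package.json".
printf '{\n  "name": "spliit",\n  "version": "0.0.0"\n}\n' > package.json
sed -i "s/\"version\": \"[^\"]*\"/\"version\": \"${NIGHTLY_VERSION}\"/" package.json

# Mimics "version number is displayed on container startup".
echo "Starting app version ${NIGHTLY_VERSION}"
```

In a real pipeline the hash would come from git and the base version from the repository's package.json rather than being hard-coded.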

Maxime Jacob added 3 commits February 16, 2024 07:57
* Docker image prints version on startup
* Build command now creates a self-contained archive
* Builds nightly (version with git hash) or prod
* Fixes prisma generate being run too late
@justcallmelarry (Contributor)

Imo this is a very different use case than the current docker flow, since the current docker flow ends up with a fully fledged image for use in running the application, and this seems to build a .tar.gz (which cannot then be orchestrated with a compose file or similar either)?

This is probably useful for multiple purposes, but at least I (and I would guess many others) self-host this project by running a docker container rather than using a packaged .tar.gz, and would not care to have that workflow broken.

@Max-TheCat (Contributor, Author)

> Imo this is a very different use case than the current docker flow, since the current docker flow ends up with a fully fledged image for use in running the application, and this seems to build a .tar.gz (which cannot then be orchestrated with a compose file or similar either)?
>
> This is probably useful for multiple purposes, but at least I (and I would guess many others) self-host this project by running a docker container rather than using a packaged .tar.gz, and would not care to have that workflow broken.

The build-image target has the same behaviour as before: it produces a docker image in your local repo and, in addition, an archived image. The build target produces a standalone archive. It will give people the option to either run it containerized or directly on their machine.

For context, this submission is the first step for CI/CD. Next I'd like to set up a scheduled pipeline that will create a nightly build and release those artifacts. Then I'd like to add a pipeline that would run on tag creation to create a production release. (See this issue)
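The two pipelines described above could look roughly like the following GitHub Actions workflow. This is a hypothetical sketch: the PR does not include a workflow file, and the job layout, node version pinning, and `npm run build` producing a `dist/` archive are assumptions based on the surrounding discussion.

```yaml
# Hypothetical workflow sketch -- not part of this PR.
name: build

on:
  schedule:
    - cron: "0 3 * * *"   # nightly build
  push:
    tags:
      - "v*"              # production release on tag creation

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 21
      - run: npm ci
      - run: npm run build   # assumed to produce the standalone archive under dist/
      - uses: actions/upload-artifact@v4
        with:
          name: spliit-build
          path: dist/
```

Running the build on a pinned `ubuntu-latest` runner with a pinned node version is what makes the "controlled env" claim later in the thread workable.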

@justcallmelarry (Contributor)

justcallmelarry commented Feb 27, 2024

> > Imo this is a very different use case than the current docker flow, since the current docker flow ends up with a fully fledged image for use in running the application, and this seems to build a .tar.gz (which cannot then be orchestrated with a compose file or similar either)?
> >
> > This is probably useful for multiple purposes, but at least I (and I would guess many others) self-host this project by running a docker container rather than using a packaged .tar.gz, and would not care to have that workflow broken.
>
> The build-image target has the same behaviour as before: it produces a docker image in your local repo and, in addition, an archived image. The build target produces a standalone archive. It will give people the option to either run it containerized or directly on their machine.
>
> For context, this submission is the first step for CI/CD. Next I'd like to set up a scheduled pipeline that will create a nightly build and release those artifacts. Then I'd like to add a pipeline that would run on tag creation to create a production release. (See this issue)

Ah, sorry, I mistook the package.sh script for the container-entrypoint.sh script. So that's my bad for the confusion!

I tried building and running your new image (via the compose file) and these are my findings:

  • image stays the same size (around 400-450 MB), which is nice 👍 I do not particularly know why next.js in general uses multi-stage builds, so I cannot comment on whether they should be kept or not.
  • I can't get any groups to load (maybe due to missing env vars during build? it complains about openssl but I can see that it's installed in the image), see stacktrace below
  • Since the install happens locally now, you end up with node_modules (and maybe other) files when building a docker image. This can (probably) cause inconsistencies in the container, if you have any packages that behave differently depending on OS. So I think the npm install etc. should still happen in docker during build, rather than on the local machine.
  • Since the installs etc. happen locally, this also makes it dependent on any local files, like .env files you might have in the directory, which then yields different results on different machines.
  • I think it's not possible to build the Dockerfile now without having node+npm installed on the local machine? This also means that mismatched versions of node could potentially cause issues.

Overall I think the old flow should be kept (installs happen in the Dockerfile etc.) as Dockerfile.prod or similar, and then you can just add a Dockerfile.nightly with the appropriate changes for the nightly builds if needed.

stacktrace
spliit-app-1  | Invalid `prisma.group.findMany()` invocation:
spliit-app-1  | 
spliit-app-1  | 
spliit-app-1  | Prisma Client could not locate the Query Engine for runtime "linux-musl-arm64-openssl-3.0.x".
spliit-app-1  | 
spliit-app-1  | This happened because Prisma Client was generated for "darwin-arm64", but the actual deployment required "linux-musl-arm64-openssl-3.0.x".
spliit-app-1  | Add "linux-musl-arm64-openssl-3.0.x" to `binaryTargets` in the "schema.prisma" file and run `prisma generate` after saving it:
spliit-app-1  | 
spliit-app-1  | generator client {
spliit-app-1  |   provider      = "prisma-client-js"
spliit-app-1  |   binaryTargets = ["native", "linux-musl-openssl-3.0.x", "linux-musl-arm64-openssl-3.0.x"]
spliit-app-1  | }
spliit-app-1  | 
spliit-app-1  | The following locations have been searched:
spliit-app-1  |   /usr/app/node_modules/.prisma/client
spliit-app-1  |   /usr/app/node_modules/@prisma/client
spliit-app-1  |   /Users/lauri/code/nuc-norris/spliit/maxthecat/tmp/spliit2/node_modules/@prisma/client
spliit-app-1  |   /tmp/prisma-engines
spliit-app-1  |   /usr/app/prisma
spliit-app-1  |     at ai.handleRequestError (/usr/app/node_modules/@prisma/client/runtime/library.js:126:7075)
spliit-app-1  |     at ai.handleAndLogRequestError (/usr/app/node_modules/@prisma/client/runtime/library.js:126:6109)
spliit-app-1  |     at ai.request (/usr/app/node_modules/@prisma/client/runtime/library.js:126:5817)
spliit-app-1  |     at async l (/usr/app/node_modules/@prisma/client/runtime/library.js:131:9709)
spliit-app-1  |     at async a (/usr/app/.next/server/app/groups/page.js:1:16258)
spliit-app-1  |     at async /usr/app/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:16:406
spliit-app-1  |     at async rg (/usr/app/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:15:6309)
spliit-app-1  |     at async rz (/usr/app/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:16:24709)
spliit-app-1  |     at async doRender (/usr/app/node_modules/next/dist/server/base-server.js:1394:30)
spliit-app-1  |     at async cacheEntry.responseCache.get.routeKind (/usr/app/node_modules/next/dist/server/base-server.js:1555:28) {
spliit-app-1  |   clientVersion: '5.9.1',
spliit-app-1  |   errorCode: undefined
spliit-app-1  | }

@Max-TheCat Max-TheCat marked this pull request as draft February 29, 2024 02:27
@Max-TheCat (Contributor, Author)

> Ah, sorry, I mistook the package.sh script for the container-entrypoint.sh script. So that's my bad for the confusion!
>
> I tried building and running your new image (via the compose file) and these are my findings:
>
> * image stays the same size (around 400-450 MB), which is nice 👍 I do not particularly know why next.js in general uses multi-stage builds, so I cannot comment on whether they should be kept or not.
>
> * I can't get any groups to load (maybe due to missing env vars during build? it complains about openssl but I can see that it's installed in the image), see stacktrace below

Are you sure you are running it in a container and not locally? But in any case, that's a good catch. When prisma generate is run, the generated client then expects the same SSL flavor to be present at run time. The env variables from the .env.example are needed only at run time.

> * Since the install happens locally now, you end up with `node_modules` (and maybe other) files when building a docker image. This can (probably) cause inconsistencies in the container, if you have any packages that behave differently depending on OS. So I think the npm install etc. should still happen in docker during build, rather than on the local machine.

The node_modules folder contains only js files, so it should not be OS-dependent (in theory). I'll test that out on a windows machine though.

> * Since the installs etc. happen locally, this also makes it dependent on any local files, like `.env` files you might have in the directory, which then yields different results on different machines.

.env is only needed at run time (in the start container script). In any case, the official releases will be built in a controlled env in the pipeline, but people would still have the option to build locally. I assume some people would like to be able to build without docker. Likewise, some might want to run it on older PCs or a Raspberry Pi and may not want to pay the extra processing cost that docker has. Or some people might not even know how docker works!

> * I think it's not possible to build the `Dockerfile` now without having node+npm installed on the local machine? This also means that mismatched versions of node could potentially cause issues.

We currently support node 21 and that's what official builds will be made with. But other people might want to build it with other versions (at their own risk, of course).

> Overall I think the old flow should be kept (installs happen in the Dockerfile etc.) as Dockerfile.prod or similar, and then you can just add a Dockerfile.nightly with the appropriate changes for the nightly builds if needed.

To get a CI/CD pipeline going, it would be much easier if a local build is possible. I can ensure the resulting image will be the same, as the env the pipeline runs in will be controlled. I started testing, and it would still be possible to do the build in a local docker container and extract a portable app with docker cp, but that would result in a slower build (files can't be reused) and I have no idea how that would turn out in the pipeline (with docker-in-docker, I'd guess).

I'll try to fix the issue with prisma complaining about the SSL version, but my recommendation would be to allow local builds, for flexibility.


@justcallmelarry (Contributor)

> Are you sure you are running it in a container and not locally? But in any case, that's a good catch. When prisma generate is run, it then expects the same SSL flavor to be present at run time.

Yep, I'm sure!

> The env variables from the .env.example are needed only at run time.

This is not true: the NEXT_PUBLIC_* variables need to be present at build time, or they will not be respected at run time.
I know for a fact that this is the case, and I believe it's because next scrambles the variable names to make them harder to read in the browser-facing code. This is the reason @jantuomi and I added the build.env variable file in the first place.
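The build-time requirement can be illustrated with a Dockerfile fragment. This is a sketch under stated assumptions: the variable name `NEXT_PUBLIC_BASE_URL`, the base image, and the file layout are hypothetical, not taken from the project's actual Dockerfile.

```dockerfile
# Sketch: NEXT_PUBLIC_* values must be available when `next build` runs,
# because Next.js inlines them into the client bundle at build time.
# Variable name and layout are hypothetical.
FROM node:21-alpine AS builder
WORKDIR /usr/app
COPY . .

# Pass the value at build time, e.g.:
#   docker build --build-arg NEXT_PUBLIC_BASE_URL=https://example.com .
ARG NEXT_PUBLIC_BASE_URL
ENV NEXT_PUBLIC_BASE_URL=${NEXT_PUBLIC_BASE_URL}

RUN npm ci && npm run build
```

Setting such a variable only at container startup would be too late: the client bundle has already been generated with whatever value (or absence of value) was present during `next build`.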

> We currently support node 21 and that's what official builds will be made with. But other people might want to build it with other versions (at their own risk, of course).

You should still be able to build a docker container without having node installed locally at all, imo. Just like I do everything Java-related with only docker, and don't have it installed on my machine at all.

I'm not saying you should not be able to build stuff locally either; you should probably build the releases without involving any sort of docker code.

I'm just saying that the Docker flow should not be reliant on having to build anything locally; the docker flow should handle everything on its own, and if you can't perform the build on a machine that doesn't have node installed, then imo it's not working properly.

Not reusing the files that created the release will result in a slower build, but that should not matter, as it is the CI/CD that performs it, and it is not time critical.

@LIRIKKER

LIRIKKER commented Apr 2, 2024

> I'm just saying that the Docker flow should not be reliant on having to build anything locally, the docker flow should handle everything on its own, and if you can't perform the build on a machine that doesn't have node installed then imo it's not working properly.

I agree with this. It seems the current model relies on users using the service provided at spliit.app rather than hosting it in their own home environments. You could attract a much larger audience if you resolve the docker flow; I think this is stopping a lot of people from spinning up an instance of this and running it locally.
