pipeline.yaml: Add one more lab and board #356
Conversation
I think @broonie also volunteered to have his lab enabled with early access.
About this particular PR, @pawiecz can confirm, but the Collabora LAVA staging lab shouldn't be used for production use cases. So while it's probably OK to have it right now, it should be removed or somehow treated differently once the new API reaches actual production status.
Yes, if this works we can try to enable @broonie as well. At the moment I am enabling it because the Azure caching fix will be deployed to production only in December, so I need something else to test on.
Force-pushed from 8291fed to 2cb95ea
As a good example for future improvements and to detect bugs in corner cases, we can also add one more lab and device. Unfortunately this also requires adding one more real hardware device, as the production lab doesn't properly support Azure Files (yet), and it might also be a good case for improvement (detecting infrastructure failures). Signed-off-by: Denys Fedoryshchenko <[email protected]>
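For illustration, a minimal sketch of what enabling one more lab and board in pipeline.yaml might look like. The lab name, URL, device type, and exact field names below are assumptions for the example, not the actual entries from this PR:

```yaml
# Hypothetical sketch only: the lab name, URL, board name, and exact
# schema are assumptions for illustration, not the entries from this PR.
runtimes:
  lava-example:                   # the additional LAVA lab (assumed name)
    lab_type: lava
    url: https://lava.example.org/
    notify:
      callback:
        token: kernelci-api-token-early-access

jobs:
  baseline-example-board:         # a job targeting the additional board
    template: baseline.jinja2
    kind: test
    params:
      device_type: example-board  # the extra real hardware device (assumed)
```

The real schema should be copied from the neighbouring entries in pipeline.yaml; the point is that an extra lab and device is a small, self-contained configuration change, which makes it a convenient way to exercise corner cases and surface infrastructure failures.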
Force-pushed from 2cb95ea to f9b9e26
What Azure caching fix? Is this something on the Collabora LAVA lab side?
It is more an incompatibility of Azure Files with caching systems. Azure Files has certain implementation bugs:
Right, but is the fix going to be deployed in the LAVA lab or on the Azure side?
LAVA lab; it's more a workaround than a fix.
OK, thanks. Do you know if other labs may be impacted as well, if this is a widely used caching mechanism?
Yes, it is a typical caching mechanism, and they might need to deploy a similar workaround, or we might need to change the storage solution.
Right, so that's something that would ideally be addressed before making the first production deployment. I believe one possibility mentioned in the past was to use MinIO on top of Azure storage. Is there a GitHub issue about this? There is definitely one about deciding which storage solution to use for production and which ones to support in general, but I don't remember seeing one about this particular caching problem.
It has been mentioned since October: kernelci/kernelci-api#381
Right, so there isn't a GitHub issue about this particular problem. It would probably be worth creating one and resolving it before the first production deployment happens (in early January, I guess).
381 is exactly about that, storage options. I should probably add the backend I tested recently and developed in Go, as it is backward compatible with the old one (meaning it is already implemented, and I tested that it works). I will do that once I sort out current tasks.
Right, there just isn't an issue specifically about the fact that we can't use Azure Files with caching in LAVA. Never mind me...
Tested on staging and verified in Dozzle.