Accept other user configurations such as AMD GPU / iGPU / CPU rendering #62
Comments
The reason only NVIDIA works here is that different GPU vendors need immensely different setups. I imagine something similar to https://github.com/nestriness/nestri/pull/84/files would work. Contributions or spinoffs are always welcome, although the code must be clean and conform to the repository style to be merged.
For CPUs, as well as Intel and AMD GPUs, docker-nvidia-egl-desktop should currently work, since it uses a virtual X11 server in the first place anyway. But that compatibility comes with a performance overhead, and the link above implements a real X11 server on AMD and Intel GPUs.
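For reference, a minimal sketch of what running the EGL-based sibling image on a non-NVIDIA machine might look like; the image name, port, and PASSWD variable are assumptions here, so check the docker-nvidia-egl-desktop README for the real ones.

```bash
# Hedged sketch: the image name, port, and variables are placeholders.
docker run -d --name egl-desktop \
  --device /dev/dri:/dev/dri \
  -e PASSWD=changeme \
  -p 8080:8080 \
  ghcr.io/selkies-project/nvidia-egl-desktop:latest
# Omitting --device falls back to CPU (llvmpipe) software rendering inside
# the virtual X11 server, at a further performance cost.
```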
As to how I managed to do it: it's definitely not expected behaviour and may be patched out by the X11 devs someday, but outside Wayland it was the only way to get things working 😅 @Anghille I'm currently too busy with Nestri work to contribute big changes here, but if you want to take a look, I can answer any questions to the best of my ability 🙂
I will look into that then! If I have any ideas or a PR to make, I will gladly do so! Thanks for the answer.
@Anghille Yeah, thanks. The solution is already there thanks to @DatCaptainHorse; it's just a matter of integrating it seamlessly.
VirtualGL/virtualgl#229 (comment) @DatCaptainHorse BTW, does Vulkan work correctly with your approach on Intel or AMD in an unprivileged container? The above implies that it might be finicky on non-NVIDIA GPUs.
@ehfd Yep, Vulkan works with my approach; if DXVK didn't work, it would've been far less useful 🙂 You just need to pass the GPU in with docker/podman.
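To make the "pass in the GPU" step concrete, here is a hedged sketch of handing an AMD or Intel render node to an unprivileged container with podman and checking Vulkan from inside; the image name is a placeholder and the exact flags may differ from the Nestri PR.

```bash
# Hedged sketch: "my-desktop-image" is a placeholder, not this project's image.
podman run --rm -it \
  --device /dev/dri:/dev/dri \
  --group-add keep-groups \
  my-desktop-image:latest \
  vulkaninfo --summary
# A working Mesa Vulkan stack should list the Intel or AMD device here.
# With docker, add the container user to the host's "render"/"video" groups
# instead of relying on podman's keep-groups.
```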
Can I do a PR with a Kubernetes setup (ConfigMap, Secrets, Deployment, and volumes) for other users' reference in the doc files? Or is it redundant with other repos such as your kubernetes-operator?
By the way, it might be a nice addition to have an ENV GPU={AMD,NVIDIA}, which means the user can seamlessly integrate the image into a cluster that has both kinds of graphics cards set up (which is my case, for example). Something that integrates the work from https://github.com/nestriness/nestri/pull/84/files but also keeps the NVIDIA setup as well. Not sure if this is something you want, or if you prefer to keep things separated?
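As a rough illustration of that idea (not code from this repository), an entrypoint could branch on such a variable; the name GPU_TYPE, its values, and the defaults below are hypothetical.

```bash
#!/bin/bash
# Hypothetical entrypoint fragment; GPU_TYPE and its values are illustrative.
case "${GPU_TYPE:-NVIDIA}" in
  NVIDIA)
    # Rely on the NVIDIA container runtime exposing /dev/nvidia* devices.
    ;;
  AMD|INTEL)
    # Point VirtualGL's EGL back end at the DRM render node passed into the container.
    export VGL_DISPLAY="${DRM_DEVICE:-/dev/dri/renderD128}"
    ;;
  CPU)
    # Software rendering with Mesa's llvmpipe; no GPU devices required.
    export LIBGL_ALWAYS_SOFTWARE=1
    ;;
  *)
    echo "Unknown GPU_TYPE: ${GPU_TYPE}" >&2
    exit 1
    ;;
esac
```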
@Anghille My changes there support this kind of ENV variable: i.e. if you have 2 NVIDIA GPUs, you'd select either one through it, and for a mixed scenario (AMD + NVIDIA), you'd choose the AMD one the same way. But I don't see why it wouldn't be possible to use the GPU name as a parameter either; the GPU helper script is easily modifiable enough. Install the script's dependency (lshw) and run it to see which GPUs it would pick up; there is a sketch of this below.
Just look at the script source and you can see how it all works; hack away 🙂
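As a hedged example of the lshw step mentioned above (Debian/Ubuntu package names assumed), listing display adapters shows the vendor and product strings a selection script could match on.

```bash
# Install the helper script's dependency and list display adapters.
sudo apt-get update && sudo apt-get install -y lshw
# The "vendor" and "product" fields are what a GPU-selection script could
# match against (e.g. NVIDIA vs AMD/ATI vs Intel).
sudo lshw -C display
```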
@Anghille Check xgl.yml for the Kubernetes deployment. We only ship Deployment configurations because the others vary wildly.
I might be off-topic, since this project states it is for NVIDIA and HPC-focused jobs.
I was just wondering whether it is possible (not asking you guys to do it, just to tell me if it is) to modify this image to accept multiple configurations such as AMD GPUs (server and consumer), or best of all, to use only CPU encoding/decoding for more portability/compatibility.
I have some use cases running this on Kubernetes clusters with various configurations (home and cloud clusters). I am diving into the image and scripts, trying to understand WHY NVIDIA is required and how I might change it to something else, or even better, make the image able to run dynamically on AMD, NVIDIA, or just a CPU of any kind.
If you have any advice, that would be awesome 🙏