This demonstrates the use of an lxd-profile.yaml to tweak LXD container settings. The example deploys privileged containers.
There are some example yaml files below that you can adapt for your own needs.
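The lxd-profile.yaml file is placed in the root directory of the charm. A minimal layout sketch (the surrounding file names are illustrative; only the lxd-profile.yaml location matters here):

```
mycharm/
├── metadata.yaml
├── lxd-profile.yaml   # LXD settings applied to this charm's containers
└── src/
    └── charm.py
```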
Read more in the Juju official docs
- You can only have one lxd-profile per charm.
- The lxd profile is upgraded during the upgrade of the charm (juju upgrade-charm).
- Profiles are displayed at the machine level by using either the `juju show-machine` command or the `juju status --format=yaml` command (see the example after this list).
- There is a hierarchy of profiles. LXD has its own default profile, then the model default profile (`juju-<model>`), and then a charm profile. Interestingly, you can create a model profile `juju-<model>` before you add the model, and juju won't try to override it.
- You can't use "--constraints" to control the behaviour of lxd profiles.
- Profiles are applied in the order they are specified so the last profile to specify a specific key wins.
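For example, to see which lxd profiles ended up on a machine (the machine number is illustrative):

```bash
# Show details, including lxd profiles, for machine 0
juju show-machine 0

# Or list everything in yaml format and look for the profile entries
juju status --format=yaml
```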
Juju uses the "juju-default" profile for container defaults:

```bash
lxc profile show juju-default
```
You can edit a profile, for example changing the default zfs storage pool:

```bash
lxc profile device set juju-default root pool=lxd-zfs
```
You can change the lxd profile for a model, let's say mymodel:

```bash
cat lxd-profile.yaml | lxc profile edit juju-mymodel
```
Note that these profiles are not well tested. Help needed!
A privileged container profile

```yaml
config:
  security.nesting: true
  security.privileged: true
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=sys:rw
```
A GPU passthrough profile

```yaml
config:
  security.nesting: true
  security.privileged: true
devices:
  mygpu:
    type: gpu
```
A Kubernetes worker profile
```yaml
config:
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: true
  security.privileged: true
description: "Kubernetes worker profile"
devices:
  aadisable:
    path: /dev/kmsg
    source: /dev/kmsg
    type: unix-char
```
A GUI LXD profile

```yaml
config:
  environment.DISPLAY: :0
  raw.idmap: both 1000 1000
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
      - 'echo export PULSE_SERVER=unix:/tmp/.pulse-native | tee --append /home/ubuntu/.profile'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket:
    path: /tmp/.pulse-native
    source: /run/user/1000/pulse/native
    type: disk
  X0:
    path: /tmp/.X11-unix/X0
    source: /tmp/.X11-unix/X0
    type: disk
  mygpu:
    type: gpu
```
A GPGPU profile

```yaml
config:
  boot.autostart: "true"
  security.nesting: "true"
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  user.user-data: |
    #cloud-config
    package_upgrade: true
    packages:
      - mesa-utils
      - build-essential
      - nvidia-cuda-toolkit-gcc
      - vim
description: GPGPU profile
devices:
  mygpu:
    type: gpu
  eth0:
    name: eth0
    network: lxdfan0
    type: nic
  root:
    path: /
    pool: local
    type: disk
name: gpgpu
used_by: []
```
One way to use the GPGPU example is to launch an lxc instance with the profile applied, for example like this (the image and instance name below are illustrative):
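```bash
# Launch a container with the default profile plus the gpgpu profile
# (ubuntu:20.04 and the name gpgpu-test are examples; adjust for your setup)
lxc launch ubuntu:20.04 gpgpu-test --profile default --profile gpgpu
```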
One can then add this running lxc instance to a juju model's machines, like this:

```bash
juju add-machine ssh:[email protected]
```

From then on, Bob's your uncle.
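For instance, you could then place a unit of this charm on that machine (the machine number is illustrative; `<built_charm>` refers to the charm built in the steps below):

```bash
# Deploy the built charm onto the manually added machine
# (machine number 1 is an example; check `juju status` for the real number)
juju deploy ./<built_charm> --to 1
```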
Build and deploy this example charm:

```bash
charmcraft build
juju add-model examples
juju model-config default-series=focal
juju model-config logging-config="<root>=WARNING;unit=DEBUG"
juju deploy ./<built_charm>
juju debug-log
```
Erik Lönroth, support me by attributing my work https://eriklonroth.com