Systemd Slice Configuration for Desktop Usage
Slices?⌗
Slices are control groups that manage resource sharing hierarchically. If you are using systemd, you already have at least two slices: `system.slice` for system services and `user.slice` for user sessions. If you are using KVM, you will also have a slice dedicated to VMs, called `machine.slice`.
A slice encompasses all the processes under it in a single control group. Control groups are used by the kernel to manage resource allocation through a hierarchy.
The three default systemd slices are all siblings, parented by the root control group. This means that, without custom configuration, they compete equally for the system's resources.
Cgroups: Linux’s hierarchical resource management⌗
Sibling control groups compete for resources equally. In other words, if all of the control groups request 100% of a resource, each gets an equal share by default, regardless of how many processes it contains.
That is because control groups are hierarchical. Processes within a group are accounted together as the group’s resource usage, and then each group is compared to its siblings.
This does not prevent a control group from using the full system’s resources. If there are enough resources to handle all the load of the system, then the control group configuration is basically a no-op. However, when the system is overwhelmed, the kernel uses the hierarchy of the control groups to decide where to allocate the resources.
If we want to prioritize one control group over another, we need to configure them accordingly. The two main options at our disposal are `CPUWeight` and `IOWeight`. These values are used by the kernel scheduler to divide resources: if one control group has a higher weight than another, the kernel allocates resources to them in proportion to their weights. If all the weights are equal, the kernel divides the resources equally.
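To make the proportionality concrete, here is a small sketch that computes each sibling's share under full contention, using hypothetical weights (not your system's actual values):

```shell
# Hypothetical sibling weights
system=100; machine=50; user=500
total=$((system + machine + user))

# Under full CPU contention, each sibling gets weight/total of the CPU
awk -v s="$system" -v m="$machine" -v u="$user" -v t="$total" 'BEGIN {
  printf "system.slice:  %.1f%%\n", 100 * s / t
  printf "machine.slice: %.1f%%\n", 100 * m / t
  printf "user.slice:    %.1f%%\n", 100 * u / t
}'
```

With these weights, `user.slice` would receive roughly 77% of the CPU when everyone is demanding it.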
Control groups are mounted under `/sys/fs/cgroup`; you can also view them with `systemd-cgls`. The default systemd slices should be listed there. To view resource usage by control group in real time, you can use `systemd-cgtop`.
My personal configuration for desktop usage⌗
Knowing how control groups work, we can leverage the systemd slices to favor the user session slice, which improves the responsiveness of a desktop system when under load.
Systemd offers a command under `systemctl` to set slice properties and persist their values so they apply at boot.
We want to prioritize user sessions over system services, and we want to diminish the impact of VMs running in background. To do so, we can run:
```shell
sudo systemctl set-property machine.slice CPUWeight=50
sudo systemctl set-property machine.slice IOWeight=50
sudo systemctl set-property system.slice CPUWeight=100
sudo systemctl set-property system.slice IOWeight=100
sudo systemctl set-property user.slice CPUWeight=500
sudo systemctl set-property user.slice IOWeight=500
```
I personally use those values on my system, though they might be too aggressive for some people. As always, you can try this out yourself and see what values work best for your use case.
The values are stored in `/etc/systemd/system.control`. To reset the configuration to the defaults, remove the files in that directory.
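You can also check what is currently in effect without digging through those files; a sketch using `systemctl show`, which prints the effective values of the properties set above:

```shell
# Show the effective weights for each slice
systemctl show -p CPUWeight -p IOWeight user.slice
systemctl show -p CPUWeight -p IOWeight system.slice
systemctl show -p CPUWeight -p IOWeight machine.slice

# List the persisted drop-ins created by set-property
ls /etc/systemd/system.control/
```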
Controlling CPU resources inside a user session⌗
You can also use cgroups to control CPU resource sharing within a user session. Most desktop environments nowadays use cgroup scopes to containerize applications. If you are not using a desktop environment, you can roll your own scoping with `systemd-run`. For example, to spawn a user service or a user application you can run:
```shell
# Spawn network applet in the user session slice
systemd-run --user --slice=session.slice --unit=nm-applet-icon /usr/bin/nm-applet --indicator

# Spawn a terminal as an application under app.slice
systemd-run --user --scope --slice=app.slice $term
```
And integrating this into your application launcher:

```shell
bindsym $mod+space exec "rofi -modi drun,run -show drun -run-command 'systemd-run --user --scope --slice=app.slice {cmd}'"
```
If you run `systemd-cgls`, you will find that services run in `session.slice` while applications run in `app.slice`. Each service or application also has its own unit (service or scope), allowing you to set priorities for each one of them. Remember, cgroups are hierarchical, so you can fine-tune how resources are allocated for user services and for each user application as well.
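Since each application gets its own unit, you can tweak an individual one the same way as the slices. The unit name below is hypothetical; take the real one from `systemd-cgls` output:

```shell
# Deprioritize one application scope. --runtime keeps the change in memory only,
# which fits scopes: they are transient and vanish when the application exits.
systemctl --user set-property --runtime app-firefox.scope CPUWeight=50
```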
Automatically prioritizing foreground applications⌗
One of the tricks of macOS’s UI is prioritizing foreground applications over background applications. We can do the same on Linux with cgroups and some hacking.
System76 has a project called `system76-scheduler` that accomplishes this with process niceness, but it is tightly coupled to their COSMIC desktop environment. Cgroup weights also override process niceness across groups, so rolling our own cgroup-based scheduler should yield better results.
This will vary according to your configuration, but in short, what we need to do is:
- Listen to window focus events
- From the PID of the focused window, find its scope within `app.slice`
- Increase its CPU weight and reset all other applications' CPU weights
As a bonus, you can increase the weight of `app.slice` as a whole to give it priority over user services, or even create a custom slice with different CPU weights (I have a custom slice for WM-related processes with a very high CPU weight, so that the UI does not lag even when I'm compiling).
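Custom user slices are just unit files. A minimal sketch for a hypothetical `wm.slice` for window-manager helper processes could look like this (unit and binary names are examples):

```shell
# Create a custom user slice with a high CPU weight
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/wm.slice <<'EOF'
[Slice]
CPUWeight=1000
EOF

# Make the user manager pick up the new unit
systemctl --user daemon-reload

# Launch a WM helper into the new slice
systemd-run --user --slice=wm.slice --unit=waybar /usr/bin/waybar
```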
On Sway, this is how we can hack this together. You can do the same on your WM or DE by changing the command that subscribes to window focus changes:
```shell
#!/bin/bash
# bash is required: the reset loop below relies on brace expansion

# Get the user session's root cgroup
user_cgroup=$(systemctl --user status | grep CGroup | head -n 1 | awk '{print $2}')

# User top-level slices
app_slice="$user_cgroup/app.slice"
sway_slice="$user_cgroup/sway.slice"

# Adjust top-level cgroup priorities
echo 1000 > "/sys/fs/cgroup$app_slice/cpu.weight"
echo 3000 > "/sys/fs/cgroup$sway_slice/cpu.weight"

# Subscribe to window focus change events (the payload must be valid JSON)
swaymsg -t subscribe -m '["window"]' |
  jq -c --unbuffered 'select(.change == "focus") | .container.pid' |
  while read -r pid; do
    # Get the process cgroup and its top-level scope cgroup
    cgroup=$(grep app.slice "/proc/$pid/cgroup" | cut -d ':' -f 3)
    scope=$(echo "$cgroup" | sed 's/\.scope.*/.scope/')

    # Reset all app cgroup weights
    for s in "/sys/fs/cgroup$app_slice"/*.{scope,service,socket}; do
      [ -e "$s/cpu.weight" ] && echo 100 > "$s/cpu.weight"
    done

    # Prioritize the focused window's cgroup
    if [ -n "$scope" ]; then
      echo 1000 > "/sys/fs/cgroup$scope/cpu.weight"
    fi
  done
```
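You can sanity-check the scope-extraction step in isolation: the `sed` expression truncates everything after the first `.scope` component. The path below is a made-up example:

```shell
# Hypothetical cgroup path for a process running inside an app scope
cgroup="/user.slice/user-1000.slice/user@1000.service/app.slice/app-foot-1234.scope/tty"

# Truncate everything after the scope component, as the focus script does
scope=$(echo "$cgroup" | sed 's/\.scope.*/.scope/')
echo "$scope"
# prints /user.slice/user-1000.slice/user@1000.service/app.slice/app-foot-1234.scope
```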
If done correctly, you should see the `cpu.weight` values change as you focus windows. On my machine, I can see this under `/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service`:
```
app.slice/run-r07a39ab5ca2d406fa4bcc729a5dbd1b2.scope/cpu.weight:100
app.slice/run-r57563482d50d4ce18be335344a7cf977.scope/cpu.weight:1000
```
Notice the last scope with a `cpu.weight` of 1000, compared to the others at 100. In practice, under CPU contention, the kernel should give the focused window 10 times more CPU time than the other windows. When the CPU is idle, all windows can use as much CPU as they want.
The impact of this arrangement is very noticeable under stress. An easy way to test that it is working: open a terminal and run `stress --cpu 32`, then open two additional terminals and run `find /` in each. As you switch focus from one terminal to the other, you will see that the focused terminal runs much faster.
Other neat stuff you can do with slices⌗
You can apply any type of resource control in a slice. If you want to be even more aggressive, you can hard-limit CPU or memory usage with `CPUQuota` and `MemoryMax`, or be more lenient with `CPUWeight` and `MemoryHigh`.
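For example, to cap the VM slice (the values here are illustrative):

```shell
# Hard limits: at most the equivalent of two full cores and 8 GiB of RAM;
# exceeding MemoryMax gets the slice OOM-killed
sudo systemctl set-property machine.slice CPUQuota=200% MemoryMax=8G

# Softer alternative: above 6 GiB, memory is aggressively reclaimed
# instead of the slice being killed
sudo systemctl set-property machine.slice MemoryHigh=6G
```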
Users with hybrid or asymmetric CPU configurations, which have Efficiency and Performance cores, can force slices to run exclusively on either of the two with `AllowedCPUs`. For example, you can configure system services and unfocused windows to run only on efficiency cores, while forcing user sessions and focused windows to run only on performance cores.
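As a sketch, assuming a hypothetical CPU where cores 0-7 are performance cores and 8-15 are efficiency cores (check your own topology with `lscpu`):

```shell
# Pin system services and VMs to the efficiency cores
sudo systemctl set-property system.slice AllowedCPUs=8-15
sudo systemctl set-property machine.slice AllowedCPUs=8-15

# Keep user sessions on the performance cores
sudo systemctl set-property user.slice AllowedCPUs=0-7
```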
References⌗
https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html
https://www.man7.org/linux/man-pages/man5/systemd.slice.5.html
https://www.man7.org/linux/man-pages/man5/systemd.resource-control.5.html