I installed Ubuntu Desktop on a server machine so I could run some Docker stuff, and also set up Oracle VirtualBox to run some guest machines. My expectation was that I could set up a “vboxuser” account for running these, launch the GUI client, set up my machines, and run them headless. This all worked… until I logged out of my desktop session. All my guest machines would instantly crash out. Even if I sent them an ACPI Shutdown they would instantly stop. It’s not even the same as “Power Off”, since the VM processes themselves were terminated ungracefully, leaving the guests in an “Aborted” state. That means even VirtualBox didn’t expect the processes running each VM to be killed so ungracefully.
My expectation
The expectation I had was that I could launch various guests in headless mode, then log out of my user desktop session, and they would continue to run in the background. On Windows I could do exactly this, then log back in, open the VirtualBox client, and see them still running.
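For reference, the shell side of that workflow looks like this (“myguest” is just a placeholder name):

# list registered guests
VBoxManage list vms

# start a guest with no GUI window attached
VBoxManage startvm "myguest" --type headless

# later, ask it to shut down gracefully via ACPI
VBoxManage controlvm "myguest" acpipowerbutton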
The solution
After a lot (I mean a lot) of fucking around, trying different VirtualBox releases, trying VBoxManage scripts, trying everything I could try… a friend suggested something. He said that on newer releases of Ubuntu, Lennart Poettering had fundamentally changed the way various distributions of Linux handle user processes… so now, BY DEFAULT, whenever you log out of a user session, ALL YOUR PROCESSES ARE TERMINATED. This is not how it used to be, and people hate him for it… me included.
What you need to do to let background processes continue running even after you log out of your terminal or GUI session is to enable “linger”. You do it like this:
sudo loginctl enable-linger [user]
Setting that one setting meant I could log in, spin up a VM guest in headless mode, then log out and it would continue to run. Holy. Shit.
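If you want to sanity-check that the setting took, the Linger property should come back as “yes” (swap in your own account for vboxuser):

loginctl show-user vboxuser --property=Linger

# systemd also drops a marker file per lingering user
ls /var/lib/systemd/linger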
Ooh, this is exactly my cup of tea! Amal, my opinions and suggestions below are not specifically targeted at you, I’m sure you’ve got your stuff covered. This is mostly for others running into the same issues.
Relevant Credentials: CompTIA Linux+, sysadmin at work in all but title, and I have run exclusively GNU+Linux (may I interject) at home for years, including servers.
I would heavily, heavily suggest moving away from Ubuntu Desktop. The closest one-to-one OS that avoids many of Ubuntu’s issues entirely would be Debian. You still get apt, and it is more or less the same. For an even smoother install process, LMDE would also work just fine.
@Hamspiced makes a great point above with Proxmox. It is a wonderful VM host, and if you are mostly using a machine as such, I can’t give much of a better suggestion. I do personally use TrueNAS-Scale on my main server instead, because the primary use is as a… well… NAS, and I only have a little bit of virtualization going on there. I have some other reasons for that, but that’s not worth getting into here.
Now about VirtualBox…
I’ve found that it runs very poorly overall, especially when running anything with a GUI. I much prefer KVM (Kernel-based Virtual Machine), with virt-manager as an interface when a GUI is available. Much more stable in my experience, runs a lot faster, and any sort of hardware pass-through is an absolute breeze.
I believe the main performance issue with VirtualBox is that it is a type 2 hypervisor, running on top of the host OS with much of the device stack emulated in userspace. That is not the case with KVM: it is part of the Linux kernel itself (effectively a type 1 hypervisor), so guest instructions run almost directly on the hardware in question.
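If anyone wants to try it, the headless workflow maps over pretty directly. A rough sketch, assuming the libvirt packages are installed and a guest named “myguest” is already defined:

# see every guest libvirt knows about, running or not
virsh --connect qemu:///system list --all

# start a guest with no display attached
virsh --connect qemu:///system start myguest

# graceful ACPI shutdown, same idea as VirtualBox's acpipowerbutton
virsh --connect qemu:///system shutdown myguest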
I will look into Debian for sure. I’m only familiar with Ubuntu because of the server droplets I deploy on DigitalOcean. This machine is a server in my basement. I don’t use the GUI on the server, and I don’t use the GUI on the guests either. The only reason I’m using VirtualBox is host OS portability. At the moment I’m not finding any of my guests to have performance issues, as they are all low-utilization but high-customization setups.
My thought is that if I ever needed to, I could easily spin up one of the guests on my Windows workstation if for some reason the server went TU.
For NAS storage I use Synology but have used FreeNAS in the past.
If you ever want a playground environment on Proxmox I can add you a user and assign you some open VMs. May take a while to get the tunnel set up for you, but it’s honestly cake.
I’m a little more than a novice and it was an absolute breeze to setup and deploy.
And I’m pretty sure Proxmox is built on an Ubuntu kernel
“Proxmox Virtual Environment is based on Debian GNU/Linux and uses a custom Linux Kernel.”
and
“Proxmox Virtual Environment is a powerful open-source server virtualization platform to manage two virtualization technologies - KVM (Kernel-based Virtual Machine) for virtual machines and LXC for containers”
Ubuntu is generally a nightmare, and I stopped considering it altogether when it started shipping with ads. Debian is significantly better! Maybe I’m weird, but I’ll suggest playing around with CentOS in a VM as well, or perhaps even Slackware.
I sure miss having a few servers around for home lab purposes… But I wasn’t going to travel internationally with a bunch of old and dirty computers. Maybe I should buy a few NUCs.
It’s so hard to keep up with massive FOSS projects anymore. It seems I stop using one thing for a year and I come back to either massive fundamental changes or zero updates.
Wait, CentOS’s final release is from 2021… So yeah, I’m old and haven’t messed with x86 servers since a couple of events flipped my life upside down.
Thanks for the heads up and the distro recommendations. I daily drive a Debian box, but I prefer something a bit more enterprise for servers. Although I’m running a Debian variant on the Raspberry Pis that have been my servers for the time being.
This is true for almost all Debian-based distributions. That command is also needed if you want your containers (Docker, Podman, etc.) to survive a log out. It isn’t just desktop systems either: if you run remote Debian servers, you’ll need to enable linger for VMs and containers to survive disconnecting from an SSH session.
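A rough sketch of the container case (the user and unit names here are made up): enable linger, then launch the container under that user’s systemd instance so it outlives the SSH session:

sudo loginctl enable-linger deploy

# as the "deploy" user: run the container as a transient user service
systemd-run --user --unit=nginx-test podman run --rm -p 8080:80 docker.io/library/nginx

# still running after you log out and back in
systemctl --user status nginx-test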
100% second/third the suggestions on swapping to Debian. I’ve been using it exclusively at work for the past few years (both desktop and servers) and things just go so much smoother than Ubuntu.
That said, my top choice is actually running Debian on Windows in WSL2. You get the best of both worlds, and with Docker Desktop being as good as it is now on Windows, I’ve had much less need for actual VMs. WSL2 has X11 support, so you can run Linux desktop apps in Windows. Networking can be a bit funky, but it is a pretty seamless experience, and more resource-efficient than a VM. PulseAudio is still a giant PITA to set up though.
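Getting Debian into WSL2 is basically a one-liner from an elevated PowerShell prompt these days:

wsl --install -d Debian

# confirm it landed on WSL2, not WSL1
wsl --list --verbose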
It might be possible if you replace the repositories and do a dist-upgrade or something along those lines. But I’d gravitate towards installing it from scratch.
Ditto what enginerd said. A fresh install + installing software and configuring everything will be quicker than trying to do an in-place swap.
I’d recommend picking a Debian flavor that includes your desired desktop environment built in. The newest stable release of Debian is Debian 12, Bookworm. By default, Bookworm ships with the GNOME desktop environment, which is also the default in Ubuntu. So if you like the look and feel of the Ubuntu desktop experience, you can just download the stock Debian ISO, copy it to a USB and boot it up. But Debian also has pre-bundled flavors with other desktop environments, like KDE Plasma (very similar UX to Windows), Cinnamon, XFCE (lightweight, good for lower-end systems) and others.
Downloading one of the live image .iso files and then copying it to a USB is better/easier than using Debian’s installer images. You can find all of the ISOs here: Index of /cdimage/release/current-live/amd64/iso-hybrid. Once you download the iso (such as debian-live-12.9.0-amd64-gnome.iso), you can write it to a USB drive from Ubuntu with cp path/to/debian.iso /dev/sdX, where X is the letter assigned to your USB stick. Make sure you copy to the USB device itself, and not a partition on it.
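Concretely, the write step looks like this. sdX is a stand-in, so triple-check the device letter with lsblk first; this will clobber whatever is on the stick:

# identify the USB stick, e.g. /dev/sdb (the whole device, not /dev/sdb1)
lsblk

sudo cp debian-live-12.9.0-amd64-gnome.iso /dev/sdX

# flush write buffers before pulling the stick
sync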
Once you have the Live USB you can either install it to your system using the new, fancy installer that will be on your desktop after you boot into the live OS, or via the legacy installer you see from the boot menu.
One thing I noticed immediately… even during installation, while hardware was being detected… Debian sorted out iDRAC’s ridiculous default high-speed fan behavior. Calmed them right down automatically. If this behavior holds after installation, count me impressed!
Somehow I fucked the install trying to remove GNOME and replace it with xfce4… the problem with GNOME on Debian 12 is how RDP works… or rather doesn’t. With Ubuntu you have the option of allowing remote sessions (no user logged in on console), but for whatever reason Debian GNOME didn’t. Wanted to try xrdp with xfce4, but yeah, I removed GNOME, went to reboot for some reason, and GRUB came up “no kernel”… like wtf, hah, ok.
Went through a complete reinstall with the USB stick, and the fuckin GRUB command screen is all I get… type “boot” and it says something about no kernel. Ugh…
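For anyone else stranded at that bare grub> prompt: you can usually limp into the installed system by loading a kernel by hand. The partition and paths below are guesses, so use ls to poke around first:

ls                              # list the disks/partitions GRUB can see
set root=(hd0,gpt2)             # whichever partition actually holds /boot
linux /vmlinuz root=/dev/sda2   # kernel, plus the real root device
initrd /initrd.img
boot

If even that fails on a fresh install, the usual fix is to boot the live USB again, chroot into the installed system, and run grub-install and update-grub.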