From CoLinux to MoreLinux....
Posted by loziomario@reddit | linuxadmin | View on Reddit | 45 comments
Hello.
Maybe some of you know the old project called "coLinux":
Cooperative Linux is the first working free and open source method for optimally running Linux on Microsoft Windows natively. More generally, Cooperative Linux (short-named coLinux) is a port of the Linux kernel that allows it to run cooperatively alongside another operating system on a single machine. For instance, it allows one to freely run Linux on Windows 2000/XP/Vista/7, without using a commercial PC virtualization software such as VMware, in a way which is much more optimal than using any general purpose PC virtualization software. In its current condition, it allows us to run the KNOPPIX Japanese Edition on Windows.
Yeah, it's very old and hasn't been maintained for a long time, and I'm not interested in resurrecting it (nor do I have the knowledge to do it), BUT I'm interested in gathering some information about a similar project that I have in mind. What if, instead of having a Linux kernel that runs cooperatively alongside Windows, we had a Linux kernel that could run multiple Linux distributions, without using virtualization software? Would it be a very tricky project to work on? How complicated?
Is anyone interested in studying how doable it is? What kind of technical problems could there be? What limitations? How interesting is this project in your opinion?
Korkman@reddit
Let's sort technologies first.
Are you familiar with WINE? CoLinux is to Linux-on-Windows what WINE is to Windows-on-Linux. It is a single "port" of the Linux kernel, translating between userspace (the programs that make up a distro) and the true OS kernel, which is Windows. WINE is analogous, with the roles switched, just more focused on single applications.
So these are translation layers. Hope this analogy helps understand what they do.
For multiple distros to run in parallel, there is no need for translation, as the userspaces all "talk Linux" anyway. A single Linux kernel is all they need. Running multiple distros is as old as chroot, and it has been expanded upon in several ways since: LXC and Docker, but also Distrobox, Vanilla OS's "apx", and the wild Bedrock Linux.
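As a concrete sketch of that idea, here is roughly what Distrobox looks like in practice (the container name and image are just illustrative; Distrobox wraps podman or docker under the hood):

```shell
# Run a full Arch userland on top of the host's kernel.
distrobox create --name arch-box --image docker.io/library/archlinux:latest
distrobox enter arch-box

# Inside the box the userland is Arch, but there is no second kernel:
uname -r    # reports the HOST kernel version
```

No translation layer is involved: the Arch binaries make ordinary Linux syscalls straight into the host kernel.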
loziomario@reddit (OP)
The most important limitation of containers is that they don't have their own kernel. Maybe the CoLinux method would give the container some control over the host kernel, with some advantages.
Korkman@reddit
If you run a distro in a plain chroot, there are no limitations facing userspace. Some distros introduce patches to their kernels for specific use-cases like real-time or low latency audio. But if you need that, running multiples of anything isn't something you would want ;-)
So it boils down to: choose a kernel to access the hardware, run any amount of distros you want below that. What is it you think dedicated kernels would bring to the table other than overhead?
loziomario@reddit (OP)
What is this?
https://l4re.org/download/snapshots/pre-built-images/arm-v7/
It seems to me that it does what CoLinux did a long time ago, but it works on more CPU architectures. Am I wrong? With L4Linux I can run multiple Linux distros using the same kernel. I'm very curious to try it on my ARM machine, since there is even a version for 32-bit and 64-bit ARM. It sounds interesting.
Korkman@reddit
It's a hypervisor for virtual machines. Sounds pretty similar to Xen but with modules. The "Linux" images probably feature a minimal Linux shell to bootstrap VMs.
Permanently-Band@reddit
I think what loziomario is not getting here is that CoLinux and L4Linux run on top of microkernels. There is only one microkernel that Windows can run on, and that won't change, because Microsoft would have to permit it and provide the source code to port it to another microkernel. MkLinux, L4Linux and CoLinux are (or were) microkernel ports of Linux.
The key here is that the microkernels above are designed to abstract most of the hardware from userland allowing greater portability and the possibility of "split personalities" where more than one kernel is used at a time.
Unfortunately, Microsoft has breached the Chinese wall between userland and microkernel for performance reasons; hence the need for CoLinux, which runs cooperatively, partially directly on the NT kernel and partially in Windows userland.
The most viable approach to get what the OP wants, then, is to run everything on top of the NT kernel.
Most of the work has already been done for that to happen, all that would be needed is some kind of "Co-BSD" that has been ported to the NT kernel, possibly leveraging the efforts of those who have already ported BSD userland to Mach and L4. I propose naming that project BSoD4BSD.
I'm guessing that the project is a lot less appealing, and more appalling, once you realize that you'd need to run the whole thing on the NT kernel if you wanted Windows compatibility.
Otherwise what the OP wants is essentially already very nearly available with little effort using the L4 microkernel and L4 ports of whatever sensibly licensed OS you want.
loziomario@reddit (OP)
I'm working on ARM, so there is no VirtualBox. QEMU + KVM works only with an old version of Linux. I'm trying to use Xen, and it should work, but I'm also trying to find some other alternative.
Korkman@reddit
But why not containers?
loziomario@reddit (OP)
Because a container does not allow running a different OS than Linux itself. I want to run FreeBSD together with Linux. From what I read, I can't boot a FreeBSD userland with L4Linux, but I'm interested in understanding if and how I can do it. I will ask around whether the modifications needed to achieve this goal are very complicated.
Korkman@reddit
Xen isn't a bad choice for that, then. How do you think you can improve on that? In a broad sense it's exactly what you wanted: dedicated kernels cooperating with each other (like, all virtio-drivers and similar approaches are cooperation with a host kernel).
There should be Linux KVM available at least for arm 64-bit, though, and if you don't mind the performance penalty you could go for qemu without hw assisted acceleration on 32-bit.
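For comparison, here is a sketch of the two QEMU modes mentioned above (the kernel image and rootfs paths are placeholders; the flags are standard qemu-system-aarch64 options):

```shell
# With hardware acceleration on an arm64 host that exposes KVM:
qemu-system-aarch64 -M virt -cpu host -enable-kvm -m 2048 \
    -kernel Image \
    -drive file=rootfs.img,if=virtio,format=raw \
    -append "root=/dev/vda console=ttyAMA0" -nographic

# Pure emulation: runs anywhere (including 32-bit hosts), but
# every guest instruction is translated in software, hence slow:
qemu-system-aarch64 -M virt -cpu cortex-a53 -m 2048 \
    -kernel Image \
    -drive file=rootfs.img,if=virtio,format=raw \
    -append "root=/dev/vda console=ttyAMA0" -nographic
```

The only difference between the two invocations is `-cpu host -enable-kvm` versus an emulated CPU model; that one flag is what decides whether you get near-native speed or software emulation.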
loziomario@reddit (OP)
Nope. QEMU without KVM is so slow that I'd have time to die before the virtualized OS window comes up.
GertVanAntwerpen@reddit
WSL2 is already many things coLinux did in the past. I don’t see any advantages of a new variant of coLinux.
Permanently-Band@reddit
A big advantage would be that it doesn't kick windows into a hypervisor and break every other virtual machine irreparably like WSL2 does, and wouldn't need to be a clean room reimplementation of the whole kernel like WSL1 because it could be released under a sensible license.
This approach has more in common, technically, with WSL1 than WSL2. Unfortunately that means integration with the NT kernel would probably suffer performance degradation just as badly as WSL1, but at least it would be open source, and development couldn't be stopped at the whim of a corporate executive somewhere.
loziomario@reddit (OP)
I don't use WSL2; I don't use Windows much. I like Linux and FreeBSD. So an even nicer idea is to create a coLinux variant that allows the Linux kernel to cooperate with FreeBSD. That's even nicer than a cooperation between two Linuxes.
signed-@reddit
This already works and exists: Distrobox (unless you count Docker, Podman etc. as virt software).
loziomario@reddit (OP)
Not the same technology. I can't put Windows or FreeBSD in a container. With the CoLinux approach I could.
safrax@reddit
You're conflating containerization and coLinux. They do vastly different things. CoLinux is essentially a syscall emulation layer that plugs into the Windows kernel as a kind of ring 0 device driver. Meanwhile, containers just utilize the host kernel, with no emulation layer.
I think the closest modern equivalent we have to something like coLinux is the FreeBSD Linuxulator, which again has a pretty severe performance penalty and, as a compatibility layer, is not that great. And the Linuxulator isn't really getting a whole lot of development these days: FreeBSD 14.0's Linuxulator implements Linux 4.4.0's syscalls, and 4.4 is from January 2016, which is almost 8 years old.
loziomario@reddit (OP)
It's not me who's conflating them. I've tried several times to explain to the users here that the technology is different.
loziomario@reddit (OP)
We are talking about different technologies. The description of Cooperative Linux explains that it runs a specially modified Linux kernel that is cooperative, in that it takes responsibility for sharing resources with the NT kernel and not instigating race conditions. I'm not sure that a Linux that runs inside a container has a kernel that cooperates with the kernel of the host OS.
RigourousMortimus@reddit
"the coLinux kernel is run inside the Windows enviroment, no computer is emulated as VMWare and VirtualBox do."
https://colinux.fandom.com/wiki/FAQ at the end
Ultimately you have one kernel (or similar) controlling access to physical memory, CPU etc.; coLinux lives within Windows. If you run Linux, BSD etc. under Windows (or vice versa), then by definition you need a virtualisation layer through which the subordinate environment interacts with the resources made available by the managing environment. In a container-like environment, you just have a single kernel that runs the contained environments in a way that their 'userland' aspects don't interfere with each other.
Or you go for physically distinct machines.
If you're proposing something that is lighter than virtualisation but doesn't have a single managing kernel, you'll need to explain how things like memory are being shared.
loziomario@reddit (OP)
Nope. I'm not interested in running BSD under Linux through virtualisation. In the CoLinux variant that I find interesting, the idea is to run BSD under Linux as a process, or Linux under FreeBSD as a process. It's the general CoLinux idea that I like, but it would have to be reached differently from the code CoLinux currently uses, because Windows would not be involved.
RigourousMortimus@reddit
In what way is that not virtualisation ?
The subordinate kernel isn't working with 'real' memory or CPU cores but just what the managing kernel, through virtualisation, makes available to it.
HTX-713@reddit
https://linuxcontainers.org/lxc/introduction/
Literally what containers were created for.
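For the record, the LXC workflow from that page boils down to something like this (container name, distro and release are just examples; requires root or a configured unprivileged setup):

```shell
# Fetch a prebuilt Ubuntu userland and run it as a system container:
lxc-create -n ubuntu-box -t download -- -d ubuntu -r jammy -a arm64
lxc-start  -n ubuntu-box
lxc-attach -n ubuntu-box -- uname -r   # still the host's kernel
```

The whole distro boots its own init and services, but all of it runs as ordinary processes under the one host kernel.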
loziomario@reddit (OP)
I'm not sure that a Linux container cooperates with the kernel of the host OS.
Fr0gm4n@reddit
The container uses the host kernel. It's kept separate through various technologies and features like cgroups. They are very specifically not VMs because they don't emulate/virtualize an entire computer.
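You can see this shared-kernel property directly (assuming Docker is installed; any pair of images works):

```shell
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # the SAME version from inside Alpine
docker run --rm debian uname -r   # and again from inside Debian
```

Three different userlands, one kernel: nothing is emulated, so there is no second kernel to "cooperate" with.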
paulstelian97@reddit
The container uses a host Linux kernel. The project OP refers to creates a Linux container on a non-Linux host without employing virtualization or emulation.
loziomario@reddit (OP)
Not exactly a common Linux container, but a light virtualization technique that uses the CoLinux approach and would allow us to run Windows or FreeBSD together with Linux. No container today can virtualize Windows or *BSD.
loziomario@reddit (OP)
Exactly. How did you understand the point so fast?
BinaryGrind@reddit
There isn't a point to running two or more Linux kernels the way CoLinux does, because you'd essentially just be running the same kernel twice. The primary differences between Linux distributions are their methodology and userland; the Linux kernel isn't really changing much between them. This is pretty evident since you can download the kernel source on any distro, compile it, and use the kernel you just built to replace the distro's default. Since the kernel is the same, you can just run another distro's userland under the same kernel using chroot or a container.
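The chroot version of that is about three commands (the target directory, suite and mirror are just examples; requires root and the debootstrap tool):

```shell
# Bootstrap a complete Debian userland into a directory on any distro:
sudo debootstrap stable /srv/debian-root http://deb.debian.org/debian

# Enter it under the host's kernel:
sudo chroot /srv/debian-root /bin/bash
# You are now "in" Debian, yet uname -r still shows the host kernel.
```

That is the whole trick: a second distro is just a second filesystem tree of userland programs, all served by the one kernel already running.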
CoLinux was neat because it let you run very dissimilar OS kernels, especially at a time when virtualization was in its infancy and containers didn't exist (chroot was around, tho), but it's pointless in the context of running multiple Linux kernels.
And it doesn't matter now anyway: since paravirtualization exists, you're essentially doing what CoLinux did, but with any supported OS.
loziomario@reddit (OP)
OK, even better. You've suggested an evolution of CoLinux: the same kernel, but the secondary userland runs as a set of processes under the first one.
Fr0gm4n@reddit
That's literally just a container.
loziomario@reddit (OP)
No. With LXC you still have just one Linux kernel, but to processes it "feels" like they have their own kernel to themselves, when actually they are just isolated from the other processes. The kernel has gotten better at giving processes their own separate "environments".
ryebread157@reddit
I don’t think there’s much interest in this, you essentially get this with docker. When I want to run Ubuntu on my Rocky Linux box, I just use docker.
loziomario@reddit (OP)
Nope, I don't get what I want. I want to virtualize FreeBSD or Windows using a light virtualization technique, and I can't use Docker for that.
slylte@reddit
What's your end goal? What are you trying to accomplish that isn't available with modern virtualization techniques?
loziomario@reddit (OP)
For example, I'm not sure that on Linux we can run Windows in a container (let's say I don't want full virtualization with qemu-kvm), and I'm sure we can't do it with a *BSD. There is a space for coLinux.
slylte@reddit
But what are you trying to do? You can run linux on a container on Windows with a decent level of performance.
safrax@reddit
Overhead. Things like coLinux/WSL1 have a ton of overhead and no real good way to get around that issue. Virtualization has less overhead and fewer compatibility issues these days. Containers have even less overhead. This is why WSL went from a syscall re-implementation of Linux, similar to coLinux, in WSL1 to just straight-up virtualization in WSL2.
There's no need for something like coLinux anymore.
loziomario@reddit (OP)
For example, I'm not sure that on Linux we can run Windows in a container (let's say I don't want full virtualization with qemu-kvm), and I'm sure we can't do it with a *BSD. There is a space for coLinux.
safrax@reddit
There is no space for such a thing. There's too much overhead when compared to the alternatives. It is and was a technological dead end. There's been plenty of similar dead ends over the years, things like OpenMOSIX come to mind as another example.
What you want doesn't exist for a reason. Not because it's impossible, it clearly isn't, but because there's alternatives out there that aren't as fragile and are technologically a lot better in terms of the solution they offer.
Hell, virtualization these days is probably closer to what you're envisioning than you realize. Hypervisors are pretty lightweight now, and with all the virtualization tricks baked into processors, hypervisors, and guest OSes, there's very little overhead. You'll get within a few percent of native performance with virtualization, versus something like coLinux, which has to cooperate with another OS and translate between two vastly differing ideologies and technologies.
mkosmo@reddit
It’s not maintained because there are better solutions available today. WSL2 being the top of the list.
loziomario@reddit (OP)
I don't use Windows.
ubernerd44@reddit
coLinux is a really cool project and I would love to see it come back. The hardest part is going to be finding people who have the time and the skills to work on it.
BinaryGrind@reddit
CoLinux was cool for its time, but doesn't really make sense in the present. Virtual machines provide better performance and stability than CoLinux ever could, especially with paravirtualization.