amarao_san@reddit
I feel it's an odd thing. I have no idea how hardware partitioning will work IRL. Maybe it will be workable on servers, but on desktops it all falls apart around the 'special' role of the GPU (e.g. you can't meaningfully give your iGPU to a virtual machine and keep the discrete GPU for the host). I suspect it's the same for hardware partitioning.
Also, who is handling ACPI?
nekokattt@reddit
Presumably the main kernel that was booted into, the one that bootstrapped the other kernels?
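If it builds on the existing kexec machinery (an assumption on my part), the main kernel would stage each secondary image with kexec_file_load(2). A minimal sketch, with a made-up image path and command line:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/kexec.h>

int main(void)
{
    /* Hypothetical secondary kernel image. */
    int kernel_fd = open("/boot/vmlinuz-secondary", O_RDONLY);
    if (kernel_fd < 0) { perror("open"); return 1; }

    /* Made-up command line: pin the spawned kernel to its own CPUs
     * and a carved-out memory range. */
    const char cmdline[] = "console=ttyS1 maxcpus=2 memmap=2G$4G";

    /* cmdline_len must include the trailing NUL; no initramfs here. */
    if (syscall(SYS_kexec_file_load, kernel_fd, -1,
                sizeof(cmdline), cmdline, KEXEC_FILE_NO_INITRAMFS) < 0) {
        perror("kexec_file_load");
        return 1;
    }

    /* A normal kexec would now jump into the staged kernel via
     * reboot(LINUX_REBOOT_CMD_KEXEC), replacing this one; the
     * multikernel proposal instead keeps both kernels running. */
    return 0;
}
```

The interesting part is the handover: a plain kexec replaces the running kernel, whereas the multikernel work would have to keep both kernels alive on disjoint CPUs and memory.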
amarao_san@reddit
And how are the other kernels prepared for a sudden sleep event? The hardware must be prepared (all of it, not some selected subset), and resume must be ready.
Which implies strong cross-kernel communication. Which is not exactly isolation...
nekokattt@reddit
That is the whole point of the work being proposed. If you are interested, I'd suggest asking the author of the proposal, who will be able to supply a satisfactory answer.
Linux at least used to support being run as a userland process on top of another Linux instance acting as the main kernel (User-Mode Linux), if I recall, so this likely follows similar patterns to how that was implemented.
amarao_san@reddit
Userspace Linux doesn't need to handle the hardware aspects of suspend. In the case of partitioning, it must.
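To make it concrete: on a single kernel, the whole machine's suspend path hangs off one entry point. A minimal sketch of triggering suspend-to-RAM from userspace:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Writing "mem" here is suspend-to-RAM for the entire machine;
     * every device, not a subset, must be quiesced first. */
    int fd = open("/sys/power/state", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, "mem", 3) != 3)
        perror("write");
    close(fd);
    return 0;
}
```

Under partitioning, every spawned kernel would have to quiesce its share of the devices before anyone performs that write, which is exactly the cross-kernel coordination problem.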
S1rTerra@reddit
It would be cool if we could run NT and Linux on bare metal at the same time and have them share CPU cores, so you'd get access to native Windows applications with full GPU acceleration while your desktop is a rock-solid Debian install. It's probably possible, but it requires a lot of work and compromises (plus, NT the kernel isn't open source), to the point where it wouldn't be worth it.
How would running Linux and a BSD flavor at the same time go, though?
Specialist-Delay-199@reddit
When we say multiple kernels, we mean multiple copies of the Linux kernel. The concept is called multikernel and is seen especially in places where security is a must. OSDev has a nice article on it: https://wiki.osdev.org/Multikernel
What you're thinking of is impossible, because each kernel handles the hardware differently and it wouldn't take long before race conditions destroyed the system entirely.
Multikernels are particularly suitable for systems with multiple incompatible cores, e.g. due to different feature sets (for example, a RISC-V system with one set of cores having 128-bit vectors and another set having 512-bit vectors).
nekokattt@reddit
Are such systems common/does Linux not cope with this already if they are common?
Specialist-Delay-199@reddit
I assume it's done for embedded devices, which could definitely make use of a multikernel design. If they're doing it, they must have a reason to.
RoomyRoots@reddit
I don't think you understood what this patch is about.
MarzipanEven7336@reddit
Not clicking the link, but yeah, it's easy to set up.
Hosein_Lavaei@reddit
It's not what you think. It's running multiple kernels at the same time on the same machine, which is very hard to set up.
MarzipanEven7336@reddit
No it’s not.
Hosein_Lavaei@reddit
Maybe just read the article? It is, and the work is now in progress.
MarzipanEven7336@reddit
It’s pointless, a security fucking nightmare for zero benefits. You realize the kernel has to manage the hardware, right? Adding in support for direct scheduling across kernels will be a stupid project.
Morceaux6@reddit
Maybe read the article
MarzipanEven7336@reddit
I did
Morceaux6@reddit
Then why are you saying it's pointless? You should have seen the potential benefits if you read it carefully.
MarzipanEven7336@reddit
The article is literally about a commercial product; it even links to it.
https://multikernel.io/
How is this at all relevant to this thread? It’s literally a fucking ad.
nekokattt@reddit
It is okay grampa... multikernels don't exist... let's get you back to bed.