amarao_san@reddit
A few years ago I did something funky. I mounted a FUSE filesystem (sshfs), put a file there, opened it as a loop device, put dm-crypt on it, created a filesystem on that and mounted it.
My clever idea was to use any remote server as a secure filesystem (one user at a time).
What did it do? Constant freezing and hanging of the computer. I don't know if it was ever fixed, but it left me with a bad taste about using FUSE as the backing device for a block device.
Not sure if this changed or not.
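For the curious, here is a rough sketch of that stack, with a hypothetical host, paths and mapper name (my reconstruction of the setup described above, not the original commands; it assumes root and a reachable SSH server):

```python
#!/usr/bin/env python3
"""Rough sketch of the sshfs -> loop device -> dm-crypt -> filesystem stack
described above. Host, paths and mapper name are hypothetical; run as root."""
import subprocess

def run(*cmd, capture=False):
    # Echo the command, raise if it fails, optionally return its stdout.
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, check=True, capture_output=capture, text=True)
    return result.stdout.strip() if capture else None

# 1. Mount the remote server over FUSE (sshfs).
run("sshfs", "user@remote:/srv/vault", "/mnt/remote")

# 2. Create a backing file on the FUSE mount and attach it to a loop device.
run("truncate", "-s", "10G", "/mnt/remote/backing.img")
loopdev = run("losetup", "--find", "--show", "/mnt/remote/backing.img",
              capture=True)

# 3. Layer dm-crypt (LUKS) on top of the loop device (prompts for a passphrase).
run("cryptsetup", "luksFormat", loopdev)
run("cryptsetup", "open", loopdev, "remotecrypt")

# 4. Put a filesystem inside the encrypted mapping and mount it.
run("mkfs.ext4", "/dev/mapper/remotecrypt")
run("mount", "/dev/mapper/remotecrypt", "/mnt/secure")
```

Every block-level read or write against /mnt/secure ends up as file I/O on the sshfs mount, which is where the freezes came from.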
PainInTheRhine@reddit
How close is FUSE now to in-kernel filesystems?
ang-p@reddit
Read the article?
There are two ways in which I can see that question being interpreted, and both are answered.
PainInTheRhine@reddit
I don't see my question answered. There is only a comparison of the old /dev/fuse interface with the new io_uring mode, but nothing that compares it against an in-kernel fs.
ang-p@reddit
What good would comparing a userspace filesystem's transfer speed with that of, for example, an ext4 one be?
Sounds like comparing apples and oranges after giving one of them some plant food.
I suppose you could compare ntfs-3g and ntfs3, but the code behind the two implementations is quite different, so how would that help?
Just seen another post mentioning bcachefs... I suppose you could do something there, although probably not on Debian... But to compare them you would need the same data to be written / promoted and accessed in the same order, and IIRC you can't manually force a rebalance; you can only perform more reads / writes to get the heuristics to trigger one if it feels like it, and hope the outcome is identical - so then you are comparing an orangey orange with an orangeish orange.
And if you are not mindful of it, you further remove your comparison from real-world usage by having to disable the page cache to force reads through FUSE instead of hitting cached data picked up by a block read made when reading from a different location (which, I can't help but wonder, may be what prompted the comment below the graphs).
The graphs should give you an idea, but the improvement is not in the filesystem concerned: it is in speeding up the "chit-chat" between the FUSE server and the kernel, not the actual transfer of data from memory to disk, which the kernel performs at the request of the server on behalf of the application (or vice versa) - hence no specific on-disk filesystem is mentioned.
Yeah, the faster the requests / instructions to read or write data can travel back and forth between application / VFS / in-kernel FUSE code / FUSE server / kernel, the more instructions can be processed in any given time... which can affect the number of times data can be written / read in that time - but quotas aside, the kernel reads and writes the requested data to / from disk at the same pace no matter where the instruction came from.
If people are after transfer speed to / from a disk, they won't be using FUSE unless they have no other acceptable option. (I use Dolphin / KIO for quick memory-stick drag-and-drops for convenience, but manually mount things for larger transfers via rsync / mc / etc., irrespective of the filesystem.)
If people are communicating with things they want to "see" as a disk and there is no alternative method, then, yeah, cool - but will your provider be the bottleneck (as it always was)?
The big beneficiaries of this will be people who are purely desktop users - KIO / GIO will see a boost (by doing nothing more than enabling a flag), so devices mounted through them will benefit - especially those themselves using a FUSE filesystem (also using the flag).
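To make the "chit-chat" concrete: the FUSE server is just an ordinary userspace process answering requests the kernel forwards to it. Below is a minimal read-only sketch using the third-party fusepy bindings (my own illustration, not something from the article) - every stat, directory listing or read an application performs against the mountpoint becomes a round trip into this process, and that round trip is what the io_uring work speeds up.

```python
#!/usr/bin/env python3
"""Minimal read-only FUSE filesystem exposing a single /hello file.
Uses the third-party fusepy bindings (pip install fusepy)."""
import sys, time
from errno import ENOENT
from stat import S_IFDIR, S_IFREG
from fuse import FUSE, Operations, FuseOSError

DATA = b"Hello from userspace!\n"

class HelloFS(Operations):
    # Each method below runs once per request the kernel forwards from the
    # VFS layer - that is the kernel <-> server "chit-chat".
    def getattr(self, path, fh=None):
        now = time.time()
        if path == "/":
            return dict(st_mode=(S_IFDIR | 0o755), st_nlink=2,
                        st_ctime=now, st_mtime=now, st_atime=now)
        if path == "/hello":
            return dict(st_mode=(S_IFREG | 0o444), st_nlink=1,
                        st_size=len(DATA),
                        st_ctime=now, st_mtime=now, st_atime=now)
        raise FuseOSError(ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello"]

    def read(self, path, size, offset, fh):
        return DATA[offset:offset + size]

if __name__ == "__main__":
    # Usage: ./hellofs.py /some/empty/mountpoint
    FUSE(HelloFS(), sys.argv[1], foreground=True)
```

Mount it and run `cat` on the file; each operation travels application -> VFS -> in-kernel FUSE -> this process and back, while the actual movement of data to or from disk is unchanged.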
AyimaPetalFlower@reddit
This whole post seems like bullshit, I won't lie.
ang-p@reddit
OK - state why there is no mention of a filesystem on disk in the article.
Apart from - clue - because at that level it doesn't matter
AyimaPetalFlower@reddit
Doesn't Dolphin / Nautilus mount disks using udisks2?
ang-p@reddit
OK... two questions...
1) (which you ignored) state why there is no mention of a filesystem on disk in the article.
2) How does that get around FUSE if there is no driver in the kernel for that filesystem?
Please explain why, instead of just opining.
Well, go on.....
AyimaPetalFlower@reddit
You're not making sense.
ang-p@reddit
Whatever.
RileyGuy1000@reddit
To know how fast it is in comparison. There doesn't need to be a point other than that.
ang-p@reddit
So build an ext4 driver that uses FUSE instead of the one in the kernel and find out... Nobody is going to do that because it is simply not worth it - the benefit is not at the level of bytes-to-disk, which is why the article makes no mention of a filesystem; just that the machine had 8 cores, to explain why the concurrent jobs only went up to 8.
And then, throw away your driver and your results and keep using the in-kernel one.
Damglador@reddit
That's awesome. Now perhaps I can get back to using bindfs.
SleepingProcess@reddit
Do not forget to read this first:
https://www.armosec.io/blog/io_uring-rootkit-bypasses-linux-security/
mocket_ponsters@reddit
Wow, that was an extremely long article to basically say some anti-virus programs don't yet monitor io_uring calls.
There's no privilege escalation, exploit, or even a CVE for this. It's just a blind spot in some enterprise security monitoring tools that rely exclusively on basic syscall hooking.
SleepingProcess@reddit
Not so bad if an LSM and MAC are in use that will catch such read events.
AyimaPetalFlower@reddit
It would be interesting to see how FUSE compares to native kernel drivers in performance on this branch; bcachefs, for example, supports both FUSE and native.
I don't know if FUSE will ever get to the point where it has near-zero overhead compared to native, but for userspace scheduling some of the best scx schedulers now marginally beat the upstream CPU scheduler (bore):
https://flightlesssomething.ambrosia.one/benchmark/1518