I just ran `sudo rm -rf ~` by mistake.
Posted by 28jb11@reddit | linux | View on Reddit | 498 comments
I've been using linux since 2002 and it's the first time I've done anything like this. I thought it was essentially impossible and anyone who did it is dumb. I guess the egg is on my face!
I may be cooked? Wish me luck!
purpleidea@reddit
Keep a file named `-i` in your home directory. I've had mine there forever, but I also don't make this `rm` mistake, haha. It's still a good prevention against a buggy script.
guigouz@reddit
that wouldn't have helped with `~`
WildManner1059@reddit
actually it would, but not with -rf on there
purpleidea@reddit
Yeah, I don't know how you can do this particular variant by accident, maybe it's fake. But I figured I'd mention the `-i` hack, which prevents 99% of actual mistakes.
decay_cabaret@reddit
Oh I can. I've been distracted before when meaning to hit tab to list the contents of a dir (so I can just use the arrow keys with the resulting output and select the folder I mean to delete), and instead hit the ~ key, but thankfully I caught the mistake before pressing enter.
(I have a bunch of keys reassigned, like ` is ~ because I use ~ far more often than `, so I've got them swapped in regards to which needs shift. Same as my capslock is actually delete.)
whatisuser@reddit
That’s super bad aim! How’d you hit the bottom row when trying to press tab!?
decay_cabaret@reddit
Bro what? The keyboard I use has esc, the `/~ key, tab, caps lock, shift, ctrl from top to bottom. The `/~ key is directly above tab. Though my new keyboard I'm building today when the last parts arrive will be esc, tab, caps lock, shift, ctrl, because it's a 5-row 60% keyboard; esc is ` and ~ depending on whether or not I'm holding fn or shift.
whatisuser@reddit
Brain off + forget not everyone uses UK layout 🤦♂️
PureBuy4884@reddit
i can imagine it happening when attempting to remove ~/something, but hitting enter too early, especially if relying on tab complete (so you’re not typing out the entire word “something”)
agenttank@reddit
when you use *, bash will go ahead and resolve it. In our case it will give back every file that matches:
    ls
    -i  file1  file2  file3
    echo rm *
    rm -i file1 file2 file3
so it will run rm in interactive mode, meaning it will ask y or n for every deletion
did not try, but I think that's it
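For anyone who wants to set the trick up, a minimal sketch (nothing here deletes anything; the `echo` is just a preview):

```bash
# create a literal file named "-i" in your home directory
touch ~/-i          # expands to /home/you/-i, so touch never sees it as a flag

# a careless "rm *" run in ~ now expands to something like:
#   rm -i Desktop Documents Downloads ...
# and rm prompts before each deletion instead of silently wiping everything
cd ~ && echo rm *   # preview the expansion without deleting anything
```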
DutchOfBurdock@reddit
yup..
tandycake@reddit
Does -n not work?
CarloWood@reddit
I can not imagine that that will still work. A filename should now be quoted properly, no? Next you're telling me that if you have a file called "-rf ." ...
joshuakb2@reddit
Quoting doesn't make a difference; if a file's name looks like a flag, the argument will be interpreted as a flag instead of a file path
SRART25@reddit
Capital I, more better.
purpleidea@reddit
Not in your home dir it's not, that's worse than `-i`!! Maybe as part of an alias though. Read the man page to understand.
SRART25@reddit
I don't want to affirm every file; when -I pops up, that's enough to make sure I'm sane.
purpleidea@reddit
You clearly don't understand :rofl:
SRART25@reddit
I understand what they do and how they work. I don't know what you think they do or how you think it would help.
ZincFingerProtein@reddit
set up your rm alias in your .tcshrc or .bash_profile as:
alias rm 'rm -i'
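Worth a small aside: `alias rm 'rm -i'` is csh/tcsh syntax. For bash or zsh the same idea needs an equals sign, e.g. in `~/.bashrc` or `~/.bash_profile`:

```bash
# bash/zsh form of the alias above
alias rm='rm -i'
alias mv='mv -i'   # same idea for mv and cp, if you want it
alias cp='cp -i'
```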
_verel_@reddit
That's so smart thank you!
Przem90@reddit
The best failsafe I have heard of, and mind you, I have been using *nix since '97! I dig that.
maaz@reddit
because it breaks on that file first or?
purpleidea@reddit
Because if you have a `*` expansion version of this command, the `-i` file name will add the `-i` flag to `rm`. You can use the man page to see what this does.
Mammoth-Attention379@reddit
Teach me more
purpleidea@reddit
I have a blog and a mentoring program.
Want to learn dope automation things? I work on this.
maaz@reddit
beautiful
WeAllWantToBeHappy@reddit
That's why you have backups.
iamarealhuman4real@reddit
I'll just grab a backup from
`~/backu`... hmmm...
Enip0@reddit
3-2-1 anyone?
Berengal@reddit
This is a case where a plain btrfs snapshot would be the perfect backup solution. "Backup" is a lot of different things suitable for different purposes and situations.
WildManner1059@reddit
Snapshots, yes. Btrfs is fine. Hopefully OP is aware it doesn't have to be btrfs, and if they're using xfs (or zfs or some others) they still have recovery options. Really, any journaling or snapshotting filesystem.
Hal-Kado@reddit
+1 for snapshots. After you make a mistake like this and then realize you can just roll back to your last snapshot in seconds... you'll never use a FS that doesn't do snapshots ever again. It's a real game changer. I'm pretty paranoid about keeping proper backups, yet I've never actually needed to use one. In contrast, I've relied on snapshots many times, usually in cases where I made a mistake like OP's. And keeping a few weeks of daily/hourly snapshots takes up a pretty insignificant amount of space in most use cases.
UbieOne@reddit
This reminds me to set up a Snapshot config for my ~ dir, too. I believe TW doesn't do this by default.
But yeah, +2 for snapshots.
DopeBoogie@reddit
Yeah that's the other side of btrfs snapshots that a lot of people overlook. Copy-on-write means you can keep as many snapshots as you wish and only the changed data takes up additional space.
So it's not like, if you are using half your disk, you will only be able to make a single additional snapshot. It's extremely unlikely (impossible?) that your changes between snapshots will be the entire disk; most likely each snapshot will only need a few GB at most. In cases where you, say, install a 500GB game, that game only needs to be stored on disk once. The subsequent snapshots will point to the same game data; only the parts that change, such as a game update, will take up additional space on the disk.
As long as you aren't keeping all snapshots indefinitely, you aren't likely to see a significant cost increase when it comes to the storage required, and the benefit easily outweighs that cost!
StretchAcceptable881@reddit
It's also not a bad idea to have a recovery medium like a USB drive in the event you take an axe to your entire Linux system
DopeBoogie@reddit
Yes, btrfs snapshots should not be considered a "backup" because they are on the same disk as the source.
Btrfs snapshots are a great way to roll back mistakes or other issues that could bork your system. But you should always also be making backups to an external disk and a cloud storage.
As I mentioned later in this thread, I use kopia to do both of those. Everything I care about gets backed up to both an external disk and an object-based storage server with kopia separate from the built-in btrfs snapshot behavior.
Old-Adhesiveness-156@reddit
Does it still have a lot of data-eating bugs?
DopeBoogie@reddit
I haven't experienced any and btrfs is my go-to filesystem across several distros on many systems. I even use it on external disks.
aaronjamt@reddit
FWIW, I keep snapshots indefinitely and while I do notice running out of space occasionally, I just delete a couple old snapshots when that happens and it's fine. I did move my Steam library to a separate subvolume so it's not snapshotted, and that made a huge difference.
DuckDatum@reddit
Based on the last description, it sounds like snapshots store differences from older snapshots. Doesn’t that make the older snapshots a dependency for a full restore, even if using the latest one? How can you safely delete any of them?
aaronjamt@reddit
Btrfs is a COW (Copy-On-Write) filesystem, so every time you save a file, it writes the new data in a new location and updates the file to point to the new location. Each snapshot just points to where on disk the files were located at that point in time, and prevents those old copies from being deleted. As a result, multiple snapshots (and even the live subvolume) can actually point to the exact same data, so unchanged files don't have to be duplicated (giving incremental snapshot support). However, each snapshot still contains references to every file in its full, unmodified form, so snapshots aren't dependent on parent snapshots like with "traditional" incremental snapshots.
Deleting a snapshot just frees up its references, which means that its storage can be used for something else (unless it's still being used by another subvolume/snapshot). Any files also contained by other snapshots will still be kept, as they're still referenced, thus allowing them to be restored later.
If you're familiar with hardlinking files, just imagine that a snapshot just hardlinks all files at that point in time, and it's pretty much the same thing.
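For anyone who wants to poke at this themselves, a rough sketch of the commands involved (the snapshot paths are made up for the example):

```bash
# take a snapshot of the subvolume mounted at /home (read-write by default)
sudo btrfs subvolume snapshot /home /home/.snapshots/home-before-cleanup

# show how much data each snapshot holds exclusively vs. shares with others
sudo btrfs filesystem du -s /home/.snapshots/*

# deleting a snapshot only frees the extents that nothing else references
sudo btrfs subvolume delete /home/.snapshots/home-before-cleanup
```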
DopeBoogie@reddit
Yeah I also tend to keep mine indefinitely but I felt like if I didn't include that qualifier someone would have replied with a "Well actually..." about how it will eventually use up a lot of space if you keep them all indefinitely
MyraidChickenSlayer@reddit
Well actually
aaronjamt@reddit
Fair enough! By the way, do you back up your snapshots to another machine? I've been trying to come up with a good way to do that.
CmdrCollins@reddit
Btrfs has the ability to turn snapshots/subvolumes into datastreams via `send`/`receive`, and consequently tools automating that exist (eg btrbk).
DopeBoogie@reddit
Nah I just use kopia to do remote backups, never felt the need to backup my btrfs snapshots when the kopia ones seem better suited for that anyway.
aaronjamt@reddit
Fair enough! The reason I'm looking for a Btrfs-based solution is that tools like Kopia have to rescan all your data, which takes a while, kills battery life, and makes my laptop hot. Snapshots are instant and only store a diff, but I haven't been able to find a way to (reliably) back up incremental snapshots, especially with an unreliable wifi connection and such.
DopeBoogie@reddit
Yeah, I just figured with kopia I can set up custom ignore rules and only backup what I really need to.
It's actually very efficient, does rolling snapshots similar to btrfs, and it's written in Go.
For me at least it has a negligible impact on battery life, and the reduced storage and bandwidth from efficiency features like compression and rolling-hash snapshots are worth any minor cost to battery and data use. I suppose it does need to scan for changes, but it runs so quickly I imagine there must be some kind of caching or change monitoring going on in the background.
aaronjamt@reddit
Interesting, how much data do you back up with it? I have over a TB of data it has to back up, so maybe that's why it's slow for me.
DopeBoogie@reddit
My source disk is currently about 3TB full but I only actually back up about 50-60GB.
nakina4@reddit
I once deleted ffmpeg like an idiot on my system and thankfully had a snapshot from before I did that. I booted into it from grub and restored to it. I didn't really lose anything but my icons. I was able to download them again easily anyway lol. Btrfs is a lifesaver.
tjharman@reddit
Until BTRFS bugs out and then you've lost everything
stalecu@reddit
And that's what happens when you trust bootleg ZFS instead of using the real deal like FreeBSD and illumos do.
tjharman@reddit
bootleg ZFS?
anna_lynn_fection@reddit
Perfect "recovery" solution. As we don't want people getting the wrong idea that it's synonymous with "backup".
Senedoris@reddit
Unless you're someone like me who has different subvolumes for home and root, in which case... Good luck restoring a snapshot from the ~/.snapshots directory you just nuked
TheOneTrueTrench@reddit
Or you can use ZFS, which doesn't let you delete or fuck with snapshots outside of the zfs cli tool.
Berengal@reddit
The same is true for read-only btrfs snapshots, which is what you should be using for backups. Not just because it's a good idea, but because send/receive demands read-only snapshots. Btrfs read-write snapshots are analogous to zfs clones rather than snapshots.
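A rough sketch of what that looks like in practice (paths and mount points are assumptions, not a recipe):

```bash
# -r makes the snapshot read-only, which is what send/receive requires
sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)

# stream the read-only snapshot to another btrfs filesystem mounted at /mnt/backup
sudo btrfs send /home/.snapshots/home-$(date +%F) | sudo btrfs receive /mnt/backup
```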
TheOneTrueTrench@reddit
Don't know btrfs inside and out like I do ZFS, good to know
Berengal@reddit
Hopefully your snapshots are read-only (`btrfs subvolume snapshot -r`), which `sudo rm -rf` cannot touch. You need to explicitly `btrfs subvolume delete` them (or remove the read-only flag before deleting them).
gcu_vagarist@reddit
You should have a different subvolume for `/` and `/home`, not for `/` and `/home/$USER`.
Senedoris@reddit
Actually, you're very correct and that is how I have it set up. I got mixed up with my earlier comment. My home snapshots live in /home/.snapshots, not $HOME/.snapshots. In this case, snapshots would've helped, though instead of a ~ it could've been a /, so hopefully there's more than just snapshots :P.
Knopfmacher@reddit
Just create read-only snapshots with `btrfs subvolume snapshot -r`, then `rm -rf` won't be able to nuke them. Or even better, just use https://github.com/digint/btrbk and have your read-only snapshots somewhere in `/mnt/btr_pool` with an automatic deletion policy.
firewi@reddit
As an early adopter of Btrfs, seeing this with so many upvotes really makes me feel happy.
FlyingWrench70@reddit
Fixed it for you!
DopeBoogie@reddit
btrfs snapshots literally changed my life for the better. Now my Linux system is completely immune to my stupidity or busted updates or anything else that could go wrong.
Of course I still use Kopia to make backups to an external disk and a cloud fileserver just in case. But even since enabling (and understanding) btrfs snapshots I've never again encountered any issue (no matter how self-inflicted) that couldn't be solved by rolling back to an earlier snapshot.
Enip0@reddit
oh for sure, I was just addressing what the other commenter said
jiminiminimini@reddit
I actually don't have money for that. The best I can do is btrfs snapshots plus an external disk for important things.
fiyawerx@reddit
In this case, more like the 01111001-01101111-01101100-01101111 strat
Beautiful_Crab6670@reddit
laughing_gitlabs.jpg
FamousPassage9229@reddit
Saving backups and snapshots on an external drive would be a nice idea.
ContentPlatypus4528@reddit
I can't imagine using my home folder for anything other than just programs; I always store any data unrelated to a program on a second drive. There is an exception with single-drive devices though, like a laptop, handheld, etc.
Nikovash@reddit
sudo ctrl+z
stickymeowmeow@reddit
I had my external drive mounted to ~/ and learned a very valuable lesson running sudo rm -rf * thinking I had cd into another directory.
I learned two things:
1. Mount your external drive to /mnt (or anywhere other than ~/), and
2. cd is too risky just to save a couple keystrokes. Just type out the absolute file path in the commands.
kerridge@reddit
i always use tab autocomplete to confirm it's correct.
SheriffBartholomew@reddit
That's called setting yourself up for failure.
Lucas_F_A@reddit
I laughed more than I expected at this
Jojos_BA@reddit
Well, that's one reason why you shouldn't only have on-machine backups
indvs3@reddit
Or at least backups not saved in the same folder (or subfolder thereof) where you have all the power and permissions to do the dumbest things lol
At least that way you're only affected by hardware failure and extensive stupidity, in case you didn't learn from wiping your home folder the first time lol
http451@reddit
best I can do is `/etc/skel`
gosand@reddit
Cron (rsync script) that gets run at 3am to a separate internal drive. Occasionally rsync that to an external drive.
I have only needed it a few times, but handy to have. Out of sight, out of mind.
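For anyone wanting to copy that setup, a minimal sketch of the idea (user name, drive paths and schedule are placeholders):

```bash
# /etc/cron.d/home-backup: mirror ~ to a second internal drive at 3am
# -a preserves permissions/ownership/timestamps, --delete mirrors removals
0 3 * * * youruser rsync -a --delete /home/youruser/ /mnt/backupdrive/home-backup/
```

The occasional copy to an external drive is then just another rsync of the same tree:

```bash
rsync -a --delete /mnt/backupdrive/home-backup/ /mnt/external/home-backup/
```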
-Pelvis-@reddit
https://i.imgur.com/x928ndo.jpeg
Jethro_Tell@reddit
That’s why you should have backups, but no one does.
Phlink75@reddit
*Should
Nearby_Astronomer310@reddit
Bruh. Didn't even notice the absence of this word.
KeySir2841@reddit
I created a separate backup in a different partition with Windows using USBs
cyrixlord@reddit
yup, timeshift is a life saver, also I am always conscious about using sudo rm -rf and I think I have tried to avoid it at all costs lol
Gavekort@reddit
https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fa.pinatafarm.com%2F1210x799%2Fbb90789ee7%2Fpuppet-monkey-looking-away.jpg
Gavekort@reddit
https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fa.pinatafarm.com%2F1210x799%2Fbb90789ee7%2Fpuppet-monkey-looking-away.jpg
kman0@reddit
Copy/paste from AI, huh?
dbear496@reddit
I almost did something similar yesterday. I did `rm -r $variable/*` without checking that $variable was non-empty. Thankfully, $variable was not empty, and I was not sudo, but it still gave me a reality check.
Also, keep backups, my man. I use duplicity to back up to my home server. It has saved my bacon at least once.
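Bash has a built-in guard for exactly that failure mode; a small sketch with a made-up variable name:

```bash
# ${var:?message} aborts the expansion with an error if var is unset or empty,
# so an empty variable can never turn the command into "rm -r /*"
rm -r "${build_dir:?build_dir is not set}"/*
```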
Asterix_The_Gallic@reddit
I once did a remove from without a where :)
DutchOfBurdock@reddit
Power down that PC right now. Literally, pull the plug.
Boot up using a USB pendrive, preferably one with a usable live distribution. Install foremost in that live USB environment.
Read foremost's manpage.
Assuming your drive isn't encrypted, run foremost against your home partition. If it is encrypted, use dm-crypt to create the raw block device for your home partition, then run foremost against that.
This can recover a majority of file types.
stinger32@reddit
Welcome. Recovery is a process, but we all go through it at one time or another.
Finallyfast420@reddit
Alias rm to move to /tmp
luigi-fanboi@reddit
Photorec can fix this or could have if anyone had told you
Queasy-Objective3834@reddit
Your training is now complete Jedi
Tooligan13853@reddit
I’ve done that once. No backup. It was fun.
decay_cabaret@reddit
And this is exactly why I've got rm aliased to mv the target files and folders to ~/.local/Trash with a second alias 'trashman' that invokes the real rm to clear out the Trash folder.
I know this isn't helpful after the fact for your current dilemma as it doesn't unfuck your files, but it's a suggestion for how to keep it from happening again in 2058.
JaggedMetalOs@reddit
That reminds me of the time running some C# app under mono created a folder named ~ in the current dir, which I tried to delete... Thankfully this was a server with recent backups!
FortuneAcceptable925@reddit
I always do this when unsure:
GlumWoodpecker@reddit
Just use single quotes when removing; wrapping arguments in single quotes keeps anything within from being expanded by the shell. `rm '~'` will delete a file named `~` and won't touch your home directory.
morafresa@reddit
Does that work on zsh as well?
GlumWoodpecker@reddit
I only use Bash, so I have no idea. You can try `echo '~'` though and see whether it expands.
6e1a08c8047143c6869@reddit
Or `rm ./~`.
GlumWoodpecker@reddit
For this specific instance, yes. Single quotes work for all files that may contain a shell-expandable expression. Consider if you had a file named `$HOME`: if you tried `rm ./$HOME`, the shell would expand that to `rm .//home/user`, whilst the single quotes would preserve the dollar sign.
SweetBabyAlaska@reddit
you could also escape the character with a backslash like \$HOME
but this is also why I just use tab expansion in zsh or similar, because try typing an emoji character or other unicode-named file (like Chinese) in the framebuffer or terminal alone; it's basically impossible. It's not something that happens often, but still.
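An easy way to see what the shell will actually hand to rm before running anything destructive, sketched with a throwaway file:

```bash
touch ./'$HOME'      # create a file literally named $HOME in the current dir

echo rm ./$HOME      # prints: rm .//home/you   (the variable was expanded)
echo rm './$HOME'    # prints: rm ./$HOME       (single quotes preserved it)
echo rm ./\$HOME     # the backslash escape works too

rm './$HOME'         # removes only the oddly named file
```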
ipaqmaster@reddit
This is imo the safer play. `./` is the go-to safety prefix for dealing with removing strangely named paths.
imtoowhiteandnerdy@reddit
sloothor@reddit
$ rm /
imtoowhiteandnerdy@reddit
Well you got me there.
RedSquirrelFtw@reddit
I've run into the same thing, had a bash script that I messed something in it and it created an actual folder called ~. What I ended up doing is using rmdir, because that would fail on the home directory as it has files in it.
ke151@reddit
Also, when in doubt you can use `ls` or something non-destructive in place of `rm` first. And `rm -i` to positively confirm each deletion for an extra final check.
ThreeCharsAtLeast@reddit
https://chaos.social/@feliks/114568018939671388
__nohope@reddit
mypetocean@reddit
‼️
RonHarrods@reddit
Had this recently. I was sweating and shaking trying to get rid of it. But it was Claude code who made it.
sgilles@reddit
Ouch, that's nasty.
LavishnessCapital380@reddit
"hey google, sudo rm -rf \~"
Not_An_Archer@reddit
I can't fathom how anyone could do this "by mistake"
lolxnn@reddit
alias rm='rm -i'
MoneyFoundation@reddit
What if you have to delete a large directory?
bullwinkle8088@reddit
That is an out of the box default for Red Hat and Fedora installations. Likely their derivatives as well but I haven't checked in a few years.
Ironically it's one of the first things that will annoy me and that I revert to default behavior.
shogun77777777@reddit
Damn this is a great tip. I’ll do this
BK_Rich@reddit
Except for the one time you do it on the system where you didn’t set the alias.
shogun77777777@reddit
Oof good point
linkslice@reddit
I once did a recursive chmod on /usr and removed the execute bit because I thought I was in a different directory.
PsychologicalWeird39@reddit
Dumbass 😭😭🙏
raiksaa@reddit
Nobody runs sudo rm -rf ~ by mistake. Low quality bait.
Parasyn@reddit
Yeahh that is pretty far out. Worst that ever happened to me was, tired as hell, I opened up the wrong terminal and ran `rm -rf *` in my home directory. Thankfully I had made a backup about an hour prior so it wasn't a hassle, but running `rm -rf ~` is pretty wild lol
Parasyn@reddit
Well if it makes you feel any better, I accidentally did this while ripping a SHIT ton of my old family DVD's. I'm talking 25 year old media that was a pain in the ass to get off with DDrescue and other tools. Spent a ton of time finding PACK headers for corrupted media to remove corrupted bytes, archiving, mastering, reencoding, etc.
All gone in one fell swoop of `rm -rf *` in my home directory... thankfully I had just made a backup 2 discs prior so it really wasn't a big deal. As others have said, this is why we make backups.
BawsDeep87@reddit
Depends on your shell; some will ask if you are really that stupid (I think starship does). There are also some packages that add a recycle bin to rm -rf (if you alias it). Also, not meant as an insult: I've used Linux for 15 years and also wiped /home once by accident.
ProposalAlone9467@reddit
Don't drink and root !
Ur_Local_Milk@reddit
you are not cooked, you are deep fried
zyzmog@reddit
Comfort yourself that it could have been worse:
sudo rm -rf /
DeKwaak@reddit
That works only on non gnu systems. Someone did it on AIX and he got a week to reinstall that system.
morganb298@reddit
Wonder if you can use testdisk and get it back
Unusual-Amount5809@reddit
Ctrl+C, try to back up your data, try to make a bootable stick and then "systemctl reboot" and... good luck ahead
Inevitable-One9782@reddit
Bro saw the post and used the command haha
orthomonas@reddit
Shoutout to things like `trash-cli` which come with unloaded footguns.
_smaugE@reddit
"By mistake" really? Did you conform your root passwd?
billyg599@reddit
It's the perfect time to learn about backups
Several-Fly8899@reddit
I once did rm -rf /etc in what I thought was a chroot'd environment. Boy was that a fun time.
Anonymous_006@reddit
Once I did it with " ../ " instead of " ./ ".
passiverolex@reddit
I mean at least try it in a virtual machine first lol tf
Kirsle@reddit
I one time did something similar but with the chmod command.
It was in my early Linux days, and I had an external hard drive formatted FAT32 for my backups and common files shared with Windows (as NTFS was still 'experimental' at the time and I didn't want that risk). When I restored files off of FAT32, they were all chmod 777 on Linux and I wanted to fix their permissions.
What I meant to run:
chmod -R 644 .
What I actually ran:
chmod -R 644 /
I was running it as root too for some dumb reason, and it wasn't until I got "permission errors" under /proc and things that I realized my mistake. By then, everything in /bin and /usr/bin had lost its execute permission, so the `chmod` command itself wouldn't work again. And anyway, all the nuanced permissions of things under /etc were hosed, so things that needed to be 600 etc. would cause problems if I tried to 'fix' all the permissions back again.
I just cut my losses and reinstalled my distro from scratch.
Jaydeniscooolo@reddit
I did the same, had to reinstall windows
PureBuy4884@reddit
i’m actually not too scared of running this. pretty much all of the code that i care about is tracked by git and on github already. And application config is hardened due to using NixOS + Home Manager. So literally all of my home directory is recoverable in a functional way.
I think the only thing I’m worried about losing is documents and images, though i don’t have much of those rn.
tamachine-dg@reddit
How did you manage that? Personally, I have to triple check every command I run with rm
Defiant-Flounder-368@reddit
/ is quite close to <enter>
quicksand8917@reddit
Which is why the -rf should always be at the end of the command.
Flamak@reddit
He deleted his home directory not root. I find it hard to believe seeing as i cant think of a reason you'd type that out
squeeby@reddit
rm does actually warn you if you try to stupid this hard nowadays.
iamarealhuman4real@reddit
Wow true I just did `sudo rm -rf /` and it warned me. That's helpful.
imbannedanyway69@reddit
Bruh wtf do you dangle your child out a window saying "look I have a good grip!" too? /s
squeeby@reddit
eeehee
imbannedanyway69@reddit
That's ignoraaaant
black-wolf-76@reddit
I envy your confidence
gex80@reddit
Docker containers and VMs are a thing.
black-wolf-76@reddit
Buzz kill
Reetpeteet@reddit
Too many jokesters on Discord, IRC etc throughout the decades. Guess they finally built in the foot-gun protection.
wRAR_@reddit
The protection predates Discord by about 10 years.
Eremitt-thats-hermit@reddit
That’s some dangerous experimenting you do
fearless-fossa@reddit
Not in the world of virtual machines and snapshots.
althalusian@reddit
I once did sudo rm -rf / tmp (with space after the slash)… It didn’t give a warning. But luckily that was a virtual machine with a few days older snapshot taken, so not that much was lost.
__konrad@reddit
Ironically, the only command without --dry-run option
-Sa-Kage-@reddit
You got me thinking, as `rm ~` would just not work for 2 reasons:
So in order to actually remove everything in your home you either had to do `rm -r ~` with sudo rights (why would one sudo in their own home?) or `rm -r ~/*`
Careful-Major3059@reddit
must be a different keyboard because this is not the case at all for me
repocin@reddit
Pretty sure it's right next to enter on US layouts.
On my Swedish keyboard it's shift+7, which is just about impossible to accidentally input.
hexsudo@reddit
Depends on keyboard lol...
Mooks79@reddit
Yeah, ISO here thankfully. Not made that mistake - yet.
pppjurac@reddit
Because it did not happen. Just post for karma.
This subreddit is becoming more of a trash and garbage dump day by day.
TheOneTrueTrench@reddit
I mean, I run with snapshots every 5 minutes, and the snapshots are backed up to my backup server every 15 minutes.
I'll run `sudo rm -rf --no-preserve-root /` to prove a point, or scare the shit out of a sysadmin looking over my shoulder lol
-Sa-Kage-@reddit
I guess a similar/same keyboard layout to the one I have (German).
Here ~ is right next to the Enter key, so if you mishit a bit, you might accidentally press enter when using ~
-Sa-Kage-@reddit
For those worried now:
You could make a script that always asks if you are sure before actually executing rm, and alias the script to rm
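Something along those lines, roughly; it's only a sketch (the script name and prompt text are made up), and it obviously won't help on a box where it isn't installed:

```bash
#!/usr/bin/env bash
# Hypothetical ~/bin/saferm: confirm before handing everything to the real rm.
# Pair it with: alias rm='saferm'
printf 'About to run: rm %s\n' "$*"
read -r -p 'Are you sure? [y/N] ' answer
case "$answer" in
    y|Y) command rm "$@" ;;   # "command" skips aliases/functions and runs the real rm
    *)   echo 'Aborted.' ;;
esac
```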
BreakerOfModpacks@reddit
On the plus side, you no longer have the French language pack.
Puzzled_Minute_7387@reddit
How did you manage to do this? This is my worst fear. It is why I NEVER use the "rm" command!!!
Never_existed__@reddit
You're cooked! Reminds me of when I removed the bin folder from my server.
The_SniperYT@reddit
You need to do some data recovery with forensic tools. There are tutorials and you will find every tool in Kali live
h4rd0n@reddit
Hahahahahahahaha
Hot_Reputation_1421@reddit
What distro are you running? Usually, there is protection against this.
daffalaxia@reddit
I've seen the recommendation before to alias `rm` to `rm -I` (or, if you want to be more careful, `rm -i`) to circumvent this. Personally, I triple-check commands like this before running them, but also prefer to trim files from, eg, dolphin, where I can (a) get them back from trash and (b) have to select files (and can double-check the selection). After deleting stuff, I usually double-check that that's what I wanted to do, and then empty trash.
Perhaps another option would be to use the trash mechanism from the cli - ie, create a script like this ( https://www.stefaanlippens.net/moving-files-to-the-kde-trash-can-from-the-command-line.html ) and get into the habit of doing `totrash` over `rm `. Anyhoo, just ideas.
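If you'd rather not depend on the script from that link, a tiny stand-in (assuming `gio`, which ships with GLib, is available) does roughly the same job:

```bash
# sketch of a totrash helper: move files to the desktop trash instead of deleting
totrash() {
    gio trash "$@"
}

# usage
totrash old-notes.txt scratch-dir/
```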
1v5me@reddit
I did it on a Novell server in production back in the day. I didn't expect it to actually delete anything, but big surprise, it wiped everything, and the server kept running like nothing had happened. We did call in an expert to help us fix the, erhm, issue, and with the help of molokov commander or whatever it was called, we did manage to copy all the system files from another Novell server, and voilà, we rebooted and everything worked :)
Mister_Magister@reddit
My current installation has:
I got so many copies. If you still have 0, you should rethink your life choices. No mirror and no backup is literally gambling with time. It's just a matter of time before you lose data.
GrabbenD@reddit
How are you reliably handling the automation though?
My biggest difficulty with automating backups is the risk of synchronising bad data/state. For example, potential bitflips or, foremost, having unknowingly deleted a file and then not noticing it's missing in all recent backups until it's too late (after old ones get pruned)
slickyeat@reddit
I have a cron job that rsyncs the data onto a separate drive. Both the src and dst have their own sets of snapshots, so it's mostly for handling worst-case scenarios.
Mister_Magister@reddit
> My biggest difficulty with automating backups is the risk of synchronising bad data/state. For example, potential bitflips or, foremost, having unknowingly deleted a file and then not noticing it's missing in all recent backups until it's too late (after old ones get pruned)
zfs takes care of that for me. Then the servers have ECC. If there were a bitflip, scrub will catch it
You can also, for example, make snapshots every day and keep only the last 3 days. Or you can configure the backup to have snapshots from a month ago, a week ago, and from yesterday.
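The manual version of that policy is only a couple of zfs commands; a sketch with an assumed pool/dataset name (tools like sanoid automate the rotation):

```bash
# take today's snapshot of the home dataset
sudo zfs snapshot tank/home@daily-$(date +%F)

# list existing snapshots, oldest first
zfs list -t snapshot -o name,used -s creation tank/home

# drop one that has aged out of the retention window
sudo zfs destroy tank/home@daily-2024-01-01
```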
GrabbenD@reddit
The theory makes sense but in practice I find there's a missing step.
Imagine you've deleted a nested document that is accessed once yearly. If the data is synchronised automatically (deduped/reflinked, scrubbed and pruned periodically, since you might not want to keep backing up purposefully deleted files forever), you'd only discover the data loss the next time you try to find this document.
In other words:
- How do you review the backup transactions to ensure you're backing up the correct state?
- Which tools would accomplish this?
I've been considering this setup:
- Syncthing to automatically back up files from Android (media) + Linux desktop (documents).
- Git Annex to track incremental changes of state.
- 2 different (COW) filesystems to hold the data. Either BTRFS + ZFS, or maybe LVM with BTRFS + XFS to avoid relying on a single filesystem.
Mister_Magister@reddit
Or more precisely, you're confusing backup with archival storage. Backup is to be used in case of catastrophic failure, or if you drop the database. It's not meant to be used to recover that one document that you might have deleted while sleepwalking. That's archival storage. You can use tape storage for that.
GrabbenD@reddit
That sounds right! I'm looking for a complete architecture that doubles as a Active Backup and Archive.
Next roadblock in this puzzle would be multiseat usage. Something which multiple family members could use, meaning:
- $HOME on multiple Linux computers
- Media storage from Android phones
- Nested homelab environments (LXC/VMs/Podman Volumes)
All of these concerns and design choices make me wonder if there isn't already a platform that has solved all of these problems?
Mister_Magister@reddit
Here's what I would do
Have one main server with RAID via zfs. Back up everything to it, and also have it host storage for family members. Set up snapshots like I've described in the other comment, so that you get daily, weekly, monthly and yearly snapshots.
>All of these concerns and design choices make me wonder if there isn't already a platform that has solved all of these problems
I am literally doing it? And I've described how to do it to you
GrabbenD@reddit
Thanks!
Apart from ZFS, which software would be used to assemble all the necessary parts like automation bits / the entire "backup workflow"? E.g. cronjobs?
Mister_Magister@reddit
systemd timers > cron, that's the first thing
as to making snapshots… I had to grep the logs because I didn't remember the name of the software I was recommended, but I'm not yet using it: https://github.com/jimsalterjrs/sanoid
as to automatic backup from hosts, I use ansible that in turn uses rsync, but there's a multitude of solutions for sending data to a NAS, so just pick one
Mister_Magister@reddit
If I were you, I would just have a server with zfs on it, and then use some software (I forgot the name of the specific one) to keep progressive snapshots: daily backups, a monthly one, and then a year-old snapshot. So like, snapshots are made daily. Then a 7-day-old snapshot gets promoted to weekly. Then if that weekly snapshot gets a month old it gets promoted to monthly, and if that gets a year old it gets promoted to yearly.
Mister_Magister@reddit
>deduped/reflinked, scrubbed and pruned periodically
I don't do that
>You'd only discover the data loss the next time you try to find this document due to automation.
sure, but I've not run into such scenario where i would be sleepwalking and delete something by accident without knowing i deleted it. But then you can just keep year old snapshot
>How do you review the backup transactions to ensure you're backing up the correct state?
I don't. It is always correct state.
>Which tools would accomplish this
there's plenty of tools you can use
>2 different (COW) filesystems to hold the data. Either BTRFS + ZFS, or maybe LVM with BTRFS + XFS to avoid relying on a single filesystem
thats pointless overkill. Just btrfs or just zfs will do
0mnipresentz@reddit
He doesn’t. Everything is backing up at different times. Whenever things go to shit, he finds the most recent backup in his chain of backups and goes with that one
Mister_Magister@reddit
correct
northparkbv@reddit
Take your meds
Mister_Magister@reddit
thanks for reminder
Sewesakehout@reddit
What you on? Any benzos?
Mister_Magister@reddit
hormones
Sewesakehout@reddit
Ah too bad, was hoping for vallium squad to show up
Mister_Magister@reddit
No I'm not actually insane. This is normal 3-2-1 backup solution
Camo138@reddit
Sounds like work. I just dumped it in an S3 bucket and called it good.
Mister_Magister@reddit
sure you can do that, but I like owning my data. And it's SIGNIFICANTLY cheaper in the long run
Camo138@reddit
A bunch of Linux ISOs I don't need. But between the data on OneDrive I can't access and the S3 box, it's good. My documents, which I need the most, are synced with Syncthing to all devices.
SpoilerAvoidingAcct@reddit
Lmfao
fk00@reddit
I learned that trick a while ago: prepend your destructive or dangerous command with `ls`, execute the command and make sure it does what you want. Then just replace it with `rm`. Works for ad-hoc use only, but can save your files.
Witty-Stand8197@reddit
Did curiosity get the best of you?
philpirj@reddit
Such a stupid mistake!
`sudo` was completely unnecessary. You can omit it next time.
Private_Bug@reddit
I hope it’s not the same installation as the one you used in 2002
Przem90@reddit
Still better than what I did once. I was in my Linux curiosity days, you know, everyone has them from time to time. Checking "what if", etc. So, having been sure I had my last snapshot saved a few hours ago, I issued the classic command you had used. Guess what? The snapshot did not copy to the NAS due to lack of space, which I did not check beforehand. So I happily destroyed my system and about a month's worth of other work. Silly me.
unevoljitelj@reddit
You cant do that by mistake 😅
sudoDark@reddit
Create a diskimage and restore 😅
http://www.sleuthkit.org/sleuthkit/man/fls.html
CarloWood@reddit
Last time I did that (delete my home directory with 50,000 files) I wrote ext3grep and recovered everything. My trick was to use the journal to recover the deleted inode tables, something nobody had thought of before... This idea was later ported to another tool to work with ext4. I don't know if anything like it exists for other filesystems; it might depend on how files are deleted on that fs.
SupersonicSpitfire@reddit
On every new system I am on...
alias rm="rm -i"
alias mv="mv -i"
clumsoz@reddit
Once I made a copy of some file in /etc at ~/etc. And after I was done with work I removed /etc instead of ~/etc. I lost SSH access to our switch. I told this to my senior and he started laughing.
lilPellegrino@reddit
Accidentally ran rmdir /q /s C:\Users the other day lol
long_trailer@reddit
I have write #rm ./ document/picacyuu.pic.
long_trailer@reddit
So I learned home folder permission with tears. But Bash is running on cliff not knowing reference /src/bash. Thanks Linux.
long_trailer@reddit
May be mistake .file to terminate. No use.
Responsible-Loan6812@reddit
I deleted my home content in the past, and now I always perform two-step deletion in the command line: rename the file/dir with a suffix, and execute `rm` with tab completion (⭾⭾) to confirm that the suffix appears correctly.
Helmic@reddit
I also highly recommend not using `rm`. Use `trash`, and alias `rm` to `trash`. There are very few situations where you need to delete something right this instant without any confirmation prompt; the vast majority of the time you'll be just as fine putting it into the trash folder to be permanently deleted once you actually need the room on your disk and you're certain you're not going to be missing any important data.
SRART25@reddit
And if you really do, \rm will ignore the alias.
Big-Equivalent1053@reddit
Bro, I ran make outside of a project on MinGW and it deleted /dev/sda0 through 9. I reinstalled, it failed again, and I decided to go with MSVC after many bugs.
Classic-Rate-5104@reddit
Everyone makes mistakes. The only option you have is restoring your last snapshot or backup
eras@reddit
Well it's not the only option, but it is the best option. Plenty of data recovery tools around.
ipaqmaster@reddit
photorec is a great one as a second last resort for recovering important files/documents.
With the absolute last resort being an expensive file recovery company. In the case of a hardware failure they may be able to frankenstein a working drive for you out of donor parts for the working pieces of your original broken drive - in those cases you get to keep the directory structure and filenames all still organized.
MrNokiaUser@reddit
second this.... i've recovered all sorts of shit with photorec. absolutely love it. the latest being multiple photos i accidentally lost of a zoo in prague!
SheriffBartholomew@reddit
Right. They could finish the job with `rm -rf /`
0x1f606@reddit
"Shit, there goes my home folder. In for a penny..."
3vi1@reddit
If he immediately powered off the machine he could also boot from a USB and undelete with testdisk. Restoring is easier though.
hmz-x@reddit
The caveat is that if ~ took up a significant part of your storage, recovering with testdisk may overwrite some of it and corrupt the rest.
Source: been there done that
Alduish@reddit
A solution would be to use dd to make an image of the disk and only use testdisk on the image of the disk but it also requires another disk as big as the first.
WokeBriton@reddit
If the data is important, the individual needs to make sure they have this spare disk.
TheOneTrueTrench@reddit
If the data is important, the person needs to have backups...
WokeBriton@reddit
Needs to, definitely.
Whether they have or not is a separate question.
TheOneTrueTrench@reddit
Fun fact, my parents kinda learned this lesson 2 weeks ago, my dad's laptop crashed and burned, wouldn't power on, nothing.
He was panicking and called me, telling me his computer and all his data was lost.
He was immediately relieved and a bit amazed.
Also, my roommates are all also on the same system, and someone stole one of the laptops I gave them while they were at a coffee shop. A 6th Gen i3 trashbook, so not a physical machine I care about. They came home frantic that all their data was gone.
I just revoked the cert, restored the most recent backup to another trashbook, and issued another backup cert.
The things we can do with decent backup systems on Linux are astonishing.
hmz-x@reddit
Anyone with that level of Linux knowledge would probably not put themselves in that position, I think. But very sensible solution, though.
Menem_Intergalactico@reddit
That's true, been there, done that. And that's why file recoveries need to be saved to external storage.
I don't remember if photorec warns about that.
ipaqmaster@reddit
til testdisk has an undelete function
Alduish@reddit
That's assuming the partition wasn't encrypted, but if it's not then it's an option to try as last resort.
-Sa-Kage-@reddit
Couldn't you still use those tools opening an encrypted drive read-only?
Alduish@reddit
Wait maybe I'm stupid and I'm the one making assumptions, I'd need to check.
UnLeashDemon@reddit
I mistakenly saved a file as ~/.config instead of ~/.config/file. I still cringe at that.
GamerXP27@reddit
Ah damn man F
maaz@reddit
what are the genuine use cases for -f, assuming you have no permissions conflicts
Dangerous_Region1682@reddit
Backups, locally on another drive under /mnt, on a RAID system on the network, on the cloud, and weekly to a thumb drive for important things in my bank deposit box. I saw a grown man cry after his mainframe data center flooded in the basement of his company.
stubborn_george@reddit
Some people make backups, some people will be making backups
TabTwo0711@reddit
No one wants backups, all they want is restores
stubborn_george@reddit
Or... Backups which cannot be restored are not backups at all
AirTuna@reddit
Naturally. They're just Backups In Training until you test them.
TabTwo0711@reddit
The problem is, if you have large interconnected systems, it’s very hard if not impossible to test a restore. Introducing old data to prod is asking for real trouble and restoring to a test server might introduce duplicate system ids if not contained 100%. Especially when you add SAAS to the landscape and this vendor is only able to restore to the source of the backup
AirTuna@reddit
That's why the only safe way to do a full DR-style restore "test" is in a truly isolated environment. As in, network has to be 100% isolated from your production environment.
Sadly, as you mentioned and also implied, the more dependent our "stacks" become upon disparate (especially third-party vendor) systems, the harder a true "DR" test restore becomes. :-(
TabTwo0711@reddit
A friend of mine worked at a financial company who were forced by auditors to do a failover to their secondary systems to prove they were working. The failover worked. But then they realized they couldn't switch back due to some details they didn't think about. It took two years to switch back to primary.
basil_not_the_plant@reddit
Every online backup software guide.
How to do backups:
Example 1
Example 2
Example n
How to do restores:
The end
__nohope@reddit
https://i.imgur.com/ihA7y6b.jpeg
twitterfluechtling@reddit
Exactly! I have a NAS and rsync the relevant folders frequently! It's mounted on login, the script runs automatically, I just mount it to ~/mnt/backup and ... aaaand ... Oh, shoot!!!
Ciwan1859@reddit
Serious question. OP’s scenario is scary. How do I backup? What do I backup? How easy is the restore process?
TheWorldIsNotOkay@reddit
Get an external hard drive, and periodically copy all important data to it. Alternatively, set up a NAS (Network-Attached Storage, basically a storage drive that can be accessed by other devices on the network) and schedule regular backups of important directories with `rsync` (or using a GUI front-end like GRsync or GAdmin-Rsync if you're not comfortable using the terminal).
What's important to you? For starters, you'll probably want to back up your `/home` directory, since unless you're doing something weird that will include all of your Downloads, Documents, Music, Videos, and Photos, as well as the configuration files for the software you use.
Everything in Linux is a file, so... it's usually as simple as copying the files back over. If you back up your entire `/home` directory to a Linux-friendly filesystem (like Ext3 or Btrfs, but not NTFS or FAT), then all of the file ownerships and permissions should be kept intact, so if you have to reinstall your entire OS you can just copy your `/home` directory backup to the new `/home` directory. If you don't use a filesystem that retains the Linux file ownership and permissions, then you can still copy things over, but you'd just need to make sure you reset those permissions.
(Though if nothing is wrong with your current `/home` directory and you just want to install a different version of Linux, you can usually just choose to keep your current `/home` directory intact during the installation process. Backups are for when the data in the primary location is lost or corrupted.)
SheriffBartholomew@reddit
Install Vorta and Borg. Specify your root path as the source and a different drive as the destination. Pick common exclusion rules from the settings options. Run the thing.
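For the terminal-inclined, the Borg side of that is only a few commands; a sketch with placeholder paths:

```bash
# one-time: create an encrypted repository on the destination drive
borg init --encryption=repokey /mnt/backupdrive/borg-repo

# each run: archive the home directory, skipping caches
borg create --exclude '*/.cache' /mnt/backupdrive/borg-repo::home-{now} ~

# thin out old archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backupdrive/borg-repo
```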
elmagio@reddit
It depends, what are you most afraid of?
Having to reinstall your OS > Set up with Btrfs snapshots or something like that.
Losing precious files you can't easily replace > Use tools like Déjà Dup to set up backups of those important folders (pictures, documents, ...) to an external drive and/or to the cloud, and run that regularly.
Restoring files should always be pretty easy; restoring the system will depend on how much it got borked up.
Fun-Dragonfly-4166@reddit
I use immutability. Everything is generated from a script. I can just regenerate it. There is no need to backup home.
Hellraiser1605@reddit
That’s insanely funny. Yeah, I had to learn that the hard way too …
stubborn_george@reddit
Want more fun? Remove the French pack, as always, with rm -fr /
Comfortable_Swim_380@reddit
Bonjour means goodbye also.
ansibleloop@reddit
Yeah this isn't something you repeat
Backups of data that can't be replaced
Snapshots of the rest
did_i_or_didnt_i@reddit
try ‘rm -rf /‘
/s
Ikinoki@reddit
For the sake of god do snapshots.
Crissup@reddit
I took a snapshot of mine. It’s in my photos folder. What do I do with it now? :p
OhMyTechticlesHurts@reddit
I ran "sudo rm -rf /" one time because / is so close to . On a keyboard And I have fat fingers. Took me 30 seconds to realize why it was taking so damn long to run. Learned to do absolute paths only from then on.
PrettySlickJohn@reddit
Every power user has done it once. I'd go as far as to say you are not yet a power user until you have.
In DOS, my nervous habit was to type "del . " and backspace through it. It started as a joke but I kept doing it.
And it wasn't even that which caused my wiping of a drive, it was an evil batch file that was supposed to unzip an archive, move some files to be loaded later, then delete all in the temp folder.. It got looped, and with no zip files walked up the directory tree wiping everything.
Norton undelete failed. Once you undelete one wrong file, it's game over man.
drucifer82@reddit
But…don’t you have to enter your password, and confirm that you really want to do that? Or is mine the only Linux system that tries to stop you from doing that?
thisisnotmynicknam@reddit
I've made this mistake before. I was supposed to run `rm -rf ./<dirname>/*` but I ran `rm -rf /*`.
sswam@reddit
I alias rm to a tool which moves stuff to ~/rubbish. Saved me from mistakes lots of times. If I want to use the real rm, I must type "command rm", which makes me take it more seriously. And, backups, of course.
jajajajaj@reddit
# (type ambiguously dangerous part) (complete correctly) (read it) [home] [del] [enter]
jajajajaj@reddit
bonus:
Suppose you were writing a long command, and realize you need to review something before you'd know how to end it. You can just enter the comment-ed command line into the history, then pick up where you left off, later. (CTRL-R then type some distinct part of what you'd done, before)
theng@reddit
depends on your binds but generally:
`<ctrl+a><ctrl+d>` is equivalent to `<home><suppr/del to the right>`
I find it easier to type
Skyhighatrist@reddit
`<ctrl+a><ctrl+k>` usually moves home then deletes to the end of the line. `<alt+d>` does one word at a time.
jaredw@reddit
Control +d is exit usually. If there's text it deletes?
theng@reddit
ah indeed haha, it's alt = meta
Ok_Exchange4707@reddit
I'm confused. Which home was nuked here? /root or /home/user? If /root was deleted, then I don't see much of a loss.
Interesting-Jicama67@reddit
sudo runs the command as root but preserves the user environment variables, so it nuked the home directory of the user, not root's home. P.S. I rechecked this on my server with `sudo echo ~`
theng@reddit
the command typed as the user (you can also sudo while in root but I choose to believe it wasn't the case) was:
sudo rm -rf ~
`~` is shell syntax sugar and is evaluated just after you push enter, and before the program is launched
hmz-x@reddit
~ is normally /home/yourusername or equivalent.
Naviios@reddit
run sudo rm -rf / to undo
Genoskill@reddit
this will remove everything from your partition.
duck_butter@reddit
I never say stuff like this!
YOU ARE A FUCKING ASSHOLE EVEN JOKING ABOUT THAT.
yahmumm@reddit
Can confirm. This worked for me
Educational_Sun_8813@reddit
when you finish the reinstallation I recommend aliasing rm to rm -i for next time ;)
FunnyLizardExplorer@reddit
🤦♂️
Trick-Host-4938@reddit
You're cooked...
Loose-Committee6665@reddit
You're cooked if you haven't made a backup.
Sad-Resource-873@reddit
I've heard it can break the system, but what does it actually do?
Thick_You2502@reddit
rm = remove = delete
-r = recursive = delete inside all directories
-f = force = ignore warnings and assume yes
Examples: `rm -rf ~` deletes the home directory of $USER; `rm -rf /` deletes everything.
trusterx@reddit
Urban legends. Arm won't execute rm -fr /
Instead you'll get a warning like this: rm of / is not allowed
Thick_You2502@reddit
because arm is well designed. 🤭
trusterx@reddit
Aaah f*** android autocorrect 🤣
MountfordDr@reddit
Depends on where you execute the command and whether you have elevated or the right permissions to delete the files. If you ran it in a system directory or worse, in / as superuser then yes, you will destroy your system as everything gets deleted from the point where you executed it, otherwise it is quick way to remove a populated directory. Generally emptying your home directory like what the OP did is recoverable with the obvious loss of any data. There are tools like testdisk to recover files but it is a phenomenal pain to use, probably as a last resort.
Either-Winner-1965@reddit
Good luck mate
longdarkfantasy@reddit
This is the reason I installed safe-rm by François Marier: https://launchpad.net/safe-rm
I also ran the same command as yours before, because I accidentally created a folder named `~` as the root user, then used sudo rm to remove it, and didn't notice that it took a little bit too long to remove. Until I saw my Firefox UI had changed. Luckily I had already backed up my config.
hugewhammo@reddit
i did that for fun on a test install years ago! worked exactly like i expected!
root@test:/# rm -rfv *
fun! :)
spin81@reddit
A former coworker did this. He'd accidentally created a file called `~` and wanted to delete it, resulting in a whole new, and worse, accident.
Cool-Walk5990@reddit
next time use `rm -fri` ;)
Helmic@reddit
I would rather use `trash` and avoid `rm` entirely. It avoids the need for an interactive prompt (saving time) but also avoids mistakes being irreversible. In the vast majority of circumstances there's no reason to be using `rm` directly in a terminal emulator, and in that tiny minority of cases it's worth `rm` not being in your muscle memory, so that you know to treat it with respect as the scary command it actually is.
Alduish@reddit
Oh my god I had never heard of this option, thank you, I'm definitely gonna add it as an alias
alimnaqvi@reddit
If using GNOME, you can alias `rm` to `gio trash`, which moves the "deleted" item to a specific trash directory instead of tossing it into the void. You can choose, e.g., a 30 day period after which the trashed item will automatically be fully deleted. I personally like to keep `rm` non-aliased and use the alias `trash` for `gio trash`.
Alduish@reddit
That's a good tip, thank you.
I don't use GNOME but I'm pretty sure if I search a bit it could apply to other DEs.
CmdrCollins@reddit
`gio` is part of GTK (not GNOME) and thus probably installed on your system anyways.
Alduish@reddit
Oh ok, from how it was phrased I thought it would be part of nautilus
lego_not_legos@reddit
I type the `rm -i` first, out of paranoia, but often prefer `find dir/ -delete`. Then it's non-destructive until you add the last param, and you can run it without it to see what will be removed. When you do run it to delete, insert a space at the beginning so it's not added to your history.
Fit_Smoke8080@reddit
Too bad the -delete flag isn't totally widespread; it's not available in some old systems.
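In case it helps anyone picture the preview-then-delete pattern, a sketch with made-up paths:

```bash
# dry run: the same find command without -delete just prints what matches
find build/ -name '*.o'

# append -delete (it has to come after the tests) to actually remove the matches;
# a leading space keeps the destructive line out of history when HISTCONTROL
# includes "ignorespace"
 find build/ -name '*.o' -delete
```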
dagbrown@reddit
It's okay, grandpa, you can relax. Solaris is deader than dead now.
Lucas_F_A@reddit
That's cool
someweirdbanana@reddit
And `/` instead of `~`
LordSkummel@reddit
I can't remember any distro in the last 10 years that let you run that without --no-preserve-root.
aaronryder773@reddit
This should be like alias by default
campbellm@reddit
And you come to depend on that behavior until you run into a system that doesn't have it.
vim_deezel@reddit
yep, better to keep backups or at least something like snapper
ClashOrCrashman@reddit
doesn't -i negate -f?
Cool-Walk5990@reddit
Not if you append `-i` at the end
wRAR_@reddit
That's what they meant.
So why did you suggest passing `-f`?
theng@reddit
I think it's order dependant: the latter is the one used
gochomoe@reddit
How do you do that “accidentally”?
jaimefrites@reddit
a. install trash-cli
b. alias tp=trash-put
c. alias rm='echo use tp'
d. use `tp -rf`
EngineerTrue5658@reddit
I did that with my downloads folder once. It really sucks.
Interesting-Jicama67@reddit
Use testdisk for recovery. If it doesn't find anything: Ctrl+Alt+F4, log in as root, mkdir /home/username, reboot. Your home directory for your user account is deleted, so just make a new one.
oneesan_with_van@reddit
Alias ls = sudo rm -rf ~
RedSquirrelFtw@reddit
That's evil lol
RedSquirrelFtw@reddit
Yeah you're cooked. Time to restore backups lol. At least it's only your home directory so setting permissions will be easy. Doing rm -rf / now that's a pain even with backups as you need to figure out all the permissions and ownership etc and if there was any mapped network shares it really gets ugly fast. Even with backups it's a lot of work having to completely rebuild everything.
ExtraFly4736@reddit
I share your pain, I imagine that's a nightmare and could happen to me actually. Thanks for sharing, I will read what others suggest 👍
OmegaJinchiiiiiii@reddit
lmao, all of a sudden im looking for ways to protect my home folder from bad command now, i use home directory in commands a lot. i might be vulnerable lol.
Budget-Scar-2623@reddit
One time I ran sudo chown [user] / and didn’t have a backup. I’d tried to type ./ but missed. That was fun.
Own_Ad2169@reddit
1988 Sun Microsystems Systems Administration Tool software testing team. Performing it manually before we automated. Type in some shift numeric characters. Plugged in *. No home anymore
bullwinkle8088@reddit
"I ran sudo...."
Well, there's your problem! Silly sudo-using admins. Real *nix admins log in and exclaim "I am root! Bow before me!"
no2gates@reddit
Been there, done that back in 1994 on an AIX system. I thought I was in a temp directory, not / and was wondering why it was taking so long to remove just a few dozen files. Stopped it before it nuked the entire system, so I still had the "tar" command to restore the full system tape backup that had just finished an hour before.
ultraboykj@reddit
alias rm="rm -i"
jedberg@reddit
I once did rm -rf /bin (instead of the bin in home dir).
You are not alone.
LesStrater@reddit
No problem. Just restore your partition backup and move along...
Intelligent-Turnup@reddit
At least you didn't sudo rm -rf /
-fno-stack-protector@reddit
I was messing with chroots once and did a rm -rf /usr/bin instead of rm -rf ./usr/bin
YTriom1@reddit
Unmount the partition or boot into a live environment, there are tools that get deleted files back
But unmount asap
Available_Pressure25@reddit
My latest similar circumstance happened last month I guess. I was trying to move files into subvolumes. I forgot I was in the main root and I did force remove root hahaha, I thought I was in a copy of my root.
jmcunx@reddit
Congratulations, you just joined an exclusive club of UN*X users who have done the same. This club welcomes everyone from Graybeards, elite, average and newbies.
Welcome to the Club, you should receive an ID card and Tee-shirt as soon as we get our production system restored from our punch-card backup.
ipaqmaster@reddit
On my zfs rootfs machines all I have to run to undo that is a rollback to the most recent snapshot.
sudo zfs rollback -Rr $(zfs list -t snapshot $(hostname -s)/home -H -o name | tail -n1)
would have me back online in literally seconds. These days, always take backups/snapshots, and always automatically replicate them to another machine or at least a second zpool of one or more drives.
Shoutouts to sanoid and syncoid
bloodguard@reddit
Ouch. I'd like to think I'd be too cautious to do something like this but I've pulled way too many late nights to judge.
I wonder if distros could put an alias on rm such that when the recursive flag is used it asks "are you sure? - do you want me to show you what you'd be deleting with a verbose dry-run first?".
rainpl@reddit
Why do you need sudo to remove a directory owned by yourself?
LonelyMachines@reddit
I've never been brave enough to do this. I'm curious: where does it stop? Can rm remove itself?
bob2600@reddit
I'd guess that it probably can because when rm is executed, it is loaded into memory so it could probably remove itself from the disk.
hmz-x@reddit
Yes, you can do
sudo rm -rf --no-preserve-root /
and watch the fireworks.
Yes, rm can delete itself, because once it is loaded into memory it doesn't care about the file. (Someone more knowledgeable than me -- I guess that's most people here -- can probably explain this better.)
You don't have to be brave enough. Just dumb enough to write a quick script that does it for you.
mikechant@reddit
I've done this for fun on a system I was about to blow away, I was impressed with how the system just kept on running, although to get it to shut down I had to use the sysrq sequence.
LonelyMachines@reddit
Something like
could do it. But I'm not that brave.
RichWa2@reddit
DO NOT ACCESS THE PARTITION CONTAINING THE DELETED FILES!!! What file system are you using?
Inodes can be restored and the files recreated
https://www.linuxquestions.org/questions/linux-general-1/how-to-restore-deleted-inodes-using-debugfs-898383/
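For ext filesystems the rough debugfs workflow looks something like this (hypothetical device /dev/sdXN; it works far more reliably on ext3 than ext4, where block pointers are often zeroed on delete):

    sudo umount /dev/sdXN                  # stop all writes to the partition first
    sudo debugfs -w /dev/sdXN
    debugfs:  lsdel                        # list recently deleted inodes
    debugfs:  undel <1234> recovered_file  # attempt to undelete inode 1234 under a new name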
UnusuallyLargeSloth@reddit
Happens to all of us, I suppose. Tried to delete directory 'foo' in my home. Wanted to type 'rm -rf ~/foo'. Actually typed 'rm -rf ~ /foo' (note the space).
Backups, have them.
kqvrp@reddit
I did that once. I was trying to remove all the dumb emacs backup files, and I was not yet wise enough to be careful, so I typed rm -rf *~ except it was actually rm -rf * ~ ☠️
plethoraofprojects@reddit
How is it “by mistake” if you typed that in the console?
ionV4n0m@reddit
Reinstall time!
lLikeToast1@reddit
Ah I see where you went wrong. The right way to do it is removing the French package with
rm -fr
LordSlimeball@reddit
What caused you to do it?
JJenkx@reddit
I did a chown on my whole system once by messing up syntax on a find command. I didn't have the skill to recover
Alan_Reddit_M@reddit
Yeah, as a general rule of thumb, NEVER keep the only copy of anything you cannot afford to lose locally. Either use a cloud storage provider like Google Drive or figure out how to build a RAID array at home, or even better, do both. You can never have too many backups of important information.
Kry07@reddit
Testdisk is your friend if it was ext4 or ntfs. If you use something more extravagant like btrfs, then you hopefully have some automatic snapshots.
TheseHeron3820@reddit
SMH kids these days...
What the fuck do you need sudo for, if you're removing your FUCKING HOME DIRECTORY?
LEARN 2 UNIX U N00B!
/s
op4@reddit
use this...
https://imgur.com/gallery/uno-reverse-card-uWYBrVs#wq9smrX
glpm@reddit
I somehow ran this with / instead of ~ and I immediately hit Ctrl c. Somehow I saved this. Never again. Took me 5 minutes to get my heart rate back down.
audigex@reddit
It is essentially impossible
You have to add the -f flag to force it
You also have to add sudo to do it with elevated permissions
You specifically entered a command which bypasses both of the “are you definitely sure about this?” restrictions
The problem here is that you’re entering commands without being certain what they do - as soon as you enter -f then “am I sure about this?” alarm bells should be ringing, and any use of sudo should make you immediately pause and consider what you’re doing
prey169@reddit
Snapshots with bcachefs would've saved you here :\
Hopefully you have backups somewhere tho
junqueira200@reddit
Don't use this fs. It is trash. Use xfs or Btrfs.
prey169@reddit
Have you tried it? I've been using it for over 1.5 years. It's solid.
Xfs is good but has fewer built-in features
Btrfs I've used and it imploded so.... never again lol
Three_Dogs@reddit
I once nuked an SSD full of important photos while rm -rf’ing random hidden folders that didn’t even need to be touched. I was tab-completing and didn’t realize the shell would auto-complete after a few seconds. Hit enter, thinking I was confirming the directory. Nope — I rm -rf’d the entire drive.
About 80% was backed up on iCloud Photos, but that other 20% is gone forever. It honestly took me months before I could bring myself to use rm -rf again. But that’s how you learn the hard way to actually back stuff up.
For me, it snowballed into off-site backups, mirrored pools on my TrueNAS box — the works. I’ll be damned if my fat fingers ever pull that stunt again.
TL;DR: learn from your mistakes. Your sudo rm -rf ~ will not be in vain.
meisterbookie@reddit
well, ~ != /root, /var, /lib, /etc or other really important stuff. Assuming you have a backup, no big harm.
Assuming you don't, you're fucked.
Regardless, this was the very last time you made this error. Believe me, been there, done that.
qrushless@reddit
You just removed the french packages, dont worry!
TryHardEggplant@reddit
It happens. I've run
git reset --hard
on the wrong repo and also deleted a VM on the wrong host. I now run zsh with Oh My Zsh with the git prompt and color coding based on different hosts (and whether they are local or SSH), so I can tell exactly which window is which now.
I also have nightly backups. You learn from your mistakes.
TommyTheTiger@reddit
git reset --hard
really doesn't matter if you commit first, because the reflog will always have the commit you were at when you did it.
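For anyone who lands here after the same mistake, the recovery is roughly:

    git reflog                  # find the commit HEAD pointed at before the reset
    git reset --hard HEAD@{1}   # or use the hash shown by reflog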
TryHardEggplant@reddit
It does when you have a bunch of work that you haven't committed yet. This was well over a decade ago, and I have since learned to commit often and then squash/rebase dev branches.
sudogaeshi@reddit
I can't even type the tilda when I'm trying to
TommyTheTiger@reddit
If you didn't do anything else, you can probably get the data back with forensic software! You will need to stop using that drive immediately, probably boot a live Linux from a flash drive and put extundelete on there. Anything you write to the drive is likely to overwrite some of your old files.
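A sketch of what that can look like with extundelete (hypothetical partition /dev/sdXN, run from the live system with the partition unmounted; paths are relative to the partition root):

    sudo umount /dev/sdXN
    sudo extundelete /dev/sdXN --restore-directory home/yourname
    # or:  sudo extundelete /dev/sdXN --restore-all
    # recovered files end up under ./RECOVERED_FILES/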
fzammetti@reddit
Maybe a silly question, but why can't rm be updated to confirm deletions of some known dangerous things, at least when -rf is specified? Maybe just ~ and / would be enough. It's a known landmine that people certainly hit sometimes, why not add a guardrail? The only argument I can think of is "why not then add X and Y and so on"... but I don't know, I feel like adding one or two doesn't automatically have to mean adding all the rest, when those one or two are almost special cases anyway.
Maykey@reddit
After hearing how some KDE plugin ran
rm -rf /*
and still remembering the bumblebee issue, I decided there are way too many incidents and made a wrapper which checks arguments for forbidden ones.
Originally it had no $HOME, but after reading this post I decided to add extra protection. Now I'm kinda safe:
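A minimal bash sketch of that kind of argument check (not the actual wrapper, and the messages are made up):

    rm() {
        local arg
        for arg in "$@"; do
            # refuse the obviously catastrophic targets
            if [ "$arg" = "/" ] || [ "$arg" = "$HOME" ] || [ "$arg" = "$HOME/" ]; then
                echo "rm: refusing to touch '$arg'" >&2
                return 1
            fi
        done
        # note: globs like /* are expanded by the shell before rm sees them,
        # so a real version needs more checks than this
        command rm "$@"
    }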
Personally my biggest fuck up in the past was running sudo chown -R user:user .* in a very old bash which included .. in the expansion of .*, and it happened in my user home dir which lived outside of /home: it was in /me (I recently changed the distro, but kept my old home dir on a separate hdd and it had a different user id).
As a result half of the OS changed ownership, and especially lots of files in /dev changed (it was a usual directory, not devtmpfs), and I had to reinstall the OS again as services complained about invalid owner/group.
Objective-Stranger99@reddit
You can try using photorec. No guarantee that everything will be recovered or that everything will be fine. First thing to do is shut down immediately to prevent overwriting. Boot from the Arch ISO and run photorec (it's preinstalled). Please do not copy it to the same drive, send it to an external drive.
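The rough shape of that, assuming the affected disk is /dev/sdX and the external drive is mounted at /mnt/usb:

    sudo photorec /dev/sdX
    # interactive TUI from there: pick the partition, the filesystem type,
    # and /mnt/usb (NOT the damaged disk) as the destination for recovered files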
ammar_sadaoui@reddit
what is wrong with removing French language?
Such_Drummer8197@reddit
Trash-cli sitting in the corner:
Comfortable_Swim_380@reddit
Well, take solace in the fact that you still bucked the odds. Fortunately reloading a Linux machine can be pretty dang trivial when set up properly. Learn from the experience and level up your wizard powers.
AirTuna@reddit
That’s one reason my “production” home directories are based upon a workstation-local template directory I “blast out” via a prsync pipeline as-needed. And this directory is actually a symlink to the original copy in OneDrive (which I’ve left at “keep backups of deleted files for 30 days” just in case).
Since my template directory doesn’t really change much, I’ve also been working towards importing its entirety into a personal GitLab repo.
EtiamTinciduntNullam@reddit
It's safer to type it like this:
sudo rm ~/path/to/the/file -rf. This way you add the potentially dangerous switch at the end.
LOL!
Did you run Borg backups to a separate drive or partition like we've been telling you to do? Or GRUB Timeshift?
p2ii5150@reddit
Sure...
BK_Rich@reddit
Isn’t that just going to delete the root user’s home directory?
Mgerkin2187@reddit
Yeah I think I'm typing the directory first from now on whenever I use rm -rf 😂
Thanks for the cautionary tale. Next time have some snapshots to fall back on.
obsqrbtz@reddit
Done it once by accident. I was writing a bash script to build an RPM package from the project build dir, and the cleanup part had an incorrectly set variable, so at the end I got ‘rm -rf ~/*’. Thank god that was on an almost stock WSL system and everything was set up again in minutes.
Qwert-4@reddit
https://www.cgsecurity.org/wiki/PhotoRec
putocrata@reddit
Reminds me when a colleague created a file called
$HOME
by accident and then tried to delete it with rm -rf. He also lost all his files and one day of work setting everything up.
I'm afraid that one day I'll shoot myself in the foot, as I use rm -rf liberally.
OneAyedKing@reddit
Been using Linux since 2013 and I ran "sudo rm -r /" instead of "./". The funny thing was I was trying to tidy up a folder as I was sorting out backups. Felt physically sick but turned out alright and I didn't lose any data. I now have a proper 321 backup strategy and feel much better. Hope all worked out for you
BambooRollin@reddit
Only time I've made this mistake was on a Solaris system in 1988.
rm -rf *
as root in /
Icarium-Lifestealer@reddit
On modern Linux root protection should prevent this. While
rm -rf /*
still works.
OsoGrosso@reddit
I did that once ... at work! [Fortunately, it was just after installation, so I was able to immediately reinstall with only some time lost.]
Longjumping-Green351@reddit
Why not shred 😉😉😉
dracotrapnet@reddit
Good news is, just root's home got deleted.
jackun@reddit
testdisk might find something
mikechant@reddit
Obviously the following is not relevant if you are on a system with no GUI, but:
This is why, although I use the CLI a lot, I would always do anything like this in the GUI, since a file manager makes it visually clear exactly what you're doing.
This is also why, despite being fully capable of using dd to write an iso file to a USB stick, I use something like Gnome Disks to do that since again it makes it visually obvious what you are doing and makes it almost impossible to do something like accidentally write to /dev/sdb instead of /dev/sdd
And I also always use gparted for partition and file system operations even though I know how to use the CLI tools.
I think that once some people have discovered the delights of the Linux CLI, they get fixated on using it for everything rather than acknowledging that the visual obviousness of the GUI can make certain operations much less error prone.
Of course there are TUI tools which do have some of the visual advantages of GUI tools, but I find them a bit clunky; I'd probably use them if I had a system without a GUI though.
jessecreamy@reddit
I can run it as many times as you want, then I will recover within 6 hours.
3 NAS boxes keep backups in my house, not counting Backblaze and other cloud storage.
chigaimaro@reddit
I've found that the photorec commandline program for linux does a surprisingly good job of recovering recently deleted info from EXT4 partitions
Pika Backup is also a great, easy-to-use utility that does profile backups. Since it's based on Borg backup, it's essentially a Time Machine-like backup for Linux.
vim_deezel@reddit
hope you keep backups, because you are cooked unless you use snapper or something
Grobbekee@reddit
Does that nuke /root or /home/$user ?
not_from_this_world@reddit
The bash expansion happens before the execution so the command becomes
sudo rm -rf /home/user
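A harmless way to see that expansion for yourself:

    echo sudo rm -rf ~
    # prints something like:  sudo rm -rf /home/user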
Entraxipy@reddit
RIP 💔
wootybooty@reddit
Burnin’ Down the House!!!
siodhe@reddit
You probably should fight the habit of using sudo - that command definitely shouldn't need it.
Although this isn't helpful versus what happened, it might help later:
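A rough bash sketch of the idea described below (preview with ls, then a single remove[ny]? prompt), not the actual function, and a real version also has to handle rm-only options:

    rm() {
        ls -ld -- "$@" || return      # show exactly what the arguments/wildcards matched
        local answer
        read -r -p 'remove[ny]? ' answer
        [ "$answer" = y ] && command rm "$@"
    }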
This trains the user to always use the right wildcards (unlike the rm -i approach, which trains people to think they can use y/n for each item - although that usually means they'll eventually type "y" when they shouldn't have, or even yes y | rm -i, which is horrifying). The habit of being right will save the user occasionally when they're in some context that lacks the rm function.
The function isn't perfect; in particular, it takes a much longer one to accept "rm" options that "ls" doesn't understand. But it's generally sufficient, and as a sysadmin who stuffed something like it into all my users' environments, it radically cut down on requests to restore files, which is the whole point.
You decide. If seeing your home directory contents shown to you with "remove[ny]? " at the bottom would have saved you...
Oh, and don't try to convert the function into a ~/bin/rm script - that can result in very bad things.
lurch99@reddit
FAFO
ClashOrCrashman@reddit
I'm just curious, how does one do this by accident? autocomplete?
MountfordDr@reddit
I did it by not paying attention to where I was. I intended to empty a populated directory but instead of being within it, I executed rm -r * in my home root.
ClashOrCrashman@reddit
Gotcha. Hope you didn't lose anything too important.
28jb11@reddit (OP)
Guys I'm not really cooked, I have backups. I may be stupid but I'm not insane.
rtds98@reddit
I did once
rm -rf folder/ *
(notice the space between / and *; it shouldn't have been there).
I was in HOME. It's on an nvme. It's fast as hell and the deletion was fast as hell. I ctrl+c'd it asap, but a bunch of files were gone.
Luckily I had backup and restored most of the gone files (I don't backup everything, things like downloads/ ... no).
Maiksu619@reddit
But, how?!
_n3miK_@reddit
Good luck with that, it's very dangerous.
JagerAntlerite7@reddit
I run Deja Dupe for cloud backups. Plus I scripted a user home directory backup: rsync for a snapshot and a compressed archive file for each day of the month. I run it twice a night with separate storage destinations. Neither is my boot NVMe.
wagneja4@reddit
Unless shredded, files are not really deleted. Only their inode entries get removed; scan your disk with a recovery tool (forgor name, google it) and the files will probably still be there.
lysergic_tryptamino@reddit
Come back to us when you replace that ~ with a /
Unusual_Feedback7841@reddit
If you want or have the need to recover what’s lost, I suggest making an image of the disk and use Autopsy to recover what’s lost.
This would only work if you haven’t actually written anything over it.
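For the imaging step, something along these lines (hypothetical device /dev/sdX, with the image written to a different, larger drive mounted at /mnt/usb):

    sudo dd if=/dev/sdX of=/mnt/usb/disk.img bs=4M status=progress conv=noerror,sync
    # then point Autopsy (or testdisk/photorec) at disk.img instead of the real disk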
cs_forve@reddit
If you french fry when you should've pizzaed, you're gonna have a bad time
Dinux-g-59@reddit
Linux is not Windows. If you want to destroy your system, as root you can. So...
Pitiful-Valuable-504@reddit
since 2002, you surely have several boxes and hundreds of backups, so forget about it.
Spiritual_Sun_4297@reddit
Once I was cleaning my home directory. I wanted to delete something, and there were multiple of them. So I typed rm something *. I was rushing on the keyboard, and by mistake I typed an extra star... Shit started deleting and I lost many of my files.
Since then, I use trash-cli and substitute my rm with it.
LocoCoyote@reddit
How could you possibly do this by “mistake”? It’s a multi-step process
vaynefox@reddit
Ah yes, you fall for the oldest trick in the book, I hope you have a backup of everything....
Iwisp360@reddit
I have snapshots just for moments like this one
ArrayBolt3@reddit
Did this just a few months ago because a script made a directory somewhere not even under /home that was literally named ~. Tried to rm -rf it without properly quoting the string. Didn't realize what I had done until I noticed the command was taking suspiciously long to return. Fifteen seconds later I was learning about how awesome PhotoRec is. (It successfully pulled both my KeePassXC database and an important financial document from the wreckage; everything else I lost I didn't care that much about.)
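For the record, the safe way to remove a directory that is literally named ~ is to stop the shell from expanding it, for example:

    rm -rf -- './~'     # quoted and anchored to ., so it can't expand to $HOME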
funbike@reddit
No problem. Just restore from your last backup or snapshot. Do a git pull on all projects so you are up to date. You shouldn't have lost much at all.
Real-Abrocoma-2823@reddit
Use CachyOS with btrfs, and next time you break something just restore from an automatic snapshot.
JamesLahey08@reddit
You accidentally typed that all out? No.
not_from_this_world@reddit
Last time I did this was several years ago. I mistyped a space after the first /. Since then I just don't sudo rm, ever. I also avoid rm, I use trash instead.
Typos happen to everyone. Good luck with the recovery.
Entaris@reddit
It happens to the best of us. One habit that helps that I try to cultivate is to flag AFTER the path. IE: sudo rm ./dir_name -rf
Gives your brain an extra second to recognize that you mistyped your path before you commit
sgilles@reddit
That is why I'll never go back to a filesystem without integrated snapshotting support. With automated snapshots there's always a safety net.
(It is not a replacement for proper backups. But depending on configuration those might be a bit more tedious to access.)
FunAware5871@reddit
It's okay, it happens to everyone sooner or later.
That's when you go down the backups/datahoarding holes.
uar-reddit@reddit
If you happen to remove your home dir, just run
mkhomedir_helper <username>
whatyoucallmetoday@reddit
I once did ‘rm -rf ~/.*’. It took me a few seconds to realize the command walked up the tree with the ‘..’ directory. Lesson learned.
Longjumping-Youth934@reddit
How is that possible to run this command by mistake?
UndulatingHedgehog@reddit
Ideally, start any sudo command with a # to make sure you won't do anything unintended, type out the command, check it again, and then finally remove the #
wbw42@reddit
Up arrow to reuse previous command.
Does #sudo do a dry run? Or just make it a comment.
UndulatingHedgehog@reddit
Makes the command a comment.
Only problem with that approach is that tab completion stops working, since the completion machinery doesn’t know you’re typing a command.
madmooseman@reddit
Or
echo
caribbean_caramel@reddit
I once ran sudo rm -rf / to see what was going to happen. It was kinda fun to see the system collapse.
freshpandasushi@reddit
you didn't setup snapshots?
CulturalSock@reddit
Btrfs and snapshots. So handy you'll never go back
howardhus@reddit
is this a troll post? even if you did run that command… the sudo then asks for your password?? you also fully typed out your password?
gkbrk@reddit
Not if you ran another sudo command from that terminal in the last 15 minutes.
howardhus@reddit
true, my bad
tblancher@reddit
I did this on my work MacBook Pro, and lost a good 9 years worth of scripts since the Code42 backup excluded ~/bin.
D'oh.
DistributionRight261@reddit
Use btrfs with snapshot next time
RedHuey@reddit
At this point, why doesn’t rm have a simple check for “-rf” when it is going to remove either the entirety of your own user’s home or the root file system? Just a simple “are you sure?” Pretty much nobody would ever do this intentionally, and if they do intend it, what harm is there in having to answer y or n?
(I know, when the FBI is banging on the door, seconds count…)
MountfordDr@reddit
Coincidentally I did this yesterday too! I had already done a backup of my Pictures directory which was full of photos organised into subdirectories. I intended to cd into Pictures and then ”rm -r *”. I didn't. Fortunately I managed to reconstruct it from various backups but it was a pain. The good news is that it cleared out a load of rubbish which I had been meaning to do for years!
Independent_Lead5712@reddit
I ran this once after following a sequence of random commands I found during a google search. I fucked up perfectly fine Ubuntu installation in the process. I probably won’t run this again
suksukulent@reddit
F brother
Hope you got backup of important things. Otherwise good luck digging...
chilabot@reddit
Always write "rm
114sbavert@reddit
I have aliased "rm" to "trash" on my pc and "sudo" to "sudo " (with a trailing space)
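In ~/.bashrc that looks like the lines below; the trailing space is what makes bash check the word after sudo for alias expansion too, so sudo rm also becomes sudo trash:

    alias rm='trash'
    alias sudo='sudo '    # trailing space: the next word is also checked for aliases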
northparkbv@reddit
This is bait, it literally forces you to write --no-preserve-root...
theng@reddit
it warns you on / only IIRC
DarkKnyt@reddit
My first line backup is that I almost always use tab to auto complete. That way, I let the shell populate the rm directory vice me.
Or you can trash-cli or put an alias for rm that is mv
diatron3@reddit
Did rm -rf / on the head node of my lab's HPC cluster... I had been working all day with environment modules, which use $() syntax for variable interpolation, and mistakenly used it in bash to clean up some path, which evaluated to nothing + /... Never before have I experienced a panic attack like that. Looking around at my colleagues, most seniors had one such major fuck up that made them way more careful and wise :)
UnintelGen@reddit
Did that once like 7 years ago. Wasn't fun. I still occasionally rm, but when I'm paranoid I just sudo pcmanfm and use pcmanfm as a gui for deleting and changing privileges. I use dolphin otherwise, but pcmanfm doesn't care about sudo so it makes a great secondary file manager for that sort of thing.
Slight_Manufacturer6@reddit
Good thing you are a long time Linux user and understood the need for a good backup.
WSuperOS@reddit
man one time i deleted /usr/bin by mistake lol
mmacvicarprett@reddit
There is a good chance of recovering large chunks using recovery techniques. If possible take an image of the disk.
Comfortable_Relief62@reddit
My manager accidentally did this to me at work once, good times
kernel612@reddit
alias rm='trash -v'
Street_Secretary_126@reddit
You lost your home, just destroy your roots next
AbdSheikho@reddit
My condolences
mimavox@reddit
No you didn't.
Avanatiker@reddit
Why would you remove the French language pack?
bcredeur97@reddit
ZFS would help with this (if you had snapshots)
although snapshots are not backups, as King’s College in London learned
bp019337@reddit
If you don't have backups and haven't rebooted, you can try to recover files with lsof: check for deleted-but-still-open files and copy them out to another drive. Otherwise use a file recovery tool like PhotoRec.
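The lsof trick, roughly (the PID/FD numbers come from the lsof output; it only works for files some process still has open):

    lsof +L1 | grep -i deleted            # open files with link count 0, i.e. deleted
    sudo cp /proc/<PID>/fd/<FD> /mnt/other-drive/recovered-file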
Forsaken-Orange9293@reddit
install trash-cli and alias rm to trash-put
sleepingonmoon@reddit
Always ls before rm.
IntroductionNo3835@reddit
I remember that at university a colleague studied computer science and I studied engineering. I sat next to him and watched him typing some things. He went to get something and I typed something like, erase * enter.
Well, he had to contact the administrators to retrieve something...
Today he works in administration, collects taxes and I deal with computing.
Summary of the story, I made a mistake at the right time... Kakaka
But I've deleted files by accident.
The problem is that I type quickly, and a cd gave an error that I didn't see, so the delete erased the wrong files. Ultimately, you must maintain 100% attention on the terminal.
seiha011@reddit
"by mistake" ?
qdim42@reddit
Only the inode entries are deleted; the data blocks stay on disk until they get overwritten.
extundelete (for ext3/ext4, requires an unmounted file system)
testdisk / photorec (search for known file signatures)
Professional data recovery tools or services (expensive, but more thorough)
v3bbkZif6TjGR38KmfyL@reddit
Been there, but I got that out the way early in my Linux quest.
frankster@reddit
How did you run this command? Brainfart?
rjkush17@reddit
I run sudo rm -rf /* in a virtual machine
chud_meister@reddit
Oh it's possible.
I essentially did a sudo rm -rf / on myself from a typo in a makefile I was working on
The typo was something like this:
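A hypothetical shell sketch of the classic shape of this mistake (made-up variable name, not the actual makefile rule):

    # BUILD_DIR was misspelled where it was set, so here it expands to nothing
    sudo rm -rf "$BUILD_DIR"/*        # becomes: sudo rm -rf /*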
After running the rule to test it (and entering my sudo password), I had a moment of confusion watching all of the permission errors for the files it wouldn't rm blow by in my terminal, and when I realized what was going on I mashed ctrl + c as fast as I could, but the damage was done.
It didn't have the --no-preserve-root arg passed, so not everything was nuked, but enough random files got removed, scattered across the system, that fixing it was next to impossible and I just made a backup of what wasn't already in my backups and reinstalled.
lastPixelDigital@reddit
how do you do that by accident?
No_Cookie3005@reddit
Happened to me some weeks ago because I've been too lazy to add -I before testing a rm command. This ended up removing everything in /home/username apart from hidden folders. Luckily I had a backup, unfortunately 1 week old.
There is the photorec utility in the testdisk package.
fernandodellatorre@reddit
You may try installing your disk on another computer and run testdisk. It might find some deleted files. Good luck. It happens.
ayalarol@reddit
I did try this in 2010 with Puppy Linux, haha. Damn, it was someone's bad joke on some forum; I copied and pasted the command and the magic started, hahaha.
GaryDWilliams_@reddit
a lot of people think that until they do it and sooner or later it'll happen.
Responsible-Gear-400@reddit
I totally did this not long ago when I accidentally named a folder “~”.
I now have a wrapper script that checks if I rm -rf specific directories and makes me confirm I want to delete.
Good luck getting your stuff set back up!
Appropriate_Net_5393@reddit
It doesn't matter what you do, tomorrow you still have to go to school
Nevermynde@reddit
Depending on the filesystem type, there may be some "undelete" options.
korinokiri@reddit
Look up file recovery, it's possible.
Next time install safe-rm which I believe would prevent this
HotLingonberry27@reddit
Done this before. I did it with /. Deleted the root dir.
keaman7@reddit
I know this pain 😂.
-Sa-Kage-@reddit
Afaik there is no simple "undo rm" command. If you don't have backups and need the data, you need to shut down system asap or at least unmount the drive if possible and either go for some data recovery software on bootable USB or pay $$$ for data recovery
Defiant-Flounder-368@reddit
/ is quite close to <Enter>