rooktakesqueen@reddit
If your security is insufficient with perfect knowledge, it's still insufficient when obscured. The hurdle of obscurity doesn't scale: once it's broken once, it's broken forever.
Consider a properly salted, SHA256 hashed passphrase with 50 bits of entropy. Now consider that I add some twists and complexity to how I perform that hash, which altogether add another 20 bits. I've made that passphrase over a million times harder to crack, great!
Except once an attacker has put in the work to solve that process, every other hash in my database loses that additional 20 bits of protection.
It's the same reason we use a salt in the first place. If we don't, then the effective entropy of each password is no longer independent. Once I spend the effort breaking that 50-bit passphrase once, I've already broken every one that's weaker.
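The salt argument above can be sketched in a few lines of Python (a minimal illustration; a real system would use a deliberately slow KDF like bcrypt or Argon2 rather than bare SHA-256):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    # SHA-256 over salt || password, matching the example above.
    # Production systems should use a slow KDF (bcrypt, scrypt, Argon2).
    return hashlib.sha256(salt + password.encode()).hexdigest()

# Unsalted: identical passwords produce identical digests, so the work
# of cracking one entry cracks every duplicate for free.
assert hash_password("hunter2", b"") == hash_password("hunter2", b"")

# Salted: each user gets a random salt, so identical passwords yield
# different digests and each must be attacked independently.
salt_a, salt_b = os.urandom(16), os.urandom(16)
assert hash_password("hunter2", salt_a) != hash_password("hunter2", salt_b)
```

The secret "twists" in the hashing procedure behave like one salt shared by the whole database: they add work exactly once, and the extra 20 bits evaporate for every row as soon as one attacker recovers the procedure.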
PersianMG@reddit (OP)
I agree, but you aren't considering cases where security is already as good as it possibly can be.
Consider the example of Google reCAPTCHA: they have to run their JavaScript in your browser. There is no way around that; it's just how the web works, and they need to drive the client-side solving of the captcha. There are no shortcuts taken, and there is nothing more they can do on the security side. Yet they still obfuscate their JS code to make it harder for bots to learn about their signals and bypass the captcha.
Consider the Riot Vanguard kernel-level anti-cheat, running on a user's machine. It has to be on the machine to detect cheats, and it has to use the internet to send cheat signals back to the Riot servers. There are no extra measures they can take to solve this problem, so they have to rely on obfuscating the signal sent back and hiding their implementation as much as possible. They'll have to constantly change things and iterate as hackers reverse their code and learn more about their system.
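As a toy illustration of signal obfuscation (a hypothetical scheme, not Vanguard's actual one), a client could XOR its telemetry payload with a key-derived stream so the wire format shows no obvious structure:

```python
import hashlib
import json

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudorandom bytes by hashing key || counter. This is an
    # illustration of obscuring data, not a vetted stream cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def obscure(report: dict, key: bytes) -> bytes:
    # XOR the serialized report with the keystream; field names like
    # "score" no longer appear literally in the bytes on the wire.
    data = json.dumps(report).encode()
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def recover(blob: bytes, key: bytes) -> dict:
    # XOR is its own inverse, so the same keystream decodes the blob.
    data = bytes(a ^ b for a, b in zip(blob, keystream(key, len(blob))))
    return json.loads(data)
```

This hides field names from casual packet inspection, but it is obscurity rather than encryption: once the scheme and key are reversed out of the client, every payload is readable, which is exactly the trade-off under discussion.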
My stance is: always use the proper security for your system. After that, it doesn't hurt to add obscurity as another layer on top, as part of a defence-in-depth strategy. It cannot hurt. If it's ineffective, no worries; you're basically exactly back where you were before, with the maximum level of proper security anyway.
NuclearVII@reddit
This is a vacuously true statement.
lelanthran@reddit
That's one type of obscurity: using an algorithm no one knows about.
Yeah, but that's extra work; that's the point! That's what Defence in Depth means! After an attacker defeats one layer, they have another layer to contend with.
No one is claiming that obscurity is sufficient. It's just one more hurdle that an attacker has to overcome.
You can run a fully hardened keys-only SSH server on the standard port, or you can run the same setup on a random port. That gives the attacker one more hurdle: scanning 65k ports to determine which is the correct one to use.
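That hurdle is entirely mechanical to clear, though. A minimal sketch of the sweep an attacker would run (plain TCP connect attempts; real tools like nmap or masscan parallelise this heavily):

```python
import socket

def find_open_ports(host: str, ports, timeout: float = 0.2):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection instead
            # of raising, which keeps the scan loop simple.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# e.g. find_open_ports("192.0.2.10", range(1, 65536)) -- slow done
# serially like this, but parallel scanners cover it in minutes.
```

Serially this takes a while; in practice the hidden port falls out of a scan in minutes at most, which is why the random port is a speed bump rather than a lock.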
rooktakesqueen@reddit
"Instead of running SSH on port 22, I'll run it on port 15288 and only give that port number to trusted clients" -- that would be "security through obscurity." Yeah, you've created an extra hurdle, but it only has to be bypassed by one attacker once, and then any future attacks may not benefit from it at all.
It's not zero benefit -- you're likely safe from attackers who are just spamming requests to a bunch of hosts and not targeting you directly. It's like having your car door locked, when all the other cars on your street are unlocked and there's nothing obviously valuable in yours. Not worth the effort.
But if you're being targeted for attack, it's not going to hold up for very long and once it's breached, it's breached.
A stronger security posture, if you want attackers to consistently have to cycle 65k ports, would be to regularly change which port you're listening on based on a secure pseudorandom sequence and provide the seed of that sequence to the trusted clients. That way, even if an attacker figures out what port you're on today, or this hour, or this minute, that information is soon useless. And the information they have -- the fact that you cycle port numbers randomly -- doesn't expose the actual secret you need to protect, which is the seed of the sequence.
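That rotation scheme can be sketched TOTP-style (a hypothetical construction, not an existing protocol): both sides derive the current port from the shared seed and the clock, so the port itself is never transmitted.

```python
import hashlib
import hmac
import struct
import time

def current_port(seed: bytes, interval: int = 3600, now=None) -> int:
    """Derive the listening port for the current time window.

    The seed is the only secret, in line with Kerckhoffs's principle:
    an attacker may know the whole scheme and still learn nothing
    about the next window's port.
    """
    window = int((time.time() if now is None else now) // interval)
    # HMAC the window counter with the shared seed.
    digest = hmac.new(seed, struct.pack(">Q", window), hashlib.sha256).digest()
    # Map the MAC into the unprivileged port range 1024-65535.
    return 1024 + int.from_bytes(digest[:4], "big") % (65536 - 1024)
```

A trusted client with the seed computes `current_port(seed)` and connects; a port an attacker discovers today is useless once the window rolls over.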
In that scenario, it's safest to assume all non-secret info (like how the next port is chosen) is already compromised. Relying on obscurity leads to hubris and overestimating the security you have.
driven_to_it@reddit
Not bad at all
UnintentionallyEmpty@reddit
I'll add my own take on this: regardless of whether security through obscurity is bad, the downsides (harder maintenance, performance hits, debugging problems) usually aren't worth the amount of security it gives you.
R2_SWE2@reddit
This is a straw man. You say "security through obscurity is not bad, security ONLY through obscurity is bad." But really that's what everyone means when they say "security through obscurity is bad." So you're basically arguing against a take that doesn't really exist.
PersianMG@reddit (OP)
That's not the case; the introduction paragraph explains how the Echo user and others believed that all security through obscurity is bad.
I'm arguing against a real take that some people genuinely hold. That was my whole inspiration for writing this blog post in the first place.
dgkimpton@reddit
With the invasion of almost limitless AI bots, I'd posit that security through obscurity has never been less effective than it is now. If it doesn't cost much to add then you might as well add it, but I wouldn't expect it to add much value. I'd rather have well-engineered security than smoke and mirrors.
lelanthran@reddit
Isn't it the other way around?
Your $FOO might be in the training set, but your obfuscated $BAR($FOO) won't be.
dgkimpton@reddit
In my (limited) experience, AI has a much easier time seeing through obfuscation than I do as a human. I have no idea why; maybe because it holds a lot more context at once? No idea, but that's how it seems.
lelanthran@reddit
Defense in depth means that obscurity is one of your layers.
Take listening for SSH on a random port: you're obscuring the real port because drive-by attackers who have a 0-day for SSH will try the default port and then move on, assuming you have closed off SSH. And even if they don't, your tripwire/fail2ban/whatever is going to alert you to a port scan anyway.
dmcnaughton1@reddit
The idea of enhancing security through obscurity is fine. It's when you decide that hiding the door is enough and choose not to put a lock on it that you start to run into problems.
PersianMG@reddit (OP)
Yes, that's exactly my viewpoint too! I was pushing back against those who believe obscurity is bad in all cases and should be universally avoided.
phillipcarter2@reddit
I mean it’s right here though?
> Their main goal was to make it harder for data-scraping bots to reverse engineer and replicate the API requests powering the page.
Basic code obfuscation is almost certainly a one- or two-shot job for any AI agent to undo, especially if it's JavaScript. It may not even be necessary anymore, because an agent could easily just run the code and infer what it does decently enough as a black box -- certainly well enough for a human to take over if there seems to be a worthy target.
And so yeah, obscurity is pretty useless in this scenario.