Need advice to decide HTTPS certificate approach
Posted by Haunting_Meal296@reddit | linuxadmin | View on Reddit | 25 comments
Hi, we are working on an embedded Linux project that hosts a local web dashboard through Nginx. The web UI lets the user configure hardware parameters (it's not public-facing) and is usually accessed via local IP.
We’ve just added HTTPS support and now need to decide how to handle certificates long-term.
A) Pre-generate one self-signed cert and include it in the rootfs
B) Dynamically generate a self-signed cert on each build
C) Use a trusted CA e.g. Let’s Encrypt or a commercial/internal CA.
We push software updates every few weeks. The main goal is to make HTTPS stable and future-proof, especially since later we'll add login/auth and maybe integrate cloud services (OneDrive, Samba, etc.)
For this kind of semi-offline embedded product, what is considered best practice for HTTPS certificate management? Thank you for your help
serverhorror@reddit
Option D)
Generate a self-signed cert on first startup. Then let the users add their own cert (and CA) if they choose to do so.
If you need to know the certificate, there should be an option somewhere that allows me to register my certificate with your system.
I don't want you to be in possession of the cert, ever.
suncontrolspecies@reddit
What's the best way to do this? I mean, how will you set up the self-signed cert on first startup? I am also interested in this issue and trying to understand what the process would be. Thanks
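One common pattern is a first-boot hook (e.g. a systemd oneshot unit ordered before nginx) that generates the cert only if it doesn't exist yet. A minimal sketch, assuming OpenSSL 1.1.1+ on the device; the CN/SAN values and paths are placeholders:

```shell
#!/bin/sh
# First-boot helper: generate a self-signed cert only if none exists yet.
ensure_cert() {
    cert_dir="$1"
    cert="$cert_dir/server.crt"
    key="$cert_dir/server.key"

    [ -f "$cert" ] && return 0   # already provisioned on an earlier boot
    mkdir -p "$cert_dir"
    # Per-device key generated on the device itself, so no two units
    # ever ship with the same private key. 10-year validity.
    openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout "$key" -out "$cert" -days 3650 \
        -subj "/CN=device.local" \
        -addext "subjectAltName=DNS:device.local,IP:192.168.1.1"
    chmod 600 "$key"
}

# Usage from the boot unit, e.g.: ensure_cert /etc/ssl/device
```

Generating on first boot (option D) rather than at build time (option B) means the private key never leaves the device, which also answers the "I don't want you to be in possession of the cert" concern below.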
thequux@reddit
100% this, but also, add support for some sort of automation to provision and update the certificate. API access is acceptable. ACME (with a configurable directory URL) is better. SCEP, EST, CMP, or the like is S-tier.
chocopudding17@reddit
Yes, and make it configurable via API (especially if ACME support isn't added, but even if it is). Special snowflake systems that cannot have their administration scripted are a pox on sysadmins everywhere.
Haunting_Meal296@reddit (OP)
Completely agreed, I am now trying to learn more about this approach since it sounds like the most secure and customer-friendly of them all (also future-proof)
ferminolaiz@reddit
Step-ca is a pretty good option if you want to spin up an internal CA with support for ACME.
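For anyone curious what that looks like, a minimal sketch assuming the `step` and `step-ca` packages are installed (names, hosts, and the port are examples):

```shell
# Initialize an internal CA (interactive; creates root + intermediate)
step ca init --name "Internal CA" \
    --dns ca.internal.example.com --address :8443 \
    --provisioner admin

# Add an ACME provisioner so devices can enroll and renew automatically
step ca provisioner add acme --type ACME
```

ACME clients on the devices would then use `https://ca.internal.example.com:8443/acme/acme/directory` as their directory URL, which is exactly the "configurable directory URL" mentioned above.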
archontwo@reddit
Option C, but you will have to find a way to renew it, since for security reasons you cannot have certs that last forever.
I suggest you put a private VPN on every embedded device (WireGuard, preferably, as it is a 'quiet' protocol) and then schedule a job that copies the certs as they are updated on your backend service somewhere.
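A sketch of the device-side WireGuard config for that setup; keys, IPs, and the endpoint are placeholders:

```ini
# /etc/wireguard/wg0.conf on the device
[Interface]
PrivateKey = <device-private-key>
Address = 10.10.0.2/32

[Peer]
PublicKey = <backend-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.10.0.1/32
PersistentKeepalive = 25   # keeps the tunnel alive through customer NAT
```

Since the device initiates the connection, this also works behind NAT, which matters for the customer-network situation OP describes later in the thread.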
See this.
Haunting_Meal296@reddit (OP)
Thank you! I wasn't thinking about this. These are embedded devices running a very old version of Ubuntu (Bionic). I use WireGuard at home on OpenWrt for my VPN, but I am not sure whether adding this extra layer to this board (Tegra Jetson) is feasible. I might have to run some performance tests first
archontwo@reddit
Maybe update the very old Ubuntu, which ended standard support back in 2023, or see if Debian will replace it.
michaelpaoli@reddit
C - and automate the sh*t out of it. :-) And yes, needs be a domain in public Internet DNS for that, but that doesn't mean you need to expose the actual web server or the like. Just need to use DNS or wee bit of http to validate for certs (needs be DNS for wildcards). That's basically it. I've got programs I've written, I type one command, and I have cert(s) in minutes or less, and including wildcard, SAN, multiple domains ... easy peasy. Even done versions of same that handle multiple DNS infrastructures (ISC BIND 9, f5 GTM, AWS Route 53) as needed, in heterogeneous environments, to get the needed certs - even when the domain(s) in the cert span multiple such distinct infrastructures.
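The commenter's own programs aren't shown, but as a generic sketch of the same idea using certbot's Route 53 DNS plugin (ships as `python3-certbot-dns-route53`; domain, email, and AWS credentials are placeholders):

```shell
# DNS-01 validation: only the DNS zone needs to be on the public
# Internet, the web server itself never has to be reachable.
# Wildcard certs require DNS-01, as noted above.
certbot certonly --dns-route53 \
    -d 'example.com' -d '*.example.com' \
    --non-interactive --agree-tos -m admin@example.com
</imports>
```

The same shape works with the other DNS plugins (`--dns-cloudflare`, etc.) or an RFC 2136 update against a self-hosted BIND zone.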
And yeah, you don't wanna be doing A nor B.
Haunting_Meal296@reddit (OP)
Great. The challenge for us is that the devices usually sit behind NAT on customer networks, so DNS validation etc. sounds tricky. Thank you for the advice
michaelpaoli@reddit
That doesn't prevent you from also having (at least some) corresponding public DNS on The Internet - doesn't even in fact need to be same resource record(s), just need some of the domains out there - that's all. And it's a relatively common thing to do - many will often have DNS split under a single domain, such that what that looks like and resolves to with public Internet DNS is distinct from internal DNS.
And no, hiding your internal DNS names isn't real security anyway - likewise hiding or trying to hide the IP addresses. What matters for security is the access.
megared17@reddit
LE certs are only valid for 90 days, so unless you have a way to regularly renew and redeploy that won't work.
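Whatever the renewal mechanism ends up being, the "regularly renew" part can be driven by a daily cron check. A minimal sketch; the path and the renew hook are placeholders:

```shell
#!/bin/sh
# Report whether a cert has less than 30 days of validity left,
# so cron can trigger whatever renewal mechanism is in place.
needs_renewal() {
    # openssl -checkend exits non-zero if the cert expires within
    # the given number of seconds (here: 30 days)
    ! openssl x509 -in "$1" -noout -checkend $((30 * 24 * 3600))
}

# Cron usage sketch:
# needs_renewal /etc/ssl/device/server.crt && renew-cert && systemctl reload nginx
```

Renewing at two thirds of the 90-day lifetime matches what certbot's own timer does by default.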
Why does something on an isolated/internal network need https anyway?
Primary_Remote_3369@reddit
By 2029, all TLS certificates will have a maximum validity period of 47 days. ACME is becoming very important very quickly.
megared17@reddit
Making using stock browsers in local isolated networks even more awkward.
rakpet@reddit
The best would be C, but I don't think it would be possible if this is not internet-facing. In that case go for B. If possible, additionally allow users to import their own.
iam8up@reddit
We have servers getting certificates that aren't publicly reachable. You can absolutely get an LE cert without the device being reachable from the world.
barthvonries@reddit
C is totally possible if the machine has some kind of internet access, since letsencrypt has DNS APIs.
I use it for all my internal services. For the machines with no Internet access, I set up a public-facing webserver whose only task is to renew certificates and push them to the other servers.
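A minimal sketch of that push step, e.g. run from a certbot `--deploy-hook` on the renewal host; hostnames, user, and paths are placeholders:

```shell
#!/bin/sh
# Copy the freshly renewed cert to each internal server and reload nginx
CERT_SRC=/etc/letsencrypt/live/example.com
for host in device1.internal device2.internal; do
    scp "$CERT_SRC/fullchain.pem" "$CERT_SRC/privkey.pem" \
        "deploy@$host:/etc/ssl/device/"
    ssh "deploy@$host" 'sudo systemctl reload nginx'
done
```

Using key-based SSH with a restricted deploy user keeps the renewal host from needing full root on every device.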
Haunting_Meal296@reddit (OP)
Thank you for your response. Yeah, these devices are being used by the customers in an isolated environment. The idea of letting users import their own cert looks very nice, but I need to learn and understand more about it. I want to keep things simple
rakpet@reddit
This is a feature that pleases the cybersecurity team but will never be used. If this is for consumers, don't bother. Only implement it if this is for large enterprises or a niche nerd segment. (Disclaimer: I'm a niche nerd who would use it, but I know I'm a minority)
ibnunowshad@reddit
Option C. You can automate it as well. All you need is a publicly registered domain. For more details, please go through my blog.
Il_Falco4@reddit
Option D: put a proxy in front that takes care of TLS certs. Add an ACL so only internal access is allowed, and put DNS in place with internal resolution. Scalable.
03263@reddit
Why even support https if it's not public facing?
Le_Vagabond@reddit
you can only do A or B if the access is through the IP. that's basically the standard way for cameras and small IoT devices like this: users have to click through the "this website is untrusted" warning page.
if you want proper certificates, you need a proper FQDN, and this just doesn't happen when your users are mainstream consumers, the kind for whom "access those devices via local IP" is already a stretch. if you sell those devices to professionals you could ask them to route *.yourdomain.com to the IPs internally, and have valid C certificates that way.
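For that professional-customer case, the device side could be an ordinary nginx TLS server block, assuming `device1.yourdomain.com` resolves to the device internally and a valid cert for it has been deployed (names and paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name device1.yourdomain.com;

    ssl_certificate     /etc/ssl/device/fullchain.pem;
    ssl_certificate_key /etc/ssl/device/privkey.pem;

    location / {
        # existing local dashboard backend
        proxy_pass http://127.0.0.1:8080;
    }
}
```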
the "best practice" route for those things nowadays is to have a cloud-based config tool that you as the vendor hosts, with clean certificates because it's on your domain, that pushes to or is polled by the devices for config changes. it's a LOT more difficult than it sounds and it exposes you to fun stuff like GDPR.
you could also get the devices to reach out to your hosting and establish a reverse tunnel with a dynamically generated subdomain and certificate. I tend to frown upon stuff doing that without permission, though.
Haunting_Meal296@reddit (OP)
Good point, and yes, you've described the current situation very well...
For now it was decided just to stay with B (unique self-signed certs) and accept the browser warning, since it’s the standard behavior for local devices.
But I want to solve this long-term issue from the get-go. The cloud-based config approach sounds clean and definitely the way to go, but it's truly overkill for our project.