Yes, static sites are great. But it wouldn’t hurt to add some lines of CSS for better readability. The text is quite literally on the edge of my phone screen.
You can use a static site generator like Hugo to transform markdown to HTML as well as pick a theme for your site, in my experience even a few years ago the results were good.
http://www.lainsystems.com/ is my (horribly dated) personal site, all built with Hugo, with but a single line of HTML / CSS / JS written by me.
I don't object to simplicity being extremely useful, and there are many cases where simplicity - such as a static website - is necessary. For instance, on an elderly relative's computer, I can basically only use a browser, and then HTML, CSS and JavaScript, whereas here I use ruby and java in addition to that (ruby as the ultimate glue, java+GraalVM for when speed and efficiency is necessary, aka the post-glue or post-prototype stage). But ...
I much prefer being ABLE to be flexible and let the computer do as much as that is possible, when it is useful. So I kind of want dynamic components when they are useful and help me solve things in less time, with less effort (let's assume this is the case so).
Webframeworks got insanely complicated and complex and I find the whole stack very annoying. It also gets more and more like a huge cathedral where more and more layers are put onto older layers. But this is more a problem with the framework getting too complicated, and the world wide web also becoming more annoying (plus we have de-facto monopolies such as Google controlling a huge segment of the information flow, change, web-standards - just see their evil battle against ublock origin and other hero-blockers). With a static website I would be heavily tied down to the browser-ecosystem. In ruby I can use sinatra, which, while I don't think is very elegant, is simple, and with DSL sugar (even though some DSLs are also too complex, IMO, such as rails), I can treat a code base as "write once, run everywhere". Sort of.
So, ruby is more elegant and more efficient (writing time, that is; I am not referring to execution time; I am much faster and more productive in Ruby than in JavaScript, and while I have more experience in ruby, so there is a bias, ruby is simply the superior language compared to JavaScript), hence it makes sense to tap into its features, in order to write applications that should work in a GUI setting, in a browser setting, ideally also the same code base on the terminal. See projects such as glimmer: https://github.com/AndyObtiva/glimmer (I am not necessarily saying that particular DSL is the one to use; I am saying that a DSL most definitely helps abstract away things one may not want to handle).
To me the blog seems to focus more on the negative parts of complicated frameworks, but dynamic features are still - or can be - more useful in non-static websites. You can do a lot with static content, and I use that for e. g. turning files into valid, pretty markdown files for instant display on a website - but dynamic elements still seem ultimately more useful to me than static ones. I can always autogenerate .html - for instance, in my own toy webframework, the method .to_html will generate a valid, standalone .html file. I don't have to think about that because it is autogenerated. I would not ever want to maintain a static website directly, when instead the computer can autogenerate all of that (and, correspondingly, it also has a .to_pdf method and so forth). Similar with the example given above in regards to glimmer: a button remains the same, be it in SDL, for a webpage or for a traditional GUI such as gtk. It should always respond to:
.on_clicked {
}
That's the abstraction my brain can most easily handle, e. g. button.on_clicked { invoke_this_method_called_cat_eats_mouse() } # or something like that
I have a static site now. I started with a faux static site, made my own pipeline to convert markdown to HTML, then had to start adding meta blocks to set HTML title/description/other headers, and then was like 🤦, I'm just rebuilding Jekyll.
So I dumped that, didn't want to deal with it, switched to wordpress, and it was fine but everything felt really slow to me and I was having caching problems.
So I dumped that and now my site is a really really basic LAMP page. Front end is html and js. I use vanjs for lightweight "react" components where I feel it's appropriate. I do have an API, I use js to hit the API to populate my vanjs components, but the php I'm running is so simple a monkey could work with it. No framework. And it has been a delight to work with.
Extra bonus, somehow bots don't understand how to use the comment form I made, but humans can, so I haven't been getting any spam.
I have been using cloudflare pages to great success in this way. Though I only have a modest number of static sites built for it. See https://www.globalsites.ai/showcase/ for a list if interested.
PersianMG@reddit
Static sites are great: lightweight, fast, and cheap... but also ugly and not interactive. I'll always prefer a nicer looking site with a bit of JS interactivity, if applicable, over a purely static website.
Maybe it's just me, but I don't particularly enjoy browsing sites that look like they were made 50 years ago. It's fine for simple stuff, and in OP's case it's more than fine, but anything more complicated and it falls short quickly.
AlexKazumi@reddit
Ugly? This site launched back in 2003 :) https://csszengarden.com/
NostraDavid@reddit
It did look a little different: http://web.archive.org/web/20031001180317/https://csszengarden.com/
TimoJarv@reddit
Being ugly has nothing to do with the site being static. Static sites can utilize CSS all the same. And the same goes for JS interactivity. OP’s site is interactive, even though it is a static site.
AdeptFelix@reddit
Obligatory https://motherfuckingwebsite.com/
PurpleYoshiEgg@reddit
Also obligatory: http://bettermotherfuckingwebsite.com
NostraDavid@reddit
Also obligatory: https://evenbettermotherfucking.website/
igot2pair@reddit
How exactly does the site reload when the JSON data file is refreshed? And how does the integration with the JSON file populated by python work, how is the file sent to the static site?
EducationalBridge307@reddit
On a cron job, a GitHub action rebuilds the JSON file and commits it, which triggers the Cloudflare Pages deploy action, and deploys a new version of the site with the newest JSON.
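A rough sketch of what such a scheduled job might do (the script and file names here are my assumptions, not from the thread):

```shell
# Hypothetical refresh job; fetch_data.py and data.json are placeholder names.
set -eu

# 1. Rebuild the data file (a stand-in for the real Python fetcher).
printf '{"updated": "%s", "items": []}\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > data.json

# 2. Commit only when the contents actually changed, so a no-op run
#    doesn't trigger a pointless Cloudflare Pages deploy:
# git add data.json
# if ! git diff --cached --quiet; then
#   git commit -m "chore: refresh data" && git push
# fi
```

The commit-and-push is what kicks off the Pages build, so the whole "backend" is a cron schedule plus the hosting provider's deploy hook.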
cheezballs@reddit
This... this is just server-side processing, right? How is this any different than a plain old JSP or whatever? You're just offloading the processing to the backend rather than making multiple tiny calls from the front end?
rsclient@reddit
No, because "server side rendering" conceptually generates the page for every request. Each user might get their own unique page based on their IP address, or cookies, or stored per-user information.
This is creating files once and just sending those, unchanged, to every request, while updating the files once an hour.
Classic-Try2484@reddit
Yes, React in this case would be an order of magnitude worse (because OP would still require the meta refresh)
cheezballs@reddit
... A unique page based on their cookie and session... Sounds like server side rendering to me. I get that if the page doesn't change it'd be cached, but what kind of page doesn't change with each request?
bundt_chi@reddit
We've come full circle. This is server side rendering with a long cache TTL.
lunchmeat317@reddit
Dude, we've at least done a full 1080 in web stuff over the years. It'll never really change.
Inevitable-Plan-7604@reddit
Believe it or not, the world before react came along is now "olden times". There will be now-senior devs out there to whom server side rendering is a revelation.
tes_kitty@reddit
That's what we have been doing at work for a long time. Some monitoring scripts run by cron in the background generate a few .html files, copy them to the web server root after generation, and that's it. The HTML contains the old
<meta http-equiv="refresh" content="60">
tag so the browser automatically refreshes the page every minute. Simple, works.
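A minimal sketch of such a cron job (the df example and file names are assumptions, not the actual scripts):

```shell
set -eu
OUT=status.html   # in the real setup this would land in the web server root

{
  echo '<!DOCTYPE html><html><head>'
  echo '<meta http-equiv="refresh" content="60">'
  echo '<title>Disk usage</title></head><body><pre>'
  df -h | awk 'NR==1 || $NF=="/"'   # header line plus the root filesystem
  echo '</pre></body></html>'
} > "$OUT"
```

Run it from crontab every minute or hour and the web server never does anything but serve a static file.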
FamiliarSoftware@reddit
God I hate pages that work this way. They completely fuck up text selection and probably most accessibility tools such as screen readers. Not to mention they constantly make the tab bar flicker, adding a bunch of visual noise to my screen.
Out of all the ways of getting a dynamic website, I consider this to be the only actively user hostile one and I am so glad Firefox can block that shit.
tes_kitty@reddit
I need that feature for my application. If you don't need periodic refresh, don't include that tag.
Classic-Try2484@reddit
No, use Ajax instead, and check your query load. It would be better to add a sign telling the user he needs to refresh than auto-refreshing whether it's needed or not. Unless you are serving weather data. And even then it's better to just refresh the div, not the whole page.
tes_kitty@reddit
The HTML for the whole page is not even 10 KB. It's display only, so no links on it and there is no reason to copy text from it since you can get the same data from a logfile as CSV.
There is no reason to get more complicated.
coloredgreyscale@reddit
That refresh feature was added in 1995. Long before js frameworks got popular / commonplace
Classic-Try2484@reddit
God no, don’t do this. You eat the user's bandwidth. And you serve useless pages if they leave for lunch and come back Monday. Making it not cache is enough. Let the user hit refresh.
tes_kitty@reddit
The application where I use it needs it. Other pages that do not need a refresh of course don't use an auto refresh.
CherryLongjump1989@reddit
The refresh option is for a forced reload. It bypasses the browser cache and downloads everything over and over again whether you need it or not. There is no “normal” reload that you can trigger with a header, but you can add a script tag with a line of JavaScript in the HTML to call location.reload() after a minute. That’s a lot better and won’t waste your users’ bandwidth.
If you’re serving your HTML from something simple like an S3 bucket, it will automatically hash your file contents and create an ETag. This will cause your location.reload() to only re-download the website if something actually changed.
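The JavaScript alternative could look something like this fragment, dropped into the generated page (the one-minute interval is just an example):

```shell
# Write the reload snippet that replaces the meta refresh tag.
cat > reload-snippet.html <<'EOF'
<script>
  // A scripted reload goes through normal HTTP caching (ETag /
  // If-None-Match), so an unchanged page is not re-downloaded in full.
  setTimeout(function () { location.reload(); }, 60000);
</script>
EOF
```

The generator script can simply concatenate this fragment into the page body instead of emitting the meta tag.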
tes_kitty@reddit
Unfortunately, Edge seems to stop that in tabs that haven't been displayed in a while. I'll look into the JavaScript way.
Entmaan@reddit
does this mean that it refreshes automatically for users? So a guy is browsing your site and then out of a sudden it gets refreshed on his end? Probably less than ideal I'd say
tes_kitty@reddit
This is for a page that displays information that changes over time. I want to be able to leave the tab open for hours and still have the current page displayed automatically.
For other applications where this is not needed, don't include that tag and you get a fully static page without automatic reloads.
coloredgreyscale@reddit
If it makes no sense to refresh the page periodically you could just not include that tag
shevy-java@reddit
Ok. But in your example, cron kind of meta-generates files; though probably also from some code that resides in other files. So you have a dynamic controller, right? Or do you handle the static .html pages directly?
tes_kitty@reddit
The scripts are called by cron, grab data, convert it to an HTML file, and copy that file, once generated, to the web server root. So the files are only generated once per period, and the web server itself only serves static HTML files no matter how many people ask for them, while many sites generate the HTML fresh for every request.
The example in the article does about the same, just doesn't recreate the page as often as my setup does it.
The refresh tag is so that I can leave the browser tab open and always get the latest version of the page since those pages are for monitoring things that change slowly, like file system usage.
There are a lot of applications that don't need an automatic page refresh, this was just an extra I needed.
Ytrog@reddit
What do you use to generate the files?
tes_kitty@reddit
bash scripts. It's all text processing so sed, awk, cut and all the other usual suspects make that easy.
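As a toy example of that kind of text processing (the "name value" input format is made up), awk alone gets you surprisingly far:

```shell
# Turn "name value" pairs into HTML table rows.
printf 'cpu 42\nmem 73\n' |
  awk '{ printf "<tr><td>%s</td><td>%s%%</td></tr>\n", $1, $2 }' > rows.html
```

Wrap the rows in a static header and footer and the page is done.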
barrows_arctic@reddit
This exact setup is what I do for some at-home self-hosted stuff that I run for my wife to conveniently access some local services. The cron-bash combo is so much simpler, leaner, and easier to make quick-and-dirty changes than the other options.
That said, it's also plainly obvious that it wouldn't scale all that well. But my 'scale' will always be just 'home', so I don't need to care.
tes_kitty@reddit
Since pushing a static html file uses a lot less CPU than creating that same HTML dynamically for every incoming request, it can scale pretty well. It depends a bit on how complex that page creation is. If your scripts need 10 mins to create the page, it might be a good idea to look into a faster way to do it.
But whatever method you use, if it works for you, it's all good.
barrows_arctic@reddit
I actually meant the scale of managing the code. The CPU and network resources are indeed minuscule.
tes_kitty@reddit
Well, you can split up a large bash script into multiple ones that get called and all provide a part to the result and you can use functions/procedures inside a script. That does help with managing things.
Or you can not do that and end up with a script that grew slowly, still does what it's supposed to do but is now a monster. If you have the latter, I hope you remembered to comment your code... :)
unpaid_official@reddit
my first gig in the industry was at a saas company that used a similar setup for their internal infrastructure. it started from bash scripts and HTML, and grew into a 250,000-line monstrosity that was mission critical to the org. i have fond memories of it because of how much i learned while refactoring it over 2 years, but eventually the org moved to kubernetes instead.
Anyway, if you ever need to scale beyond bash scripts, i recommend transitioning to PHP scripts. simple setup, you can avoid OOP if you prefer, and it's a bit easier to maintain and test. plus it makes it really easy to switch between coding scripts and coding what gets displayed on the webpage.
tes_kitty@reddit
I'd probably use Perl, I mean it's the language meant for things like this.
falconfetus8@reddit
You needed an AI to tell you that you can send HTML files over the internet?
SnooPaintings8639@reddit
Isn't this what AI is for? Digging deep to find what is already well known, but that we personally have no experience with?
AdeptFelix@reddit
Considering how it has accuracy issues? No, it's a terrible use for AI. AI is fine for a draft to be reviewed.
Classic-Try2484@reddit
You do have to know it’s well known and then verify the answer but then this works great
modernkennnern@reddit
I think documentation is probably AI's biggest strength. It's the definition of "knowledge vs wisdom"
AdeptFelix@reddit
I don't think AI (in terms of LLMs) is capable of knowledge or wisdom. I think it is capable of statistically likely combinations of words that correlate strongly with a given input relative to its training data with a sprinkling of RNG.
modernkennnern@reddit
Try googling for documentation on the various commands in Vim and do the same exact thing in ChatGPT. I can almost guarantee you that ChatGPT can parse the near-unreadable Vim documentation better than you (or at least me) can. Add in some context, "how do I use this command in conjunction with this other command" and an LLM is the only possible place to even ask the question.
Personally, documentation is the only place I still use LLMs. I've stopped using it for practically everything else; it's way too unreliable for most things.
PurpleYoshiEgg@reddit
What's unreadable about vim's documentation? I find it goes into quite enough detail in most cases that a web search could not actually find a solution, but vim's help shed a lot of light.
TargetIcy1318@reddit
Vim's documentation is excellent. This guy is on one.
AdeptFelix@reddit
I wouldn't trust it to present me new information, regardless of if it could do it more quickly than I could. I wouldn't be in a position to know if it's telling me accurate information or not and I'd question anything it presents. I don't like the uncertainty I feel from it.
I don't blame people for using it to get quick results, but I do hope they know that they run the risk of it shitting the bed every so often. It's like having a calculator, except one in every dozen calculations is wrong.
jcouch210@reddit
The first thing AI got good at was summarizing. AFAIK GPT 2.0 could do it very reliably, and with clearer, more concise results than most humans. Newer algorithms are (speculation) even better at this, although off the shelf models may be worse as they could be weighed down by all of the superfluous information they have trained on.
As documenting code is very often equivalent to summarizing its contents, you can have LLMs do it pretty reliably for those cases.
modernkennnern@reddit
The thing is that - at least when it comes to Vim keybindings - you'll figure out incredibly quickly whether it's true or not. "Oh, the binding does indeed not exist."
AdeptFelix@reddit
Then that's when you have to either roll the gamble again or end up in the documentation yourself anyway. For a low cost gamble, that's fine, I guess. I have apprehensions from seeing too many people rely on it.
Asqit@reddit
My exact question…
Uberhipster@reddit
your thing works better like this
cupcake_thot@reddit
nigga ur dumb
apf6@reddit
Jamstack is back in style!
smika@reddit
2012 just called. They want their blog post back.
Glittering-Can-9397@reddit
I'm tired of AI-generated posts like this. Can anyone tell me how to get rid of garbage like this? I already downvote, choose "show fewer posts like this", hide, and block every single one I see.
Dwedit@reddit
Static sites exclude Forums and other user interaction.
Reverent@reddit
I'm not sure what the point of the article is. "KISS"?
Who's the audience? People who can publish a static site already know the benefits of a static site. People who don't publish a static site usually do so because they don't have the technical capability to do so or are utilising a service where it isn't an option.
I can publish a "hello world" html file and it'll be the bloody fastest site to ever exist. It's also not a very interesting or valuable blog subject.
mouse_8b@reddit
I think with a lot of the newer frontend technologies, like react, it's not obvious to newcomers that they could achieve similar results with static HTML.
Also, even for those of us with more experience, JavaScript and interactive websites are so normal that it's harder to design a static site.
I've got a single page web app that I was very proud to make static, but it requires a couple of server-side tricks that a lot of frontend-only people won't know.
barrows_arctic@reddit
It's an age-old problem in our industry: Shiny Red Ball Syndrome.
rsclient@reddit
The point of the article is that there are often very cheap ways to generate fast site content that's "free" to host.
The audience is everyone who wants to build a quasi-dynamic site. I've personally hosted static sites before, but didn't realize that the hosting possibilities had grown so much, so I certainly learned something.
The site in question is actually interesting, unlike your example.
SnooPaintings8639@reddit
I see some developers and PMs here, i.e. "build what makes sense with the least effort possible."
But I personally love what OP did - engineer the optimal thing. Not a product in a business sense, but a simple product that does what it should and nothing more.
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."
scratchisthebest@reddit
Interesting, but the complexity is comparable to "setting up a web server and putting it behind an aggressive CDN or Varnish or whatever", right?
Like, the author has Eleventy floating around in their pipeline, had to set up a github action to periodically refetch the data and automatically commit, &c. There's still complexity it's just different.
shevy-java@reddit
I don't object to simplicity being extremely useful, and there are many cases where simplicity - such as a static website - is necessary. For instance, on an elderly relative's computer, I can basically only use a browser, and then HTML, CSS and JavaScript, whereas here I use ruby and java in addition to that (ruby as the ultimate glue, java+GraalVM for when speed and efficiency is necessary, aka the post-glue or post-prototype stage). But ...
I much prefer being ABLE to be flexible and let the computer do as much as that is possible, when it is useful. So I kind of want dynamic components when they are useful and help me solve things in less time, with less effort (let's assume this is the case so).
Web frameworks have gotten insanely complicated and complex, and I find the whole stack very annoying. It also gets more and more like a huge cathedral where more and more layers are put onto older layers. But this is more a problem of the frameworks getting too complicated, and the world wide web also becoming more annoying (plus we have de-facto monopolies such as Google controlling a huge segment of the information flow, change, and web standards - just see their evil battle against ublock origin and other hero-blockers). With a static website I would be heavily tied down to the browser ecosystem. In ruby I can use sinatra, which, while I don't think it is very elegant, is simple, and with DSL sugar (even though some DSLs are also too complex, IMO, such as rails), I can treat a code base as "write once, run everywhere". Sort of.
So, ruby is more elegant and more efficient (in writing time, that is; I am not referring to execution time; I am much faster and more productive in Ruby than in JavaScript, and while I have more experience in ruby, so there is a bias, ruby is simply the superior language compared to JavaScript), hence it makes sense to tap into its features in order to write applications that should work in a GUI setting, in a browser setting, and ideally also, with the same code base, on the terminal. See projects such as glimmer: https://github.com/AndyObtiva/glimmer (I am not necessarily saying that particular DSL is the one to use; I am saying that a DSL most definitely helps abstract away things one may not want to handle).
To me the blog seems to focus more on the negative parts of complicated frameworks, but dynamic features are still - or can be - more useful than static ones. You can do a lot with static content, and I use that for e.g. turning files into valid, pretty markdown files for instant display on a website - but dynamic elements still seem ultimately more useful to me. I can always autogenerate .html - for instance, in my own toy web framework, the method .to_html will generate a valid, standalone .html file. I don't have to think about that because it is autogenerated. I would not ever want to maintain a static website directly when the computer can autogenerate all of that instead (and, correspondingly, it also has a .to_pdf method and so forth). Similar with the example given above in regards to glimmer: a button remains the same, be it in SDL, for a webpage, or for a traditional GUI such as gtk. It should always respond to:
That's the abstraction my brain can most easily handle, e.g. `button.on_clicked { invoke_this_method_called_cat_eats_mouse() }` # or something like that
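As a rough illustration of that backend-agnostic idea (in Python rather than ruby, with invented names - this is not glimmer's actual API): one button object, several renderers, one handler registration.

```python
# Sketch of a backend-agnostic button: the widget holds its state and
# handlers, and each backend (HTML here) only decides how to render it.

class Button:
    def __init__(self, label):
        self.label = label
        self._handlers = []

    def on_clicked(self, handler):
        """Register a click handler; returns self for chaining."""
        self._handlers.append(handler)
        return self

    def click(self):
        # Invoked by whichever backend hosts the widget (GUI, web, terminal).
        for handler in self._handlers:
            handler()

    def to_html(self):
        # Browser backend; a gtk or SDL backend would render the same object.
        return f"<button>{self.label}</button>"

clicks = []
button = Button("Cat eats mouse")
button.on_clicked(lambda: clicks.append("cat_eats_mouse"))
button.click()
print(button.to_html())  # → <button>Cat eats mouse</button>
```

The point is the same as the comment's: the abstraction (label, click handler) stays constant while the rendering target varies.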
Perfect-Campaign9551@reddit
Static website ties you down to browsers? Bro this isn't the late 90s
appsolutelywonderful@reddit
I have a static site now. I started with a faux static site, made my own pipeline to convert markdown to html, then had to start adding meta blocks to set html title/description/other headers, and then was like 🤦, I'm just rebuilding jekyll.
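The shape of such a homegrown pipeline might look something like the sketch below: a `---`-delimited meta block parsed out of the source, then the body wrapped in an HTML shell. The names and the (very naive) markdown handling are illustrative only; this is exactly the kind of thing Jekyll already does properly.

```python
# Minimal faux-static-site pipeline: front-matter meta block -> HTML headers.
import re

def render_page(source: str) -> str:
    """Split an optional '---'-delimited meta block from the body and render HTML."""
    meta = {}
    body = source
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", source, re.DOTALL)
    if m:
        for line in m.group(1).splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
        body = m.group(2)
    # Naive "markdown": each blank-line-separated chunk becomes a <p>.
    paragraphs = "\n".join(
        f"<p>{chunk.strip()}</p>" for chunk in body.split("\n\n") if chunk.strip()
    )
    return (
        "<!doctype html><html><head>"
        f"<title>{meta.get('title', 'Untitled')}</title>"
        f'<meta name="description" content="{meta.get("description", "")}">'
        f"</head><body>{paragraphs}</body></html>"
    )

page = render_page(
    "---\ntitle: Hello\ndescription: A test page\n---\n\nFirst paragraph.\n\nSecond one."
)
print(page)
```

Each new header you want to set grows the meta block a little more, which is how you end up reinventing a static site generator one field at a time.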
So I dumped that, didn't want to deal with it, switched to wordpress, and it was fine but everything felt really slow to me and I was having caching problems.
So I dumped that and now my site is a really really basic LAMP page. Front end is html and js. I use vanjs for lightweight "react" components where I feel it's appropriate. I do have an API, I use js to hit the API to populate my vanjs components, but the php I'm running is so simple a monkey could work with it. No framework. And it has been a delight to work with.
Extra bonus, somehow bots don't understand how to use the comment form I made, but humans can, so I haven't been getting any spam.
cptrootbeer@reddit
I have been using Cloudflare Pages with great success in this way, though I only have a modest number of static sites built on it. See https://www.globalsites.ai/showcase/ for a list if interested.