Problematic Second: How the leap second, occurring only 27 times in history, has caused significant issues for technology and science.
Posted by sarvendev@reddit | programming | View on Reddit | 159 comments
Kered13@reddit
Leap seconds are a good idea. The problem is that Unix time includes leap seconds. In theory this is to simplify time math: one day is always 60×60×24 = 86,400 "seconds" in Unix time. In reality it makes the math worse, because some of those "seconds" are 2 seconds long, and some are 0 seconds. Unix time should ignore leap seconds; it should simply be the number of real seconds since the Unix epoch. UTC should obviously incorporate leap seconds, and then to convert from Unix time to UTC or back you simply look up the net number of leap seconds.
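A minimal sketch of the lookup being described, assuming a cumulative leap-second table (the first two entries below are the real 1972 values; the full 27-entry list is published by the IERS and ships in tzdata's leap-seconds.list; function names are illustrative):

```python
import bisect

# (Unix timestamp at which the new offset takes effect, net leap seconds so far)
LEAPS = [
    (78_796_800, 1),    # after the 1972-06-30 23:59:60 leap second
    (94_694_400, 2),    # after the 1972-12-31 23:59:60 leap second
    # ... remaining entries elided ...
]

def net_leap_seconds(unix_ts: int) -> int:
    """Net leap seconds accumulated by a given Unix timestamp."""
    i = bisect.bisect_right([t for t, _ in LEAPS], unix_ts)
    return LEAPS[i - 1][1] if i > 0 else 0

def unix_to_real_seconds(unix_ts: int) -> int:
    """Unix time -> actual SI seconds elapsed since the 1970 epoch."""
    return unix_ts + net_leap_seconds(unix_ts)
```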
non-serious-thing@reddit
For future reference: this comment is WRONG, Unix time DOES NOT include leap seconds.
Unix time is a raw numerical representation of time, not a human-readable date-and-time format.
Kered13@reddit
Unix time does use leap seconds. It is not the number of seconds since the epoch. It is the number of seconds since the epoch adjusted for leap seconds.
non-serious-thing@reddit
You are RIGHT!
UGMadness@reddit
I'm utterly baffled that Unix time isn't already this simple. Anyone reading its technical definition would deduce that it's simply a dumb counter of real time and nothing more, leaving the actual formatting to external APIs and libraries.
FatStoic@reddit
it's such a weird departure from the promise of unix time (number goes up 1 second per second forever, so ignore all timezone and leap year tomfoolery) that I can only conclude that the original engineers must not have considered leap seconds until systems were already in production that depended on 60×60×24 seconds being a whole day, and by that point it was too much work to change
HacDMac@reddit
Who the heck from back then could imagine we’d still be using UNIX in 2024?!
NotSoButFarOtherwise@reddit
To be fair before like 1982 the idea that anyone would use Unix for anything other than teaching, experimentation, or low priority workloads probably seemed crazy. Same with Linux ten years later, really.
And I’m not even sure it was the wrong decision at the time (if it were even a decision anyone made). The complicated logic for handling leap seconds has to live somewhere, and having the OS handle it probably seemed like a better idea than expecting all user code to do so.
TheGoodOldCoder@reddit
I don't think we should change standards to make up for shittily-written software.
OnlyForF1@reddit
The promise of Unix time at the time would have been mathematical simplicity though. It took decades for genuinely useful timekeeping libraries to become widely available.
StoicWeasle@reddit
Unix time should have been this simple. It should be TAI + some well-defined offset. But POSIX.1 fucking destroyed that when they linked Unix time with, you guessed it, UTC.
At that point, conforming systems had to experience a discontinuity in Unix time, b/c some fucking asshole who didn’t understand any of this (or didn’t bother to think it through) decided: “Oh, UTC sounds fancy. Let’s use that!”
zokier@reddit
Basing UNIX time on UTC is not a problem (afterall UTC ticks at SI seconds same as TAI). The problem is that UNIX time requires every day to be 86400 counts which breaks everything.
StoicWeasle@reddit
UNIX and TAI require every day to be 86,400 seconds. That's the correct thing to do, and the only sensible thing to do, today.
UTC is not even a continuous timescale. You understand the problem with discontinuous timescales, right? I mean, do you understand this issue, before asserting yourself?
As in, actually understand?
zokier@reddit
In what sense is UTC not continuous? Do you actually understand what UTC is?
jorge1209@reddit
Having 86400 seconds in a day is correct. What needs to be adjusted is the length of the second to correct for deviations.
Programs that query time generally need at most two of the following three features:
- seconds of precise, uniform length
- alignment with the official civil calendar
- a continuous, monotonically increasing count
Excepting astronomers correcting historic datasets, nobody needs all three.
Unix time, being the measure of time progression on systems with relatively imprecise clocks and long uptimes, does the correct thing and sacrifices precision in the length of the second to achieve the best overall result.
If you need high precision on a unix system you need to be using a real time kernel and asking for a different clock like a monotonic clock.
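For illustration, here is how that distinction looks with the standard clock APIs; a minimal sketch in Python (the sleep is a stand-in for real work):

```python
import time

start = time.monotonic()     # monotonic: immune to NTP steps and leap seconds,
time.sleep(0.1)              # only meaningful for measuring intervals
elapsed = time.monotonic() - start

wall = time.time()           # Unix time: answers "what time is it",
                             # but can jump forward or backward under you
print(f"elapsed: {elapsed:.3f} s, wall clock: {wall:.0f}")
```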
Kered13@reddit
The second is an SI unit of time with an exact definition. It should never be anything else.
jorge1209@reddit
That's just idiotic. When you bake a cake, do you worry about how well calibrated your units are to SI standards? Of course not; that would be unproductive.
Besides the "second" as a concept predates the SI standard by hundreds of years. Unix never claimed that their second was an SI second, and it's perfectly reasonable and natural for it not to be an SI second.
empire314@reddit
They are not. They are a complete atrocity.
Kered13@reddit
It is perfectly reasonable and useful to keep clocks roughly synchronized with solar time. And this scheme wouldn't cause any problems as long as you had a parallel system to simply and uniquely identify instants in time. Like, for example, measuring the number of real seconds since January 1, 1970. As long as no one fucks that second system up, leap seconds will not cause any real issues.
empire314@reddit
No it's not. People haven't used solar time for 100 years, and when we did, seconds did not matter.
Making future dates indeterministic through essentially random minor fluctuations in orbit is utter insanity.
zokier@reddit
While I don't love UTC with its leap seconds, it is useful to recognize the history here. While some hubris was undoubtedly involved, UTC originated from naval observatories whose primary concern was having a timescale for navigation purposes (and indirectly other astronomical uses), and for that, tracking UT1 somewhat makes sense. The same timescale getting adopted as general civil time was more of a side-product.
StoicWeasle@reddit
We had TAI. UTC is a civil timekeeping abomination.
zokier@reddit
UTC predates TAI by a significant margin; indeed, UTC predates the redefinition of the second to become based on atomic clocks.
StoicWeasle@reddit
UTC was discontinuous from the start. Just not to a degree that civil timekeeping noticed.
So, it was already an abomination from its start.
empire314@reddit
How on earth are leap seconds relevant to anything, considering that solar noon fluctuates 16 minutes back and forth every year, due to Earth's eccentric orbit making half of solar days longer and half shorter? This is on top of summer time breaking the time by 1 hour, of course.
mccoyn@reddit
Universal coordinated time is a good idea. But, a significant portion of the population wouldn’t use it if it didn’t start in sync with the sun, for religious reasons. If we didn’t have leap seconds, we wouldn’t have universal time.
StoicWeasle@reddit
This attitude is why we have stupid solutions to complex problems. At any moment in time, there is only one spot where the "sun is overhead". On the edges of time zones, the sun is definitely not overhead.
Plus, have you even LOOKED at a timezone map? Time zones are fucking political. No one actually gives a single rat’s ass about the position of the sun.
This is the worst argument ever for UT1.
mccoyn@reddit
1.9 billion Muslims care about the position of the sun. Muslim countries won't adopt a system that isn't kept in sync with the sun.
StoicWeasle@reddit
Muslims, then, I suppose, can continue to live in their own little bubble that pretends like it's still the, IDK, 11th century.
Plus, I hate to break it to you, but Muslim timezones are political, as well, and no Muslim gives a shit to within 30 minutes of when the sun is directly overhead. If they did, whatever ridiculous thing depends on that would have to literally be moving across the earth at that speed. Hard to pray while you're running at earth's rotational speed.
Plus, the last time I gave a single shit about what religion thinks about international scientific standards was...wait...let me check my HP 5071A Cesium Primary Frequency Standard...NEVER.
edman007@reddit
No they wouldn't; they already use local time instead of solar time. Local time is typically +/- 30 minutes from solar time, and then we add an hour for DST. In many places, local time is off by many hours (see China).
If we waited until the impact from this was on the order of time zones, we would go many millennia between leap hours. And a leap hour would just be "starting today, we stay on DST", letting all the time zones shift an hour from UTC. Software has a much easier time dealing with DST-style changes.
wPatriot@reddit
Tell that to my chat client that disconnected me because the server was an hour late responding to its ping :P
StoicWeasle@reddit
Not today, it isn’t. It’s a fucking travesty. Astronomers can keep their own time, and choose their own timescale, and not give a shit about UT1. And civil timekeeping doesn’t need it at all.
The problem is that we have technology butting heads with social problems. And the social problems are decided by people who have absolutely no fucking clue about science or the real world or the horrors they inflict on those of us who keep the world spinning—like bullshit leap seconds.
MCRusher@reddit
keeping time is an atrocity. Leap seconds are a symptom of the imperfect solution.
StoicWeasle@reddit
No. Leap seconds are a terrible idea.
Fluid-Replacement-51@reddit
I think what you're proposing has just as many if not more problems. For one, it makes it impossible to know the future Unix timestamp corresponding to a future clock time. The best thing to do is abolish leap seconds. The only thing we won't be able to do then is perfectly correlate a future time with the position of the sun, which we can't do perfectly anyway, so no loss. If something needs to be tied to the angle of the sun, then this should be specified directly, not done by using clock time as a proxy for the angle of the sun.
edman007@reddit
Leap seconds are an absolutely terrible idea.
The only people who care about leap seconds are the people looking at the stars, and how much do they care that the clock lines up? I'd argue that leap hours are better. Does your religion/etc care that UTC is aligned with solar noon? You [probably] don't live on the exact longitude required for that alignment anyway; due to time zones, your local time is +/- 30 minutes from solar noon.
Now some people do care, they have telescopes, but many people with telescopes say 1 second is not enough, so they instead apply an offset to UTC to use their telescope.
I work in one of the few industries that cares a LOT about it: we need the solar time, down to the millisecond. So we always get the report with what that solar time is, and do the proper adjustments. Leap seconds cause problems not only in the software but also for the systems maintaining the time, because the guy with the telescope needs to switch from the "pre-leap-second solar report" to the "post-leap-second solar report", and they need to do it in the same second the clock implements it. Total pain in the ass, and if we changed to leap hours, nothing in our process would change, other than doing this operation once every 5-10 millennia. And it could be implemented in the time zone database, by just shifting everyone's timezone.
Leap seconds are honestly an archaic thing, from before people had internet, when seconds were not important to anything a normal person cared about, and when astronomers couldn't get weekly reports reasonably easily. Today they cause problems for daily users, while also being insufficient for astronomers.
Nimrod5000@reddit
Sleep(1)
Problem solved
bikeridingmonkey@reddit
1 millisecond?
Nimrod5000@reddit
Snap haha sleep(1000)
Resident-Trouble-574@reddit
27 times... so far.
zed857@reddit
Things may really get interesting if we end up needing a negative leap second.
Repeating a second seems like it would cause more software issues than skipping one would.
beaurepair@reddit
A leap second neither skips nor repeats a second, it adds a new second (23:59:60).
A negative second would just skip 23:59:59.
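Concretely, here is how the end-of-2016 leap second maps onto POSIX timestamps; the mapping in the comments reflects standard POSIX behavior, and the assertion is runnable Python:

```python
# Around the end-of-2016 leap second, UTC labels map to POSIX timestamps
# like this (two distinct UTC seconds end up sharing one count):
#
#   2016-12-31 23:59:59 UTC  ->  1483228799
#   2016-12-31 23:59:60 UTC  ->  (no count of its own; clocks repeat or step back)
#   2017-01-01 00:00:00 UTC  ->  1483228800
from datetime import datetime, timezone

assert datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp() == 1_483_228_800
```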
buldozr@reddit
Due to some highly technical and mostly historical reasons, the behavior of software clocks in the popular operating systems is such that the clock timestamp leaps back a second. So it's not possible for an application to distinguish between the positive leap second and the one preceding it from the standard time APIs.
Properly, the system ought to provide an interface that would give complete information about the current ISO time. But historically, it was not seen as a priority to address the discrepancy that has only occurred for 27 seconds over the last half-century.
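Linux does in fact expose a leap-second-aware clock, CLOCK_TAI, though it only reports the right offset once NTP has informed the kernel of it; a quick check (Linux only, Python 3.9+):

```python
import time

# On a system whose NTP daemon has set the kernel's TAI offset, CLOCK_TAI
# runs 37 s ahead of CLOCK_REALTIME (10 s initial TAI-UTC offset plus 27
# leap seconds); on an unconfigured system both may read the same.
tai = time.clock_gettime(time.CLOCK_TAI)
utc = time.clock_gettime(time.CLOCK_REALTIME)
print(f"TAI - UTC = {tai - utc:.0f} s")   # 37 on a properly configured system
```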
beaurepair@reddit
Thanks, I thought it was handled but it seems like most OSs just DGAF. Windows just ends up ahead by one second until it runs NTP synch, Ubuntu (and many other Linux flavours) will flick back to :59 etc.
So even less to worry about the "repeated" time as it already happens!
G_Morgan@reddit
Honestly the best way to write an OS is to ignore stupid rules and let NTP sort it all out.
buldozr@reddit
There are mitigation schemes where either the NTP servers (the whole network that a client talks to must agree to use the same smear) or the local time service implements a gradual smear over the leap second, so that localized clock drift is not significant. But for applications that need precise legal UTC time, this is not satisfactory.
beaurepair@reddit
Google uses time smearing for this reason.
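A rough sketch of a 24-hour linear smear of the kind Google documents (the constants and function are illustrative, not Google's implementation; input is a continuous count of real seconds that agrees with Unix time before the smear window):

```python
LEAP_TS = 1_483_228_800   # 2017-01-01 00:00:00 UTC, the end-of-2016 leap
SMEAR = 86_400            # smear window: 12 hours either side of the leap

def smeared(true_seconds: float) -> float:
    """Map true (leap-inclusive) seconds since the epoch to a smeared timestamp."""
    start = LEAP_TS - SMEAR // 2
    if true_seconds <= start:
        return true_seconds
    if true_seconds >= start + SMEAR + 1:
        return true_seconds - 1.0          # the extra second has been absorbed
    # inside the window, each displayed second lasts 86401/86400 true seconds,
    # so the clock never repeats a timestamp and never shows 23:59:60
    return start + (true_seconds - start) * SMEAR / (SMEAR + 1)
```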
bomphcheese@reddit
This is my thought too. Any OS could easily get out of sync by a second or two in either direction, so it should be fairly standard for it to handle jumping forward or backward in time by that amount. Let NTP worry about the leap second and let the OS treat it like any other sync discrepancy.
proverbialbunny@reddit
I know it's cliché, but that is a beautiful solution.
SanityInAnarchy@reddit
Leap-second smearing is the obvious solution. Solve the problem once at the OS level, then every other app just thinks it's running slightly slow for a bit.
nzodd@reddit
There may be situations where that's inadvisable. Just spitballing here but things like radiation therapy machines for example. Probably shouldn't be using consumer-oriented OSs anyway for those but the point stands that there are some applications where you simply can't allow that, so it's not really a one-size-fits-all solution.
jorge1209@reddit
Excepting astronomy, which has long had its own ways of dealing with this, the number of tasks that require accurate lengths of seconds over multi-year periods and alignment to official calendars is approximately zero.
Your radiation therapy example needs accurate lengths of seconds but doesn't care about alignment with the calendar and doesn't run for more than a few minutes.
Physics experiments at places like CERN are going to be sensitive to the length of a second, but aren't calendar aligned or anything like that.
SanityInAnarchy@reddit
Right, definitely shouldn't be using consumer-oriented OSes, at least not to directly drive the hardware -- either you need something with proper realtime capabilities, or you do it in firmware.
Conscious-Ball8373@reddit
The OS is irrelevant. You shouldn't be using wall time to do almost anything important. The problems with doing so are well-known; wall time can speed up, slow down, skip forward, skip backwards, repeat itself etc etc etc.
Whether you're using a "consumer-oriented" OS or not, they all provide monotonic clocks for these sorts of purposes.
nzodd@reddit
Yeah, admittedly it's a rather poor example
SanityInAnarchy@reddit
I don't think it is. I think this is going to be true of a lot of things that can't handle leap-smearing: if they're that sensitive to running perfectly realtime, either to being under a second out of sync with the rest of the world or to being slowed down or sped up by one second per hour (or day, or...), then a consumer OS is not for them.
Coffee_Ops@reddit
If you're relying on ntp synced time to track radiation dose you're doing it very wrong.
mok000@reddit
It already happens twice a year: in time zones different from UTC, there are hours that don't exist, or exist twice, due to summer time.
syklemil@reddit
Yeah, I can recall the change dates as being troublesome for oncall, lots of services that needed restarting. It hasn't been like that for a long while though.
These-Bedroom-5694@reddit
We could just not use leap seconds like a rational species.
gavinhoward@reddit
I guess I should post my idea for how to handle leap seconds: https://gavinhoward.com/2023/02/make-the-leap-second-first-class-an-open-letter-to-the-international-telecommunication-union/ .
zokier@reddit
Are you mixing up UNIX/POSIX time and UTC? UTC is monotonic and completely unambiguous, and not lossy in any way.
gavinhoward@reddit
UTC has leap seconds.
zokier@reddit
Yes, leap seconds are pretty much the defining characteristic of UTC. Leap seconds do not cause non-monotonicity, ambiguity, or lossiness.
gavinhoward@reddit
Sure, when you stay in UTC.
Once you need to convert UTC to another format, you do get ambiguity and loss.
zokier@reddit
UTC<->TAI (or GPS time) is unambiguous and lossless, and that is what people usually care about.
gavinhoward@reddit
TAI to UTC is lossless, yes. UTC to a DateTime is not.
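A concrete instance of that loss, using Python's datetime as the "DateTime" in question:

```python
from datetime import datetime, timezone

# The inserted leap second 2016-12-31 23:59:60 UTC simply cannot be stored:
datetime(2016, 12, 31, 23, 59, 59, tzinfo=timezone.utc)       # fine
try:
    datetime(2016, 12, 31, 23, 59, 60, tzinfo=timezone.utc)
except ValueError as e:
    print(e)   # "second must be in 0..59"
```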
zokier@reddit
What the heck is "DateTime"?!?? If it has issues with leap seconds then sounds like the problem is in "DateTime" and not UTC.
Trying to figure out what you are talking about is like squeezing water from stone. You throw statements like "UTC is lossy" as if they are self-evident and/or widely accepted, and when asked for clarification you shift the claim bit by bit ("UTC is lossy"->"Conversion to/from UTC is lossy"->"UTC to a DateTime is lossy"). Before jumping to solutions, it is essential to make clear what problem are you trying to solve; again, examples would go really long way here.
Right now it feels like my original question is still relevant, are you mixing up UNIX timestamps and UTC?
StoicWeasle@reddit
The leap second is inserted into the UTC timescale. It may be unambiguous, but it sure as shit isn’t predictable. UTC sucks.
I mean, sure, Unix time sucks more. But that’s a different ball of wax.
Laurexxxx@reddit
If leap second then time - second. Gucci straight to prod
double_chump@reddit
This was an interesting read. I always figured leap seconds were annoying but causing problems in the Linux kernel itself? Damn.
HildartheDorf@reddit
Yeah, I remember one time all the servers we had went to 100% CPU usage until we rebooted. Because of a leap second.
warbeforepeace@reddit
I got a call from a vendor several hours before a leap second telling me all their load balancers would reboot at the leap second. Nothing we could do about it and we had 100s of pairs. Super fun time especially with the ones that failed during the reboot and had to be recovered manually.
HildartheDorf@reddit
At least they rang you! I'm sure it would have been far worse if you got zero warning.
warbeforepeace@reddit
Not really. How do you prepare 1,000 LBs for an impending reboot?
Ancient-Ebb-669@reddit
How the hell did you diagnose that or are you taking the piss?
joshjje@reddit
Right? That's bonkers. Maybe they had very precise logs.
FunkYourself55@reddit
It's ok though I'm pretty sure you didn't dodge this bullet. And you definitely won't dodge the next one
FunkYourself55@reddit
How about they make the judgment for themselves instead of listening to someone who is clearly in denial defending someone who is batshit crazy? I gave them the recipe. They have the choice to follow their own instructions or to listen to someone else? I mean what do they have to lose by listening to me? That's how you know you are giving up your butthole for protection. I don't do that
HildartheDorf@reddit
We saw the load spike on monitoring. We had no idea what was wrong until the old "turn it off and on again" returned performance to normal.
There were a lot of write-ups on the internet in the following days describing the same problem we had.
postitnote@reddit
Those people in 2135 are going to curse us for pushing the problem down to them.
squigs@reddit
What will be the result of the change in practice?
It means the prime meridian will shift a few miles. Is this a problem in practice? I guess astronomers will need to make an adjustment, but that's always been part of astronomy. Are there any other areas where this will be an issue?
postitnote@reddit
The time would get more and more off in practice. They would need a way to correct the clocks to align with reality. This would probably be a one off large correction in 2135, and then maybe standardizing how they will handle having more accurate clocks. Maybe they will also push it off another 100 years, ha.
squigs@reddit
What do you mean by "reality" though?
The Greenwich meridian is an arbitrary line we can draw anywhere. Countries can change time zones, although in 100 years we'll probably only be out by a minute.
NotSoButFarOtherwise@reddit
Fun fact: it already did. If you take a GPS receiver to London, it shows 0°0'0" about 100 m away from where the meridian was drawn through Greenwich observatory.
The reason for this is that there's a local gravitational anomaly at Greenwich and a plumb line doesn't point exactly straight down (Earth's gravitational field is actually very irregular). As a result, the projection of the Greenwich meridian into space doesn't go quite straight up. When they were constructing the reference coordinate system to use with GPS, they had a choice: keep the position of 0º where it was on Earth's surface and use a new astronomical reference for 0º in space, or keep the same astronomical meridian but move its position on Earth. They chose the latter, which was, all things considered, the far better option. Most maps at a small enough scale that the difference matters are in projected coordinate systems anyway, which introduces its own error, and it meant that astronomical data, where that kind of discrepancy would make a difference, wouldn't have to be changed.
postitnote@reddit
They would need to develop a standard for, e.g., when it is 12 noon. Sure, maybe it's only a minute or two in 100 years, but it would just keep getting worse and worse. If human society survives another thousand years, it could be off by enough that they would want a solution at some point. Like I said, they could just put it off again for another 100 years, but then it would be up to the people in 2235 to figure out whether their few minutes of error is worth fixing, and the longer it is delayed, the worse the error gets.
zokier@reddit
For vast majority of people civil time is way more off from local solar time than the few minutes leap seconds cause. The time in China can be as much as three hours off. In Galicia, the westernmost region of mainland Spain, the difference between the official local time and the mean solar time is about two and a half hours during summer time.
Even in places with sane time zones the fact that time zones usually are at hour-level granularity means that the local time is almost certainly off from solar time by more than few minutes.
squigs@reddit
The standard will be the same as it is now. It will be based on the UTC time plus an offset.
In a few thousand years, perhaps the UK will switch to UTC-1 and central Europe to UTC+0 (or do I have that backwards?), but since time already depends on what country you're in, there's no reason to fix UTC.
syklemil@reddit
Lots of countries already have weird timezones seen from a meridian perspective, because it makes things easier when dealing with their neighbours. Between that and the existence of DST it's really hard to predict what will be the political result.
For all we know people could wind up switching to just having UTC clocks and live with noon being at very different timestamps around the world.
postitnote@reddit
I guess you would know better. What are the consequences of ignoring leap seconds? How would we reconcile time systems between ones that require extremely accurate time, and those that do not?
squigs@reddit
I don't necessarily know better. I might have it completely wrong.
But if I understand it, the really accurate time and clock time will be identical (if you stick with UTC). It's just there will always be exactly 31536000 seconds in a non-leap year rather than an occasional 31536001.
postitnote@reddit
But then how would you deal with time for things like satellites that depend entirely on the rotation of the earth rather than arbitrary ticks of a clock? You can't ignore leap seconds; you would have to incorporate them in some way so that your calculations make sense and there's no drift in where the satellite is above the earth. I imagine there are a lot of reasons why they wanted leap seconds in the first place, not just for some nerdy reason.
Mysterious_Worry_612@reddit
GPS systems already ignore leap seconds for positioning because it's easier that way: https://en.wikipedia.org/wiki/Global_Positioning_System#Timekeeping
So I guess it makes things easier? Or space stuff is already so hard it doesn't matter anymore by now?
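For illustration, the relationship the linked article describes is just a constant offset that changes only when a leap second is announced; a hypothetical helper (the 18 s value is the broadcast GPS-UTC offset since the end-of-2016 leap second):

```python
# GPS time has no leap seconds: it has ticked uniformly since its
# 1980-01-06 epoch, and receivers get the current GPS-UTC offset from
# the satellite navigation message.
GPS_UTC_OFFSET = 18  # seconds; grows by one at each positive leap second

def gps_to_utc(gps_seconds: float) -> float:
    """Convert a GPS-time second count to the corresponding UTC second count."""
    return gps_seconds - GPS_UTC_OFFSET
```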
squigs@reddit
Yeah, that's one I hadn't thought about. I honestly have no idea what's even involved here.
Syncopat3d@reddit
Society can adjust to gradual changes. Language evolves easily over a generational time scale. Perception of what times of days subjectively mean can, too. 2135 is many generations away.
Conscious-Ball8373@reddit
One arc-second at the equator is about 31 metres. It would take a long, long time for the meridian to move by a mile.
squigs@reddit
The Earth spins at 15 seconds of arc per second though. So that's about 400 metres. At Greenwich's latitude we're probably looking at around half that but that's 8 leap seconds per mile.
MCRusher@reddit
As well as those people in 292277026596. We should move to expandable cloud storage big integer years to solve this problem once and for all.
jecowa@reddit
In 1 billion years, the orbit of the Earth around the sun probably won't be as relevant as it is today. I wonder what timekeeping will be like when no one lives on Earth.
javasyntax@reddit
agreed, when the cloud storage dies the problem will disappear along with it!
very_mechanical@reddit
With any luck humanity will have done itself in by then or, at least, will no longer have use for computers.
Captain_Cowboy@reddit
I know this was just an aside for the article, but this is one of the silliest reproofs I've read on the Y2K problem (emphasis mine):
As even the lede admits, there was a real cost associated with those extra digits, too. You can admonish programmers to think past the near future, but it's likely many of the projects developed with that optimization didn't survive into the new millennium and wouldn't have benefited from the added cost. Among the programs that did live on, the developers may have reasonably expected the programs would not have such longevity, or expected that for those that would, either the bug wouldn't be a big deal or the software could be updated to cope. And I reckon in most cases, developers who made those decisions were correct.
This sort of consideration plays out all the time, with any sort of development. In engineering and project management, it isn't enough just to anticipate future needs, but to balance the cost of mitigation against their expected impact and risk. It's rather flippant to write it off as a "stupid mistake".
sarvendev@reddit (OP)
u/Captain_Cowboy You are right, my wording about this problem should be better, because to judge whether something was a stupid mistake we need to know the full context. So maybe it was a conscious decision, and the programmers considered that solution's limitations.
Helaasch@reddit
As an IBM i (AS/400) programmer, I maintain numerous programs that date back to the 90s. The standard date format was *DMY (01/01/(19)40 – 31/12/(20)39), which indicates the challenges I'll face in 2037 and 2038. It's certainly good for job security, I suppose.
In the early 2000s, they began using *CDMY (1900 to 2899), which deferred the problem to a future where the software is unlikely to be in use. However, working with the data is cumbersome (e.g., today's date is 1010724).
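For readers unfamiliar with the format, a sketch of decoding that *CDMY value, assuming the IBM i convention of a century digit followed by DDMMYY, with century 0 = 19xx (the function is illustrative):

```python
from datetime import date

def decode_cdmy(n: int) -> date:
    c, rest = divmod(n, 1_000_000)    # leading century digit: 0 = 19xx, 1 = 20xx, ...
    dd, rest = divmod(rest, 10_000)   # two-digit day
    mm, yy = divmod(rest, 100)        # two-digit month and year
    return date(1900 + 100 * c + yy, mm, dd)

print(decode_cdmy(1010724))   # 2024-07-01, matching "today's date is 1010724"
```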
wPatriot@reddit
What was the point of going for CDMY instead of just expanding Y to cover the four digits? Surely at that point it wasn't about saving one byte?
Helaasch@reddit
YYYY formats were for sure available.
I was a small child when these decisions were taken, so I can only guess at the reasoning. What I think is most likely is not that they tried to save disk, but rather tape. Tape was very expensive and the backup ran every night.
TheGoodOldCoder@reddit
Do you think they expected their software to last 10 years? They could have used one digit years. Or 16 years if they went for hex. Or 36 years if they used all letters, or 62 years if they used capital and lowercase.
Surely they didn't expect that software to last 36 years, and most of that software was still written less than 62 years ago. Why were these people in the past so extremely wasteful? Didn't they know there was a real cost associated with that second digit? They doubled the cost for no good reason, the fools.
AnnoyedVelociraptor@reddit
Frankly I prefer Google's smudging approach. Every second happens. They're just slower / faster.
lordlod@reddit
The leap smear is a different compromise, you avoid the discontinuity but time based measurements over the smear period are invalid. For example monitoring the rate of events per second gets weird when your seconds change in duration.
We also now have NTP servers that use the smearing method and servers that don't, these will obviously conflict during the smear interval. So you can't consume both a cloud provider NTP server and a public pool or hardware NTP server.
To me the inconsistency is the worst part. There's a handful of different time smearing/smoothing systems that have been used. We seem to be standardising on a 24 hour smear (initially selected by AWS), but any detailed historical data requires understanding if and which smear was used, Google alone has used three different smear systems.
Getting rid of leap seconds makes everything much simpler with virtually no negative impacts.
nibselfib_kyua_72@reddit
This fucking site dared to interrupt my music with a stupid ‘bot’ notification… ugh
dead_alchemy@reddit
Eeuuugh!
Push this leap second nonsense into the display layer and everything else can just handle real seconds.
Synth_Sapiens@reddit
Except, it caused about zero issues.
Fuck off.
69WaysToFuck@reddit
“I didn’t see it so it didn’t happen” type of guy, I see. You could at least open the article and check whether any issues are referenced. Spoiler: >!they are!<
Synth_Sapiens@reddit
Oh, yeah, just like the imaginary Y2K bug issues.
asphias@reddit
You mean all the issues solved by developers working hard to make sure nothing bad happened? Tell the programmers working overtime that their issues were imaginary
Coda17@reddit
To be fair, there were definitely some bugs from Y2K, but the possible effects of these bugs were way overblown.
Schmittfried@reddit
Like the effects of CFCs on the ozone layer were way overblown… no wait, they were simply alleviated through concerted effort.
booch@reddit
The possible effects were not overblown. The actual effects were just not as big/pervasive as the possible effects. The problem was that time had to be spent on each of the possible effects to confirm it was or wasn't an actual effect. Then the actual effects could be fixed.
Synth_Sapiens@reddit
"overtime"
lmao
Imagine being THAT dumb.
asphias@reddit
Half of your comment history is laughing at people, calling them idiots or calling everything bullshit.
Feeling that way about everything is not a healthy attitude. Nor are you likely to convince anyone else of your ideas.
If all you want to do is talk into the void about how everyone else must be wrong and foolish, well, you do you. But if you want to perhaps help us see things the way you see them, or maybe even learn something from others, I suggest you hold your laughter and try to actually ask some follow-up questions, or explain why you think something is bullshit.
I hope you can one day experience more positive interactions on the internet, best of luck.
Synth_Sapiens@reddit
Except I'm not feeling that way about everything.
But yes, there's a lot of bullshit going on, and idiots are simply incapable of understanding this.
Why would I want to do this?
Totally serious question.
I learn from others all the time, which is why I'm considered one of the best in my field, and this enables me to transition between fields as I please.
I absolutely do ask questions if I lack the knowledge.
Why would I want to spend many hours composing a profound and irrefutable article without being compensated?
If you checked my posting history more deeply, you would've noticed that once in a while I do have positive interactions.
Why not more?
Have you ever heard of the Pareto principle? The 80/20 one?
asphias@reddit
Why? Because you too can contribute to making the world a better place. Because people may actually be thankful if you give them good suggestions. Because you will be seen as kind of a dick or immature child with the way you're commenting.
And by posting in such a negative way, you'll get negative responses as well.
By being more constructive or polite, you can create a much more pleasant environment for yourself, and for others.
Synth_Sapiens@reddit
ROFLMAOAAAA
Go on. Show me one such programmer.
I'll wait.
Schmittfried@reddit
It’s causing issues to this day lol.
Synth_Sapiens@reddit
Evidence?
Schmittfried@reddit
https://www.bbc.com/news/articles/c9wz7pvvjypo.amp
booch@reddit
Spoken by someone who wasn't there and didn't put in the time to make sure the systems they were in charge of didn't have problems.
The Y2K bug had the potential to cause serious problems. It actually did have the potential to cause things like planes falling out of the sky.
Synth_Sapiens@reddit
lmao
You can't even imagine where I was and what I've done lol
Oh, and no, it didn't have the potential to cause planes falling out of the sky. Don't make shit up.
booch@reddit
It had the potential to cause anything that ran software to fail catastrophically. In order to get from "ran software" to "fail catastrophically", a number of other conditions needed to be met.
Planes do run on software; very complicated software. A Boeing 787 can suffer a complete loss of power (fall out of the sky) if it hasn't been rebooted within a long enough period. Were there any planes that had any code that could fail catastrophically due to the Y2K problem? I don't have any idea. But there could have been; and people had to spend the time finding out and, if necessary, fixing it. Or, alternatively, just risk it and hope for the best.
Saying it wasn't possible is just plain ignoring the facts. So either you don't know them, or you're ignoring them on purpose.
I was giving you the benefit of the doubt and assuming the first one.
Arts_Prodigy@reddit
Do you actually know anything about technology or are you larping?
Maybe you just deserve a Dunning-Kruger award??
Synth_Sapiens@reddit
Well, in light of the fact that to this day no one has been able to provide even a shred of evidence that these are real issues, apparently I'm OK.
Arts_Prodigy@reddit
This is just the strangest take. If you've done anything significant with a computer and seen what happens when the date is out of sync without a proper NTP connection, then it's obvious what the potential issues are.
Synth_Sapiens@reddit
The potential issues that are caused by programmers practicing subpar solutions because they are being pressured by clueless managers.
Yeah. The same as any other potential issues.
69WaysToFuck@reddit
Even after I told you to look up the issues referenced in the article, you keep your completely wrong opinion…
Synth_Sapiens@reddit
References?
No. Bullshit invented by a semiliterate journalist isn't "references".
Arts_Prodigy@reddit
Also, have you just not been paying attention? The leap day just a few months ago caused outages at large companies.
Synth_Sapiens@reddit
lmao
No. It did not.
Arts_Prodigy@reddit
https://codeofmatt.com/list-of-2024-leap-day-bugs/amp/
Seems boring to be intentionally misinformed but do you, I guess.
Synth_Sapiens@reddit
Read up to "Sophos, a cybersecurity software vendor, issued an advisory that its products Sophos Endpoint, Sophos Server, and Sophos Home may experience an issue related to SSL certificates if the software is booted on February 29th."
So bullshit worthless coders wrote bullshit worthless bugged code even though they knew about this issue years in advance.
LMAO
dusktrail@reddit
That's the point. Bugs occur.
Synth_Sapiens@reddit
So problems are caused by bugs, not by leap seconds.
PaintItPurple@reddit
This is just "guns don't kill people" sophistry but for date-based technical issues.
Synth_Sapiens@reddit
It's not a sophistry - it's a fact.
But the so-called people really hate to take responsibility, so they shift blame to objects that are devoid of agency.
dusktrail@reddit
No, you fool, it's just root cause analysis and not blame.
Synth_Sapiens@reddit
Oh. So now if person A shoots person B, the root cause is the gun.
ROFL
dusktrail@reddit
The gun is absolutely part of the root cause analysis chain for why somebody got shot with a gun and died. And if you don't get that you are not going to do well in life
Synth_Sapiens@reddit
So if there was no gun and the person was stabbed then a knife would've been the root cause.
A gun is a material object, and under normal circumstances it doesn't have enough potential energy to cause anything, unlike, say, an asteroid, which can cause a large heat outburst by merely falling.
dusktrail@reddit
Yes. Everything involved in an incident is part of the root cause analysis. This is not hard to understand.
PaintItPurple@reddit
Agency isn't necessary for causation or instrumentality. You seem to be reading in moral blame where people are just saying "A led to B."
dusktrail@reddit
It's a bug caused by somebody not taking leap seconds into account
All bugs are like this
Synth_Sapiens@reddit
Yep.
Same as bugs that would be caused by, say, not taking different months length into account.
Nothing to do with the phenomena per se.
dusktrail@reddit
No, it has everything to do with the phenomena.
A bug caused by not taking different months into account would be caught immediately.
Leap seconds are an aspect of time tracking many people are not aware of and which cause problems when people are not aware of them.
If you don't get this, you are not going to do well in life.
Synth_Sapiens@reddit
Funny how you are trying to shift blame from poorly educated individuals to physical phenomena.
dusktrail@reddit
No, I'm actually just not talking about blame
MarsupialMisanthrope@reddit
Leap seconds don’t happen on a defined schedule. It’s impossible to account for “In 4 years, a bunch of nerds in another country will decide we need to add a second at the end of June”. It’s like timezone changes. Real world things change in ways that can’t be predicted in software because they’re related to the political process, which is a cluster involving egos on a large scale.
ThreeLeggedChimp@reddit
Why not create a universal time and a local time? With a conversion for both.
That would also help keep track of time outside earth.
frud@reddit
https://xkcd.com/927/
buldozr@reddit
Check out TAI