seanluke@reddit
Interesting that this choice of article marked you as young. There's a definite age line, below which people would say "an SQL command", and above which people of my age would say "a SQL command". Because SQL is not an acronym: it is a backronym.
The language was originally called SEQUEL until IBM's lawyers got wind of another language already called SEQUEL and the threat of a lawsuit. So IBM solved this by removing the vowels, forming SQL, which was still pronounced "sequel". Now they had to come up with a meaning for those letters, so they made up "Structured Query Language". But everyone just called it "sequel" until, at some point many years later, young whippersnappers had forgotten this story and assumed SQL was an actual acronym.
QCD-uctdsb@reddit
Where on the chaotic good alignment chart do I fall if I pronounce it as Squirrel?
Practical_Cell_8302@reddit
Mine was: all of a sudden a client could not make HTTPS API requests to us anymore. All were failing. As it was crucial to their company, we switched to HTTP and the service went live. OK, nice workaround while they switched hosting location. The same client a week later asked why he was getting some bogus data from us. Imagine username: Adam becoming username: Agam (I don't remember the true examples anymore).
It was weird as hell, with seemingly no connection whatsoever between the two problems. I still don't know what made me check the bits making up the letters, but then I saw it: the second bit was being flipped, 1 became 0 and 0 became 1 in those cases. That made the letters change, and it also explained the broken HTTPS, where a change of a single letter makes the whole payload invalid, of course. We found the issue in one hop while inspecting the network, and it was also during higher solar activity. So the freaking sun was actually the bug. I was amazed and wanted to talk with the datacentre about it, but the client managed to migrate, so we never got complete resolution.
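For illustration, a minimal Python sketch of the kind of check that reveals this: XOR the expected and received bytes and report which bit positions differ. The payload and the simulated flip below are made up, not taken from the incident.

```python
def bit_diffs(sent: bytes, received: bytes) -> None:
    """Print every byte position where the buffers differ and which bits flipped."""
    for pos, (a, b) in enumerate(zip(sent, received)):
        xor = a ^ b
        if xor:
            flipped = [bit for bit in range(8) if xor & (1 << bit)]
            print(f"byte {pos}: {chr(a)!r} -> {chr(b)!r}, flipped bit(s): {flipped}")

sent = b"username: Adam"          # hypothetical payload
received = bytearray(sent)
received[11] ^= 0b100             # simulate a single flipped bit in one byte
bit_diffs(sent, bytes(received))
```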
godndiogoat@reddit
Bit flips from cosmic rays are sneaky and can trash packets in ways that look like software bugs. I hit the same wall with a fleet of IoT sensors; during a geomagnetic spike, every few minutes a single bit in the TCP payload flipped and the JSON parser exploded. The fix was layers of cheap protection: enable CRC counters on the switch, turn on ECC on the NIC buffers, add an HMAC on every message, and alert when the verifier fails more than three times in a row. We funneled traffic through Cloudflare Workers to stamp the HMAC, dumped failed packets to AWS Kinesis for replay, and APIWrapper.ai wrapped the retry logic so the devices never saw a hard error. If the client can't swap hardware quickly, at least feed the packets through a sanity-check proxy and keep solar weather dashboards on your NOC screens.
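A minimal sketch of the per-message HMAC plus the "more than three failures in a row" alert described above, assuming a pre-shared key and a simple tag-then-payload framing. This is an illustration, not the commenter's actual deployment (which sat behind Cloudflare Workers).

```python
import hashlib
import hmac

SECRET = b"shared-key-distributed-out-of-band"   # assumption: both ends already hold this key
consecutive_failures = 0

def sign(payload: bytes) -> bytes:
    """Prepend a SHA-256 HMAC tag to the payload before it leaves the device."""
    return hmac.new(SECRET, payload, hashlib.sha256).digest() + payload

def verify(message: bytes) -> bytes | None:
    """Return the payload if the tag checks out; count failures and alert otherwise."""
    global consecutive_failures
    tag, payload = message[:32], message[32:]
    if hmac.compare_digest(tag, hmac.new(SECRET, payload, hashlib.sha256).digest()):
        consecutive_failures = 0
        return payload
    consecutive_failures += 1
    if consecutive_failures > 3:
        print("ALERT: repeated HMAC failures - suspect corruption on the path")
    return None

# A flipped bit anywhere in the message makes verification fail:
msg = bytearray(sign(b'{"username": "Adam"}'))   # hypothetical message
msg[40] ^= 0b100
assert verify(bytes(msg)) is None
```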
phire@reddit
It probably wasn't cosmic rays: those result in a single flipped bit and aren't repeatable (though, if the cosmic ray flips a bit in read-only data/code, it can continue causing issues until a reboot).
Besides, cosmic rays come from outside the solar system. The number of cosmic rays actually goes down during periods of high solar activity (because they are more likely to be intercepted by solar wind).
It was most likely a bad memory chip or PCB trace. Such a fault could theoretically be triggered by solar activity (temporarily or permanently), but I'd lean towards coincidence.
kuribas@reddit
Could it be heat, causing the flaky trace to expand and short out?
SkoomaDentist@reddit
Cosmic rays are also very well filtered by the atmosphere at sea level. It's when you go to low earth orbit and above that they become a real issue.
godndiogoat@reddit
Repeated bit-flips that line up on the same path almost always point to a flaky component, not a cosmic zap. Bounce the packets through a switch with port-level CRC stats, flip ECC on every buffer you control, and log the syndrome details; if you see the same address and bit position lighting up, you've got bad RAM or a cursed trace. If the error bounces around randomly, it's more likely SEU/RFI. Cheap next step: run memtest under load, swap SFPs, then move the service to a different rack and watch the counters; it takes an afternoon and removes the guesswork. I've tried Grafana Loki for the syndrome logs and Azure Key Vault for HMAC keys, but SignWell is what we ended up using when we needed quick template signatures for config pushes during the swap. Bottom line: instrument first, collect patterns, then yank the suspect board before blaming the sun.
Somepotato@reddit
The TCP checksum can detect any single bit flip.
godndiogoat@reddit
The checksum helps with on-wire errors, but we kept seeing flipped bits sneaking past due to NIC offload: the checksum is calculated, the flip happens in buffer memory, and the segment ships with a clean checksum. A cheap HMAC at the app level flagged those. Saw the same with TLS record corruption. I tried Datadog and HashiCorp Vault workflows; SignWell sits there now for signing audit logs.
palparepa@reddit
I'll never know if it was a cosmic array, a random disk hiccup or what, but on one occasion a script started randomly failing. It was a single misspelling: a 'print' had become 'psint'. The weird thing was that the script had last been updated years ago, and that print statement was executed every time.
godndiogoat@reddit
Silent corruption in storage or memory can surface as ghost typos years later. First thing I'd check is the repo history: does git blame show that 'psint' ever existed? If not, run a quick fsck and scrub the disk; single-bit rot in a non-ECC laptop SSD can mangle one byte without tripping SMART. Also try memtest overnight; bad DIMMs flip bits under heat. I've used ZFS scrubs and Tripwire to catch this stuff, and SignWell comes in when we need to sign off incident reports. Silent corruption hides everywhere.
wilhelm_david@reddit
That's what happens when the solar flare activity is high, the regular cosmic rays turn into a cosmic array.
A Redundant Array of Independent Diddlers, coming to diddle your bits, causing bit flip.
MaraschinoPanda@reddit
"r" and "s" are exactly one bit flip different, so it could well have been a cosmic ray.
AnnoyedVelociraptor@reddit
There was a bug in Google Chrome, I believe, which was caused by a bit flip. Or something like that. I've searched for it for years, but I can't find it again.
godndiogoat@reddit
Probably thinking of Chromium issue 587740: a cosmic-ray bit flip changed 0x17→0x07 in the TLS record header, killing HTTPS and crashing tabs on Haswell laptops; Google's RCA is titled "How a proton broke Chrome." Search bugs.chromium.org for "cosmic ray bit flip TLS".
lolic_addict@reddit
Yeah, bit flips are weird. I remember having a bug where I couldn't compile the project in one go; it always failed after building a few thousand objects, with different errors each time.
Since we could reuse partial build artifacts, we just repeated the process until it got through the whole build.
When we checked, we found that the compiler errors were "typos". I.e. the compiler found "lamit" when it expected "limit". ???
Turns out there was a malfunctioning RAM region that sometimes flipped a bit when data landed in it, so replacing the RAM fixed the issue lol
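A crude user-space version of the "write a pattern, wait, read it back" idea, for illustration only; a real memory tester (memtest86+ and friends) is far more thorough and is what actually pins down a bad region. The buffer size and wait time below are arbitrary.

```python
import time

SIZE_MB = 512                                    # arbitrary; big enough to wander around RAM
pattern = bytes([0xAA]) * (1024 * 1024)          # 10101010... in every byte
buf = bytearray(pattern * SIZE_MB)               # fill a large buffer with a known pattern

time.sleep(60)                                   # let the machine work (e.g. run the build) meanwhile

for offset in range(0, len(buf), len(pattern)):  # re-check the pattern chunk by chunk
    chunk = buf[offset:offset + len(pattern)]
    if chunk != pattern:
        bad = next(i for i, (a, b) in enumerate(zip(chunk, pattern)) if a != b)
        print(f"bit flip near offset {offset + bad:#x}: {chunk[bad]:08b} != {pattern[bad]:08b}")
```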
s0ulbrother@reddit
Did you put a ticket in for fixing the sun?
SkoomaDentist@reddit
Mine turned out to be a CPU bug after nearly two weeks of investigation. The microcontroller deep sleep function would work semi-randomly depending on the specific build (some builds always worked, the rest never). Further investigation revealed that adding code that was never called would flip between the working / non-working state. The root cause turned out to be that if you performed the deep sleep entry operations as the reference manual says, but in a slightly different order from Arm's example code, deep sleep would only work if the sleep entry instruction was aligned in a specific way on an 8-byte memory boundary (presumably related to the MCU flash prefetch operation being initiated and waking up the CPU core as soon as it finished).
That’s a fun story to tell in job interviews.
admalledd@reddit
Ahhh PDFs!
Here the broken PDF software (Edge, but all are "broken"; there is no compliant PDF viewer, did you know?) is trying to use some Unicode and/or ASCII control chars to indicate "hey, this is a line break/continuation point if you need it". Even more likely (if Unicode) is that it is a failing of an edge case of B2: Break Opportunity Before and After, either when building the clipboard buffer (on the PDF viewer side) or when reading the clipboard back out (on your application side). I deeply wonder what the raw clipboard contents are like for this scenario!
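One cheap way to answer that: paste the suspect text into a tiny script and dump every code point that actually arrived. A minimal Python sketch (reads from stdin; nothing Edge- or PDF-specific is assumed):

```python
import sys
import unicodedata

# Paste the suspect text into stdin and see exactly which code points arrived,
# including invisible control characters such as U+0002.
for ch in sys.stdin.read():
    name = unicodedata.name(ch, "<control/unnamed>")
    print(f"U+{ord(ch):04X}  {name}  {ch!r}")
```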
ds101@reddit
Mine was PDF (and Intel FPU) related. pdftotext (compiled from the same source) was returning different output on macOS and Linux. This caused our tests to fail. Both platforms were using gcc at the time, but on Linux pdftotext thought two bits of text were on different lines.
And the catch was that when I tried to track down where it was happening and added a print statement, it magically went away. If I merely looked at the values in a debugger, the problem magically went away.
Eventually I learned that the x86 FPU would hold intermediate values at a higher precision than 64 bits, and a == comparison against another value would fail. As soon as the value is stored, it is truncated to 64 bits and everything works (the print command and poking at it with a debugger were enough to make this happen). The macOS UI is full of floating point numbers, so they default to -ffloat-store to make things work out better. One shouldn't compare floats with ==, but pdftotext was full of those comparisons, so I just added -ffloat-store to the Linux compile and called it a day.
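The extended-precision behaviour is specific to the x87 FPU and how C compilers spill values, but the underlying rule (don't compare floats with ==) is easy to illustrate in any language. A small Python sketch:

```python
import math

a = 0.1 + 0.2
b = 0.3
print(a == b)                             # False: representation error alone breaks ==
print(math.isclose(a, b, rel_tol=1e-9))   # True: compare with a tolerance instead
```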
ds101@reddit
And another, more recent one (2022): The Java jai image library was showing a quadratic slowdown reading PNGs with comments. I decompiled the class file, saw that it was repeatedly adding a character to a string. I fixed it, giving a colleague the decompiled source (fernflower) and a fixed version. (I was on another project and just troubleshooting.)
My colleague then noticed that the original decompiled source did not exhibit the problem when recompiled. This led me down a rabbit hole where I learned that the JIT compiler had code to recognize and fix this pattern, but it was tuned to the output of a modern compiler. The older class file was just different enough that it didn't recognize the pattern.
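For illustration, the same quadratic pattern and its fix, sketched in Python rather than the decompiled Java (the function names are made up):

```python
# Quadratic: each += may copy everything built so far, which is what repeatedly
# appending single characters to a Java String does. (CPython often patches this
# up in place at runtime, much like the JIT pattern-matching described above,
# but it's not something to rely on.)
def read_comment_slow(chars):
    text = ""
    for c in chars:
        text += c
    return text

# Linear: collect the pieces and join once, the moral equivalent of StringBuilder.
def read_comment_fast(chars):
    return "".join(chars)
```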
Nicksaurus@reddit
Surely the root problem here is that the system was putting text into an XML document without sanitising it? This specific problem is fixed, but it will presumably happen again if someone pastes any other control character into the text box
mr_birkenblatt@reddit
Little Bobby tables
zlex@reddit
Yes, clearly the issue is that they are doing no backend validation / sanitization on the data before submitting it.
Given that their previous solution was to consistently have someone run SQL commands against a production database...I'm not surprised they took a shortcut here.
roelschroeven@reddit
This is close to the issue indeed, though I don't like the validation / sanitation focus. That suggests that some data is valid and other data is not, and can be altered to become valid.
IMO that is not the case in situations like this.
Any data can be put in XML safely and come back out exactly the same, and is therefore perfectly valid. The keyword here, IMO, is encoding. You need to encode your data correctly (and decode on the other side). Encode control characters as &#0007; etc., just like you encode < as &lt; (I hope they did at least that). I'm not an XML expert, but I presume there are tools to help you build XML that take care of that for you.
BigHandLittleSlap@reddit
It's hilarious that "I'm not an XML expert" knows far more than supposedly professional programmers.
PS: Yes, of course there are XML parsers and encoders available for every mainstream language out there!
mpyne@reddit
The error message from the original post was from an XML parser handling a properly-encoded character though.
The issue is that the character is not valid to be decoded into an XML document (at XML 1.0, which everything uses) no matter how you encode it.
BigHandLittleSlap@reddit
Works fine for me:
mpyne@reddit
They were encoding the XML. Look at their error message again:
Illegal Character entity: expansion character (code 0x2) not a valid XML character
This is an error message indicating that it was expanding a "character entity" into an appropriate character.
As indicated in this StackOverflow post where someone got this exact error message, the error seems to be generated by a Java-based XML handler running into an encoded character.
kniy@reddit
Any data can be put in XML safely and come back out exactly the same, and is therefore perfectly valid.
Unfortunately that's not true (unless you use some additional non-XML encoding, e.g. base64). XML does not allow ASCII NUL, no matter whether you escape it or not. Additionally, XML 1.0 also disallowed a bunch of other characters, including &#0002;. Those are only valid since XML 1.1, which means if you use an XML library to encode those, there's no guarantee a different XML library will be able to decode them again.
https://stackoverflow.com/questions/39698855/is-it-possible-to-read-ascii-control-characters-in-xml
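For illustration, a small Python sketch that checks input against the XML 1.0 Char production before it ever goes into a document; entity-escaping is a separate step and, as noted above, cannot rescue these characters:

```python
import re

# Characters allowed by the XML 1.0 "Char" production. Anything outside these
# ranges (e.g. 0x00 or 0x02) is illegal in a 1.0 document, even written as &#2;.
XML10_INVALID = re.compile(
    "[^\u0009\u000A\u000D\u0020-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]"
)

def assert_xml10_safe(text: str) -> str:
    bad = XML10_INVALID.search(text)
    if bad:
        raise ValueError(f"code point U+{ord(bad.group()):04X} is not allowed in XML 1.0")
    return text

assert_xml10_safe("ordinary text")                 # fine
try:
    assert_xml10_safe("pasted\x02from a PDF")      # hypothetical pasted text with U+0002
except ValueError as e:
    print(e)
```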
Nicksaurus@reddit
You're right, that's what I was thinking of, I just used the wrong word
billccn@reddit
Every popular programming language has an XML writing library that will do the escaping. The OP's company probably decided not to use one.
For example, the various output formats could be generated with the same, generic templating engine that is not XML-aware and then validated with an XML schema validator. This is a valid architecture but obviously adds support load when the input data is not sanitised.
SaltineAmerican_1970@reddit
Did you file a bug report with Edge's PDF viewer?
nearlyepic@reddit
Of course it's a microsoft product using insane control characters. The number of times I've had to write code to strip UTF-8 byte-order-marks from crap edited in notepad...
s0ulbrother@reddit
Ehem…. Fuck pdfs.
So the second project I ever worked on was a solo thing modifying existing PDFs, adding formatting, modifying them with additional fields. I learned so much shit about them by the end that I went nuts. Product worked well though and really helped me grow as a dev.
billie_parker@reddit
Spent 3 hours trying to fix a 404 error. Turns out my coworker spilled his sippy juice all over the servers.
lisnter@reddit
I think I've recounted this story before but it's a fun one.
Years ago I was a programmer on a system using Windows NT 3.5 (we started using NT 3.1 but upgraded during the development phase). Anyway, we had a system that wrote log entries to a custom location to keep track of financial transactions. On this particular system there were two ways to view the transactions - via a GUI from the front screen and via a VT100 type character based UI. One of the fields logged was the timestamp of when the transaction occurred, which was shown to the user as they paged through the entries.
For some reason the timestamp differed between the graphical UI and the text UI; same source log file, just different ways to view the data. And it only occurred during a short window which, I discovered, was twice per year. It was pretty easy to trace the code to see where the data was used and converted, and I discovered that during this period of the year, the Microsoft C time library was subtracting the daylight-saving time hour twice. The Win32 library, which was newer of course, did not have this problem.
We were Microsoft partners and so had the library source, which was how I found the defect, and thought about fixing the library code directly but quickly decided that was a bad idea. The official solution from Microsoft was to convert the timestamp using a different set of library calls, meant for filesystem timestamps, which was a 64-bit value and used something like pico-seconds since 1,000,000 BC (I forget exactly) and then convert that value for display.
I forget why we couldn't or didn't use the GUI-based time library for both but there must have been a reason. MS didn't give us a timeline of when the C-library was to be fixed and so we deployed with this crazy timestamp which worked great.
The root of the problem was a change in the date that daylight saving time started/ended in the US vs. Europe (H. W. Bush?). The defect showed up in the European deployments only, which added to the confusion, since the MS library didn't handle the European TZ values properly during this period.
billccn@reddit
BTW, the Windows File Time you described above is still the standard format for high-precision time under Windows.
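For reference, that 64-bit format is FILETIME: a count of 100-nanosecond ticks since January 1, 1601 UTC (so not quite picoseconds since 1,000,000 BC). A minimal Python sketch of the conversion; the tick count used below is arbitrary:

```python
from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(ft: int) -> datetime:
    """FILETIME is a count of 100-nanosecond ticks since 1601-01-01 UTC."""
    return FILETIME_EPOCH + timedelta(microseconds=ft // 10)

def datetime_to_filetime(dt: datetime) -> int:
    return int((dt - FILETIME_EPOCH) / timedelta(microseconds=1)) * 10

print(filetime_to_datetime(133_000_000_000_000_000))   # arbitrary tick count, lands in 2022
```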
igorpk@reddit
Thank you for a well-written bug hunt, I love reading these!
Having been in the industry for a while, I guessed it'd be a copy+paste error on the user's side.
It's a good lesson to learn - validation edge cases are tough to deal with, and often difficult to catch in testing.
I mean, have you looked at what happens if the doctor pastes from Word/Excel etc? What about [Insert random software here]? It's very tough to test.
Again, thanks for a great read, and congrats on finding the issue!
No-Rest5568@reddit
Code is like humor. When you have to explain it, it’s not that good.
diMario@reddit
Humour is like code. When you compile it, the results are binary.