The Geek's Millennium

The Year 2000 bug (Y2K) came down to sloppy programming: storing the year as just two digits, obtained by simply subtracting 1900 from it...

           1999-1900=99 but 2000-1900=100 (three digits)
There is a simple way to do it without problems - store the full four-digit year - but these programmers didn't. LogiTEL never wrote software like that, even in 1981 when we began.

The wrap-around was widely believed to be the harbinger of doom; so-called experts on TV said daft stuff like "You won't catch me in an elevator on New Year's Eve 1999", simply whipping up public fear and exposing their ignorance. (NOTE: We weren't in a lift on New Year's Eve 1999 either - having watched the celebrations at Sydney Harbour Bridge on TV, we were at a top party getting jolly with the rest of humanity.)

The widely forecast doom - planes dropping from the skies and so on - did not happen. These "experts" will say that's down to the mammoth effort to correct the bug beforehand. We say rubbish! It was all over-hyped nonsense. Why does an elevator need to know what date it is, let alone calculate with dates? OK, there are smart elevator systems that balance their duties to the tidal flows of the working day (we have done that too!), but why would one crash to the bottom of the shaft if it got the date wrong? A little knowledge is a dangerous thing. Still, they got well paid on both sides of the argument.

Even given the massive effort to find and fix Y2K issues, odd ones still sneaked through. Four years after, we stumbled across an OSS tool reporting the year as "104" - and look at the dates in this log we found a few years ago: this is Enterprise-class software (read: $thousands) on a client's live servers... 13 years on and still not fixed, and this was the latest version, installed a decade after 2000. See what we mean about sloppy programming?

Y2K Fail

There is, however, another "millennium" this century... A lot of the Linux/Unix and mainframe computers on the planet (and there are a lot of them) still use a 31-bit counter to count seconds from midnight, 1st January 1970 (it's actually 32 bits, but one of them holds the sign of the number). This is especially important for embedded Linux systems (cars, control systems, media players, fridges, phones etc.) which are *everywhere*. This counter is called "UnixTime" or sometimes "EpochTime". 31 bits can count to a very large number - enough to count every second for 68 years, in fact. Holding dates like this makes it easy to calculate with them (rather than wrestling with the plethora of "Human Friendly" formats), and lots of systems do just that.

The so-called "Geek's Millennium" occurs when this counter runs out - this is very likely to cause mayhem, though it might just be possible to prevent: new operating systems, patches to existing ones - we still have 19 or so years to get it right, but not many are doing anything about it... In human terms, the date and time of the roll-over is

19th January 2038, 03:14:07 UTC

At this point, date counters will roll over from 2147483647 (the biggest number you can count to in 31 bits) to 0. You can understand the problem more easily by looking at the binary:

                <---------- 31 bits ---------->
               01111111111111111111111111111111 = 2147483647 = every second in the last 68 years
                                             +1
               10000000000000000000000000000000 = 2147483648
               ^
               32nd bit (should be ignored for unixtime)

The developers who first wrote the code all those decades ago used what is called a "signed long", which is/was 32 bits wide. 31 bits store the actual number and the 32nd bit indicates whether the number is positive or negative. It's been working really well, so it tends to get forgotten (you only notice computers when they break). Remember, epoch-time uses the right-most 31 bits - and they will all go to 0 while the sign bit flips to 1. If the sign bit is ignored, the clocks effectively reset to 1970! Worse still, if the sign is honoured, the counter is interpreted as a huge *negative* number of seconds - a date back in December 1901... How would the banking systems calculating your mortgage interest cope with a negative date? It is highly probable that it didn't even occur to the developers who wrote them - why would it?

What really scares us: the filesystem under a huge number of Linux and Unix boxes, which goes by the catchy name of "ext3", will definitely break. The core operating system on any affected machine (servers, desktops, routers - anything with embedded Linux) that uses ext3 will be unable to process files on its disks after that date (and that includes ramdisks, i.e. not actual disk mechanisms but simulated in memory - which is most embedded devices). EXT3 ABSOLUTELY WILL BREAK! NO DOUBT ABOUT IT! But there is light at the end of this tunnel... ext4 is an evolution of ext3 which includes (among other things) a fix for the Geek's Millennium. Let's hope everyone adopts it in time - here we are, 11 years later, and ext4 is still almost non-existent in embedded systems!

If all 32 bits were used, the Geek's Millennium would not occur until the 22nd century (2106, to be exact), but that only postpones the problem - it's not fixed, just pushed somewhere else. Actually, it's not possible to permanently fix it anyway, but it can be made insignificant: modern operating systems use 64-bit unsigned counters (or bigger) - enough to count every second from 1970 to roughly the year 585,000,000,000. Again, this is simply pushing the issue further out, but it's unlikely to be a problem this time - the Sun will have incinerated planet Earth hundreds of billions of years earlier, so we'll most likely be past caring.

So on that winter's morning, 19 years from now, it might be that all the old systems have been replaced or patched and nothing happens... or it might be that things really do go wrong - of course we'll help clean up the mess... As you can see from the example above, things will always slip through the net. Our fee is negotiable :o)

FootNote:
<SMUG MODE>
We have been shouting about this since well before 2000 (way before people started thinking about the "Y2K bug"), and this article on The Register backs up our worries - yet it is dated February 2015!
</SMUG MODE>