Year 2000 problem
The year 2000 problem (also known as the Y2K problem and the millennium bug) was a flaw in computer program design that caused some date-related processing to operate incorrectly for dates and times on and after January 1, 2000. It turned into a major fear that critical industries (electricity, finance, etc.) and government functions would stop working at 12:00 AM, January 1, 2000, and at other critical dates that were billed as "event horizons." This fear was fueled by huge amounts of press coverage and speculation, as well as copious official corporate and government reports. All over the world, companies and organisations checked and upgraded their computer systems. The preparation for Y2K had a significant effect on the computer industry.
In the end, significant disasters such as nuclear reactor meltdowns or plane crashes were avoided, but the number of non-critical Y2K errors encountered on January 1, 2000 was substantial. Because of the absence of disasters, combined with the "end of the world" expectations, the public may have wrongly regarded Y2K as a non-event.
Y2K (or Y2k) was the common slang for the year 2000 problem. (The abbreviation combines the letter Y for "year" and K for the Greek prefix kilo meaning 1000; hence, 2K means 2000.) It was also known as the millennium bug (though there is a popular debate on whether or not the year 2000 was actually the start of the new millennium).
It was feared that computer programs could stop working or produce erroneous results because they stored years with only two digits: the year 2000 would be represented as 00 and interpreted by software as the year 1900, causing date comparisons to produce incorrect results. It was also feared that embedded systems using similar date logic might fail and cause utilities and other crucial infrastructure to break down.
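The failure mode, and one common remediation ("windowing"), can be sketched in a few lines. This is an illustrative example, not code from any specific affected system; the function names are hypothetical.

```python
# Sketch of the classic two-digit-year bug: years stored as two
# characters compare incorrectly once the century rolls over.

def years_until_expiry(current_yy: str, expiry_yy: str) -> int:
    """Flawed 1960s-style logic: treats the two-digit 'YY' as the whole year."""
    return int(expiry_yy) - int(current_yy)

# In 1999, a card expiring in 2000 ("00") appears 99 years overdue:
print(years_until_expiry("99", "00"))   # -99 instead of 1

# A "windowing" fix, one common remediation: interpret YY below a
# pivot value as 20YY, and YY at or above it as 19YY.
def expand_year(yy: str, pivot: int = 50) -> int:
    y = int(yy)
    return 2000 + y if y < pivot else 1900 + y

print(expand_year("99"))  # 1999
print(expand_year("00"))  # 2000
```

Windowing was cheaper than widening every stored date to four digits, but it only defers the problem: a pivot of 50 breaks again for years from 2050 onward.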
In the years prior to 2000, some corporations and governments, when they did testing to determine the extent of the potential impact, reported that some of their critical systems really would need significant repairs or risk serious breakdowns. Throughout 1997 and 1998, there were news reports about major corporations and industries that had made uncertain estimates as to their preparedness. The vagueness of these reports, the apparent uncertainty regarding what sort of breakdowns were possible, and the fact that literally hundreds of billions of dollars were reportedly spent in remediation efforts were a major part of the reason for the public fear. Special committees were set up by governments to monitor remedial work and contingency planning, particularly by crucial infrastructure such as telecommunications, utilities, and the like, to ensure that the most critical services had fixed their own problems and were prepared for problems with others. By early- to mid-1999, when the same corporations, industry organizations, and governments were claiming to be largely prepared, the public relations damage had been done. It was only the safe passing of the main "event horizon" itself, January 1, 2000, that fully quelled public fears.
In North America the actions taken to remedy the possible problems did have an unexpected benefit during the 2003 North America blackout, on August 14, 2003. The previous activities had included the installation of new electrical generation equipment and systems which allowed for a relatively rapid restoration of power in some areas.
The programming problem
The underlying programming problem was quite real. In the 1960s, computer memory and storage were scarce and expensive, and most data processing was done on punch cards, which represented text data in 80-column records. Programming languages of the time, such as COBOL and RPG, processed numbers in their ASCII or EBCDIC representations. They occasionally used an extra bit called a "zone punch" to save one character for a minus sign on a negative number, or compressed two digits into one byte in a form called binary-coded decimal, but otherwise processed numbers as straight text. Over time, the punch cards were converted to magnetic tape, then to disk files, and later to simple databases like ISAM, but the structure of the programs usually changed very little. Popular software like dBase continued the practice of storing dates as text well into the 1980s and 1990s.
Saving two characters for every date field was a significant savings in the 1960s. Since programs at that time were mostly short-lived affairs written to solve a specific problem or control a specific hardware setup, most programmers did not expect their programs to remain in use for many decades. The realisation that databases were a new type of program with different characteristics had not yet come, and hence most did not consider two-digit years a significant problem. There were exceptions, of course; the first person known to publicly address the problem was Bob Bemer, who had noticed it in 1958 as a result of work on genealogical software. He spent the next twenty years trying to make programmers, IBM, the US government, and the ISO care about the problem, with little result. This included the recommendation that the COBOL PICTURE clause be used to specify four-digit years for dates, something programmers could have done at any time from the initial release of the first COBOL compiler in 1961 onwards. However, lack of foresight, the desire to save storage space, and overall complacency prevented this advice from being followed. Despite magazine articles on the subject from 1970 onwards, the majority of programmers only began recognizing Y2K as a looming problem in the mid-1990s, and even then, inertia and complacency caused it to be mostly ignored until the last few years of the decade.
Storage of a combined date and time within a fixed binary field is often considered a solution, but the possibility for software to misinterpret dates remains, because such date and time representations must be relative to a defined origin. Roll-over of such systems is still a problem but can happen at varying dates and can fail in various ways. For example:
- The typical Unix timestamp stores a date and time as a signed 32-bit integer representing, roughly speaking, the number of seconds since January 1, 1970, and will roll over on January 19, 2038 (the "Year 2038 problem").
- The popular spreadsheet Microsoft Excel stores a date as a number of days since an origin (often erroneously called a Julian date). Such a day count stored in a 16-bit integer will overflow after 65,536 days (approximately 179 years). Unfortunately, different releases of the program use different origins: some count from 1900, others from 1904.
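The Unix rollover above can be demonstrated directly. This is a minimal sketch assuming the classic layout (seconds since 1970-01-01 UTC held in a signed 32-bit integer), not the behavior of any particular operating system:

```python
# Sketch of the Year 2038 rollover for a signed 32-bit Unix timestamp.
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1  # 2147483647, the largest signed 32-bit value

# The last representable moment before overflow:
last_moment = EPOCH + timedelta(seconds=INT32_MAX)
print(last_moment)  # 2038-01-19 03:14:07+00:00

# One second later, the counter wraps to the most negative int32 value,
# which a naive conversion reads as a date in 1901:
wrapped = (INT32_MAX + 1) - 2**32  # -2147483648
print(EPOCH + timedelta(seconds=wrapped))  # 1901-12-13 20:45:52+00:00
```

As with Y2K, the fix is to widen the field: 64-bit timestamps, now common, push the rollover billions of years into the future.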
Even before January 1, 2000 arrived, there were also some worries about September 9, 1999 (albeit lesser than those generated by Y2K). This date could be written in the numeric format 9/9/99, which resembles the value 9999 used as an end-of-file or sentinel code in some old programming languages. It was feared that some programs might unexpectedly terminate on that date. This fear was largely an urban legend, because computers do not store dates in that manner: the date would be stored as 090999 or 9/9/99, preventing confusion at the month-day boundary.
Another related problem was that the year 2000 was a leap year, even though years ending in "00" are usually not. (A year is a leap year if it is divisible by 4, unless it is divisible by 100 but not by 400.) Fortunately, as with Y2K, most affected programs were fixed in time.
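The full rule, and the truncated version that misclassifies 2000, can be written in a couple of lines. A minimal illustrative sketch (the function names are hypothetical):

```python
# The complete Gregorian leap-year rule, as described above:
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A buggy implementation that applies the "century" exception but
# forgets the 400-year exception to it, misclassifying 2000:
def is_leap_year_buggy(year: int) -> bool:
    return year % 4 == 0 and year % 100 != 0

print(is_leap_year(2000), is_leap_year_buggy(2000))  # True False
print(is_leap_year(1900))  # False: divisible by 100 but not by 400
```

Programs with the buggy rule treated February 29, 2000 as an invalid date, which is why some systems that survived January 1 failed eight weeks later.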
Public reaction to the problem
Some industries started experiencing related problems early in the 1990s as software began to process future dates past 1999. For example, in 1993, some people with financial loans that were due in 2000 received (incorrect) notices that they were 93 years past due. As the decade progressed, more and more companies experienced problems and lost money due to erroneous date data. As another example, meat-processing companies incorrectly destroyed large amounts of good meat because the computerized inventory system identified the meat as expired. There were, in fact, many such minor "horror stories" like these, which received much play in the press as 2000 approached.
As the decade progressed, identifying and correcting or replacing affected computer systems or computerized devices became the major focus of information technology departments in most large companies and organizations. Millions of lines of programming code were reviewed and fixed during this period. Many corporations replaced major software systems with completely new ones that did not have the date processing problems. It was frequently reported that corporations had already experienced at least minor Y2K problems, and some major problems as well, due to date look-ahead functions in code and embedded systems, but it was and still is not clear what the full cost and seriousness of these problems were.
Y2K was a big media story in 1999. In some countries public apprehension was tremendous. Some individuals stockpiled canned or dried food in anticipation of food shortages. A few commentators predicted a full-scale apocalypse, among them computer consultant Edward Yourdon, religious commentator Gary North, and economist Edward Yardeni.
What actually happened
Before the year 2000
Even before the year 2000 arrived, a number of minor problems occurred.
One example involved a supermarket chain in the midwestern United States. When a cash register encountered a credit card with an expiration date after the year 2000, it triggered a glitch that shut down all the cash registers throughout the entire chain. Experts used the incident to illustrate the need for businesses to study whether a Y2K bug could cripple them as well.
In 1996, a Marks & Spencer can of corned beef was rejected at the cash register: the label read "12-1-00", the register misinterpreted this as December 1, 1900, and judged the product to be ninety-six years past its expiration date.
After the beginning of the year
When January 1, 2000, finally came, there were few major problems reported, contrary to many expectations. They mostly occurred in countries with less experience with computers, and/or less money to address the problem. A few made the news, such as a nuclear power plant in Japan that shut down for a short while due to a problem in an auxiliary system. But in most cases, the problems encountered were minor and were fixed by programmers without difficulty.
Ironically, many people were upset that there appeared to be so much hype over nothing, precisely because the vast majority of problems had been fixed correctly. Some critics have suggested that much of the preventive effort was unnecessary. Their argument is that it would have been cheaper to skip the exhaustive examination of non-critical systems and simply fix the few that failed after the event. Their opponents argue that, had it not been for such efforts, the problems would have been far worse and more widespread.
For those not involved in the preventive effort, the conclusion that all the effort had been a waste was easy to draw, as they had no knowledge of the countless systems that had been corrected and had only witnessed the problems that had not been fixed in time. Few of them realized that fixing the problems afterwards would have been much harder, as active millennium problems would have complicated matters. In any case, for many systems the checking procedure involved replacement with new, improved functionality, so in many cases the expenditure proved useful regardless. Preparing for Y2K resulted in many more computer programming and testing jobs than would have otherwise existed. Programs were reviewed and tested that otherwise would have been considered "done."
Items of interest
- The United States established the Year 2000 Information and Readiness Disclosure Act, which limited the liability of businesses that had properly disclosed their Y2K readiness.
- Insurance companies sold insurance policies covering failure of businesses due to Y2K problems.
- Attorneys organized and mobilized for Y2K class action lawsuits (which were not pursued).
- No major failures of infrastructure were reported in the United States or even in many places where they had been widely expected, such as Russia.
- The Y2K problem mainly affected countries that follow the western calendar (Saudi Arabia, for example, does not).
- One theory has it that the Federal Reserve increased the money supply in 1999 to compensate for anticipated hoarding by a frightened populace. The populace, however, was not frightened, and the flood of new money fueled a stock market high tide that went out in spring of 2000.
- Many organisations finally realised the critical importance of their IT infrastructure to their business, and put in place plans to keep it running and restore capability in case of disaster. Such planning may well have helped the relatively speedy return to functioning of New York's critical financial IT systems after the September 11, 2001 attacks.
- Speculatively, the Y2K spending on information infrastructure caused a slowdown in information technology spending in 2000 and 2001 and may eventually lead to higher productivity in future years.
- The Long Now Foundation, which (in their words) "seeks to promote 'slower/better' thinking and to foster creativity in the framework of the next 10,000 years", has a policy of anticipating the Year 10,000 problem by writing all years with five digits. For example, they list "01996" as their year of founding.
- One of the founders of the Long Now Foundation, Danny Hillis, was one of the few commentators who publicly predicted that Y2K bugs would cause no significant problems (see "Why Do We Buy the Myth of Y2K?", Newsweek, May 31, 1999).
- Univision news reported that on the evening of December 31, 1999, a couple in Peru had committed suicide out of fear of what Y2K would bring.
- A few (but not many) computer systems did actually fail on January 1, although some of those failed every year. An almost amusing postscript to the Y2K problem was that a number of computers not programmed to handle the leap year failed on the following February 29.
- "We may not have got everything right, but at least we knew the century was going to end."
- "Computing consultants laughing all the way to the bank."
- Popular catchphrases used by the Australian media on January 1, 2000.
- The Halloween episode of The Simpsons for the 1999–2000 season, Treehouse of Horror X, contained a sketch fittingly entitled "Life's A Glitch, Then You Die." Homer's failure to check Y2K preparedness at the Springfield Nuclear Power Plant results in a global technology-related catastrophe.
- The Family Guy episode "Da Boom" (aired December 26, 1999) featured the Griffin family surviving the end of civilization, caused by the Y2K bug.
- The Newsradio episode "Meet the Max Louis" had a subplot in which the station's electrician, Joe Garelli, dealt with the effects of his programming the computer system to Jesus's "actual" birth date. The episode was filmed in 1998, so the characters were experiencing the year 2000 problem two years early.
- DeJesus, Edmund X. (1998). "Year 2000 Survival Guide." BYTE, July 1998, vol. 23, no. 7 (the last issue of BYTE)
- A Day in the Hype of America – Y2K documentary by Global Griot Productions, filmed entirely on December 31, 1999