2014 is a big year for tech anniversaries. There was a big celebration for the 30th anniversary of the Mac, which changed personal computing forever, and for the 10th anniversary of Facebook, which changed the way we connect with each other on the Internet. But 20 years before the launch of the Mac, IBM launched System/360, a mainframe platform that heralded a new era of computing, served as the backbone of big business computing, and has continued to evolve into the powerful machines we have today.
Below is a brief look back at 50 years of the evolution of the mainframe, presenting a few highlights from each decade.
The 60s
Leading up to the 1960s, most large computing systems were custom-built for each customer to suit their particular needs, so programs were not compatible from one system to the next. That changed on April 7, 1964, when IBM announced System/360, introducing a new system architecture that was compatible across a range of computing performance levels. This architecture came to dominate large-scale computing for decades. These mainframes were put to good use: it was a System/360 mainframe that helped NASA put the first man on the moon.
The 70s
In June of 1970, IBM released the next generation of its mainframes, System/370. The new system maintained the operating principles of its predecessor, ensuring that older code and applications would continue to operate properly even after hardware upgrades. Prior to this, code and programs had to be re-written and re-compiled for each generation of hardware. New technologies, like virtualization, also allowed for greater utilization and performance. Virtualization let more parts of the business share the mainframe's resources to get things done, letting one mainframe accomplish the work of several.
The 80s
The mainframes of the 1980s began supporting graphical terminals and could be operated through other computers via terminal emulators. This was also the decade when the mainframe industry first started competing on a larger scale with PCs, servers, and other distributed computing platforms. Windows and Unix systems presented a viable alternative, and it was thought that these distributed systems would be able to take over many of the mainframe's jobs.
The 90s
Graphical user interfaces became a more common way for regular users to interact with the mainframe. There were many premature (and incorrect) predictions of the mainframe's demise, claiming that “Big Iron” would be obsolete before the end of the decade. However, System/390 arrived in the early 1990s, introducing another wave of innovations. The mainframe survived the decade, its benefits holding up against the client/server computing model, and the heightened need for security and availability brought about by the Internet age set it up for a renaissance in the decades to come.
The 2000s
This was the decade the zSeries architecture and z/OS came into use. Linux also became widely used on mainframes during this time, opening up many new uses for them. Linux brought greater application portability: programs written for other systems could run on the mainframe and on a wider variety of hardware. In the 2000s, the mainframe morphed into a more connected machine, continuing to prove itself adaptable and flexible enough to meet new business needs, such as e-commerce and the added complexities of globalization.
The 2010s
The current crop of mainframes is many thousands of times more powerful than the machines introduced in 1964, but they are built on the same principles of security, reliability, availability, and scalability. More than any previous decade, this is the real information age. In many large organizations, web servers and Java workloads are increasingly moving back to the mainframe. With Moore's law leveling off in processing power, advancements in software are becoming the driving growth factor for the mainframe. Companies have come to better understand the true cost comparisons between mainframe, client/server, and cloud computing systems, and IBM and others are investing in academic initiatives to train the next generation of mainframe developers.
What’s next?
It’s odd that “legacy systems” carries a negative connotation, especially since we tend to admire the positive legacy others leave behind. In computing, legacy systems are simply systems that work. Their reliability and longevity are what give them the “legacy” in their name. And this was by design. From System/360 on, mainframes have been designed to keep working, letting businesses build new applications to grow, rather than re-write old ones to stay compatible. Despite all the developments and improvements, current systems are still backward compatible, making it possible to run programs written in the 1960s, yet they also do so much more than they did in the past. That’s why there is still a bright future for the mainframe, just as there was in 1964.
If you want to learn more about the development of the mainframe, IBM Systems Magazine has a more comprehensive history of the mainframe presented in a five-part video series. Enjoy!