When I first started with computers, they were huge monsters sitting in a closed-off, air-conditioned room which no one other than the chosen ones could enter. You wrote programs on punch cards or paper tape and submitted your program deck through a window to a clerk, sort of like going to a bank. You then waited, sometimes for hours, before a servant of the machine brought out a stack of paper with the results of your program. You always feared that your paper stack would be thin, because that often meant your program had failed and you would have to go back through the card deck to find out where your logic went wrong. You could spend a whole day, sometimes several days, in the dungeon room where they kept the card punch machines. The noise of the machines punching out your cards was deafening. Still, we felt that we were doing something important, that we were powerful, that we could control the output of the machine with our commands.
Time passed and the technology evolved. First there were CRT, or green screen, terminals in which you could type and edit your program. There were no graphics, no windows, no fancy editors. On the other hand, we no longer had to fear dropping a deck of a couple hundred punch cards and having to re-sort them before we could continue working. Life was great. How could it get any better?
When the first microcomputers were introduced, they were more for the electronic hobbyist than for the serious application developer. Using toggle switches to enter a program would never catch on with the average person. But again the technology evolved, and soon we had personal computers with a whopping 4K of memory that could be used to store and run programs. Programs were stored on cassette tapes. You know, those things you could buy albums on that looked like mini reel-to-reel tape decks. This made it possible not only to store your program creations, but also to make copies on multiple cassettes and share or sell your creation to others with the same computer model. For most companies, though, these machines were still toys and not meant for serious work.
Over time, the toys became more powerful, and soon software appeared to help you create and print documents. This was a magnificent improvement over a standard typewriter. If you needed to make a change to a document, you could simply go into the text, edit the document, and reprint it. This sure beat having to retype the entire document. I remember our department secretary resisted the change at first, because no one was going to take away her trusted typewriter. However, our department head had other ideas. Instead of dictating correspondence or writing it out longhand for her to enter, he gave her a floppy disk (yeah, that is what they were called because, unlike a hard disk, they were kind of floppy) with the document. Then after she printed it, he would purposefully make changes, forcing her to learn how to make those changes on the desktop computer. Within a few weeks she saw the light and asked to have her old typewriter removed and the computer placed in the center of her desk.
Many years have passed, and decentralized personal computers slowly replaced most of the mainframe computers and their slightly smaller cousins, the minicomputers. A lot of people resisted the change, but the momentum was too strong. As applications became more powerful and the need to share documents and data became more of a concern, network computing came into existence as a way to link all these separate personal computers together electronically and allow the sharing of their information. This probably marked the first full swing of the pendulum: from large-machine computing to small personal computing and back toward a more collective approach.
Over the subsequent years, networking became more robust, and the use of specialized servers to hold documents and databases, and even to facilitate communication, grew. It was soon not uncommon to see companies with hundreds of networked computers with centralized file stores called file shares and centralized data called databases. But early networking only worked within a company's walls. There was no sharing with the outside world. Yet the need to share data with customers and suppliers forced a new wave of innovation to solve this problem. Modems and remote access to other machines soon grew in use, but it was not enough.
One day a new technology started to be talked about. It was a way for companies to share information without dedicated connections to others. It used something called the Internet, a communication backbone that anyone could, in theory, access to send information to others. Some early companies provided services that let people get instant access to news, stock information, and even communicate with others who had a similar connection to the company. While these services were successful for several years, the Internet was bigger than they were and soon took on a life of its own.
Today access to the Internet seems like an inalienable right. Some younger children have never known a day when there was no Internet, or for that matter hundreds of TV stations, not only on cable but directly available through that Internet. At the same time, the servers companies used to hold documents and data proliferated until they started to take almost as much room, and require almost as much infrastructure in terms of power and cooling, as the original large mainframe computers. In fact, many of these servers are more powerful than the mainframe computers of just a few decades ago.
Now the change is to move everything to the cloud. The cloud casts a magical spell on some people, like the Pied Piper. They think the cloud is limitless and that they will never have to worry about their data; someone else will. There are all different types of clouds with a variety of services to support their customers, but the main thrust is to move data out of the corporation to a 'trusted' box somewhere else so that the company no longer has to support the infrastructure. Furthermore, with the push for hand-held mobile devices, the emphasis is to place the bulk of the computing on these servers as well and merely use the hand-held devices to display the data.
This sounds to me like we are heading back to the days of big centralized computing. But it will only last so long. One of the major concerns that could start the pendulum swinging back the other way again is security. With data from not just one company, but potentially hundreds of companies, in a single site, the temptation to hack and steal that data becomes irresistible to some. It may only take a couple of major breaches before people begin questioning the wisdom of these centralized cloud services. It was only the other week that the IRS 'accidentally' released thousands of social security numbers on the Internet. Oops! When the government starts collecting all of the health care data for the entire nation in its databases, what makes you think for a second that your health data will be any better protected? It will make the Sarbanes–Oxley Act of 2002 look like a joke. Besides, the concentration of that much data, just like the concentration of power, is not a wise thing. Someone will always be out there looking for a way to exploit that concentration.
So will the pendulum start to swing back again to personal devices? Will tomorrow's hand-held devices, whatever they look like, be more powerful than today's servers? Will it be common for individuals to walk around with terabytes of information in their personal devices? Will the computing capability of these devices, aided by voice control and artificial intelligence to create new program solutions, dominate the next pendulum swing? Or will personal computing devices go away entirely and be replaced with publicly available interfaces that can access your information from anywhere and perform research and develop applications that anyone can create by simple voice requests?
What do you think? C’ya next time.