Cloud Computing 04 Evolution of Computers



Evolution of Computers

In this video, let us take a look at how computers evolved. Actually, let us look at how Information Technology and its architecture evolved.

Evolution of Computers



The popular architecture during the early stages of using Information Technology systems in a corporate environment was a single powerful computer connected to multiple display devices. This single powerful computer was generally a Mainframe computer. All the programs were executed on this computer. However, you did not have to sit in front of it to run the programs. Instead, terminals, which were essentially a keyboard and a monitor running a built-in piece of software called a Terminal program, were connected to this computer. They were connected using a serial port, which is several orders of magnitude slower than your current broadband Internet connection at home. These serial ports could, however, transmit data over long cables. So it was common in a large company for the mainframe to sit in a small data-center room while multiple terminals in other rooms and on other floors were connected to it through long cables.

Personal Computers


Then, during the 70s, Personal Computers, or PCs if I may use the acronym, started to revolutionize the Information Technology industry. Why? They were relatively inexpensive compared to Mainframes and commonly available as commodity devices from stores and malls. So, all of a sudden, a small business could afford to buy a couple of PCs. The executive assistants to senior management started receiving PCs at work, running word-processing applications such as WordStar, WordPerfect or Word. The finance team had a couple of PCs, each running accounting software. The marketing team had a couple of PCs to print marketing materials such as brochures, catalogs and so on.


However, these computers were not connected. If someone needed to transfer a file from one to another, they would most probably have to copy the files to floppy disks, take them to the other computer and copy the files there. If you are in your twenties now, and now being the year 2020, please google what a floppy disk is to understand what it was. I could say "Just kidding". But no, seriously, please google it.

Mainframe vs Personal Computers

These were very small in size compared to mainframes. A mainframe would occupy a significant amount of space in a data-center room, while a Personal Computer could be put on a desk. And so, a PC was also referred to as a Desktop in those days. Nowadays, the Desktop is just one of the forms that a Personal Computer comes in.


Anyway, at some point even the big businesses started using a hybrid environment containing Mainframes and PCs. The dumb terminals continued to use the Mainframe computer as they normally did. The PCs were loaded with Terminal Emulation software, which allowed them to interact with the Mainframes just like a terminal did. The important thing to understand is that the PCs were also connected using those slow serial cables. This was not the full-fledged networking we see now.

Network Interface Cards

Soon the industry started racing ahead on the Computer Networking front. Dedicated Network Interface Cards were installed in the PCs. Network cables that could transmit data faster than the serial cables were installed around the premises. The computers were also installed with certain network protocols, which are the equivalent of a common language that all the computers must use when communicating with each other. And Computer Networking began to flourish. All of a sudden, people did not have to use floppy disks to transfer files between two computers; they could use the network instead. But who controlled the network? The answer is: it depends on the type of the network.
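To make the idea of a protocol as a "common language" concrete, here is a minimal sketch in modern Python, obviously not software from that era. The convention itself is made up for the illustration: both sides agree that the client sends one short message per connection and the server replies with the same bytes prefixed by "echo: ". Both endpoints run on one machine purely to keep the sketch self-contained.

```python
import socket
import threading

# The "protocol" here is a toy convention both sides agree on:
# one short message per connection, and the listener replies
# with the same bytes prefixed by "echo: ".
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    data = conn.recv(1024)            # receive the client's message
    conn.sendall(b"echo: " + data)    # reply using the agreed convention
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# The "other computer" on the network:
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024).decode()
cli.close()
t.join()
srv.close()
print(reply)   # -> echo: hello
```

If either side deviated from the agreed convention, communication would break down, which is exactly why every computer on a network must run the same protocol software.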

Peer to Peer Networking

Peer-to-Peer networking is one of the earliest forms of networking. In this model, the resources are shared by the individual computers themselves. For example, a computer can make its entire C drive available on the network. Other computers can use that "share" (that is how a resource shared on the network is referred to) and read from or write to it based on the access granted by the computer sharing it. Another computer can share just a folder, and other computers can use that share too. Here there is no centralized control. If the operator of a computer shuts that computer off, then the resources shared by it will not be available on the network until the computer is powered back on.
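As a rough modern analogy to a peer sharing one of its folders (this is not the NetWare- or Windows-style file sharing of that era, just an illustrative stand-in), Python's built-in HTTP server can make a directory readable to other machines. The directory and file names below are made up for the sketch.

```python
import functools
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

# Create a throwaway directory playing the role of a shared folder.
share_dir = tempfile.mkdtemp()
with open(os.path.join(share_dir, "hello.txt"), "w") as f:
    f.write("shared file contents")

# "Share" the folder: serve it read-only over HTTP on a free local port.
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=share_dir)
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# A second computer on the network would do the same request using the
# sharing computer's address instead of 127.0.0.1.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/hello.txt") as resp:
    text = resp.read().decode()

httpd.shutdown()
httpd.server_close()
print(text)   # -> shared file contents
```

Note that the share exists only while the sharing process is running: stop it, and the resource disappears from the network, which is exactly the weakness of peer-to-peer sharing described above.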

Then Centralized Servers came into the picture as Personal Computers became more and more powerful. This was due to the lightning pace at which processors were improving. That, along with the falling cost of storage and memory, enabled even small businesses to afford multiple powerful PCs.


Here the centralized computer contained the resources that were shared across the network. It would share folders or entire disk drives. Since these shares came from a single, powerful computer, they were easy to control from a central location. Since this computer's whole purpose was to serve its resources to the network, it was referred to as a Server Computer, or just Server. This powerful computer would reside in a well-protected location called a Data Center, as opposed to the user computers, which would sit on or under the users' desks. Only the administrators were given physical access to the central Server computer. With an Uninterruptible Power Supply, aka UPS, powering these computers, a Server Computer running 24x7 (that is, running all the time) became a common thing. Because of that, network sharing became a stable and common environment starting in the 90s. The shared resources would be available even if a user computer was turned off.

Since these Server Computers share their resources, the Operating Systems running on them had to be different from the Operating Systems running on the user computers. They had to be much more powerful, multi-user oriented, as well as highly secured. These Operating Systems are called Server Operating Systems. UNIX, Novell NetWare, Windows NT (back in the 90s) and now Windows Server are, and were, good examples of Server Operating Systems. The reason I am saying "were", as in past tense, is that Novell NetWare, which revolutionized the PC networking industry, does not exist any more. Neither does Windows NT, well, technically. That name has been discontinued by Microsoft, which was a little bit of a disappointment for me personally, as I was one among the earliest Microsoft Certified Systems Engineers in Windows NT 3.51 back in the mid 90s. But I also understood that NT, which was an acronym for New Technology, could not continue to be called New Technology once it got a little bit old.


Around that time, the Servers were running Server Operating Systems such as Windows Server, Linux and UNIX, while the desktops were running the desktop versions of Windows such as Windows 95, 2000 and XP, up to now, Windows 10.


However, what about the security of the data? How did the IT industry address the need to restrict access to the data based on who is accessing it? In the next video we will talk about how security was implemented.
