A Brief History of Cloud Computing

July 29, 2013

Believe it or not, "cloud computing" concepts date back to the 1950s, when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users accessed the mainframe via "dumb terminals": stations whose sole function was to facilitate access to the mainframe. Because of the cost of buying and maintaining mainframes, an organization couldn't afford a mainframe for each user, so it became common practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple of decades later, in the 1970s, IBM released an operating system called VM that allowed admins on their System/370 mainframe systems to run multiple virtual systems, or "Virtual Machines" (VMs), on a single physical node. The VM operating system took the 1950s practice of shared mainframe access to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software you see today can be traced back to this early VM OS: Every VM could run a custom operating system, or guest operating system, that had its "own" memory, CPU, and hard drives, along with CD-ROMs, keyboards, and networking, despite the fact that all of those resources were shared. "Virtualization" became a technology driver and a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow for more users to have their own connections, telco companies were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to allow for better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people demanded to get online, those costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run its sites and applications. With virtualization, you could take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware you need to meet your company's requirements.

Virtualization

As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system could present all of the environment's resources as though they were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing," since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like the telcos did in the '90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.
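To make that pooling idea concrete, here is a minimal toy sketch in Python. It is not any real hypervisor's API; the class and method names are illustrative assumptions that simply model how capacity from several nodes can be pooled and carved into instances on demand:

    # Toy model of a pooled "cloud": physical nodes contribute capacity,
    # and instances are carved out of the combined total.
    # All names here are illustrative, not a real hypervisor API.

    class Node:
        def __init__(self, name, cpus, ram_gb):
            self.name, self.cpus, self.ram_gb = name, cpus, ram_gb

    class ResourcePool:
        def __init__(self):
            self.nodes = []
            self.free_cpus = 0
            self.free_ram_gb = 0

        def add_node(self, node):
            # "Just add another server to the rack": capacity grows with each node.
            self.nodes.append(node)
            self.free_cpus += node.cpus
            self.free_ram_gb += node.ram_gb

        def provision_instance(self, cpus, ram_gb):
            # Carve an instance out of the pooled capacity, if it fits.
            if cpus <= self.free_cpus and ram_gb <= self.free_ram_gb:
                self.free_cpus -= cpus
                self.free_ram_gb -= ram_gb
                return {"cpus": cpus, "ram_gb": ram_gb}
            raise RuntimeError("not enough capacity in the pool")

    pool = ResourcePool()
    pool.add_node(Node("node1", cpus=16, ram_gb=64))
    pool.add_node(Node("node2", cpus=16, ram_gb=64))
    instance = pool.provision_instance(cpus=4, ram_gb=8)  # "powered up" from the pool
    print(pool.free_cpus, pool.free_ram_gb)               # 28 120

The point of the toy model is simply that capacity is tracked for the pool as a whole rather than per machine, which is what makes adding a node or carving out an instance feel instantaneous to the user.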

Clouds

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to bring the cloud's benefits to users who don't happen to have an abundance of physical servers available to build their own cloud computing infrastructure. Those users could order "cloud computing instances" (also known as "cloud servers") by selecting just the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (since it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. And because we automate almost everything in our data centers, you're able to spin up load balancers, firewalls, and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.
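As a rough illustration of what that kind of automation looks like from the customer's side, here is a hedged sketch of an automated bare metal order. The endpoint URL, payload fields, and authentication below are hypothetical placeholders, not the actual SoftLayer API:

    # Hypothetical sketch of ordering a bare metal server through a provisioning API.
    # The endpoint URL, payload fields, and auth header are illustrative placeholders,
    # not SoftLayer's real interface.
    import json
    import urllib.request

    ORDER_ENDPOINT = "https://api.example.com/v1/bare-metal/orders"  # placeholder URL

    order = {
        "datacenter": "dal05",
        "cpu_cores": 8,
        "ram_gb": 32,
        "disks": ["1TB SATA", "1TB SATA"],
        "os": "CentOS 6.x (64-bit)",   # installed directly on bare metal, no hypervisor
        "hourly_billing": True,
    }

    request = urllib.request.Request(
        ORDER_ENDPOINT,
        data=json.dumps(order).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer YOUR-API-KEY"},
    )

    # In a real workflow you would submit the request and poll an order-status
    # endpoint until the automation reports the server online, e.g.:
    # with urllib.request.urlopen(request) as response:
    #     print(json.load(response)["status"])

The detail that matters is that the whole transaction is an API call against an automated system; no human has to rack, cable, or image the machine before it is handed over.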

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.

-James

Comments

August 28th, 2013 at 2:29pm

Thank you for this great info. I did not know that SoftLayer had developed its own admin/control layer. What does IMS mean?

Is it possible to install any legacy application or database in a "cloud" environment? Does the cloud include high availability? Under which technology?

Thank you for your answer to my questions and congratulations to join forces with Big Blue...

Luis Sanchez

August 28th, 2013 at 2:37pm

Bare metal servers will always perform better than a hypervisor-managed VM: there is no danger of resource contention from other VMs, and no hypervisor overhead. Automating server and network configuration in a data center sounds like a great combination of the performance of bare metal servers with the convenience of offsite, web-managed hardware.

August 28th, 2013 at 3:41pm

Thank you for the questions, Luis.

IMS stands for "Infrastructure Management System." We see a significant number of customers who want to move their legacy applications into virtual or bare metal cloud environments. Depending on how the application or database is coded, the difficulty of migrating will vary, but the long-term benefits of those efforts are more than worth any of the short-term challenges of tweaking your code.

The purest example of a "high availability" architecture is one in which two identical server environments are online in two different physical geographies. If something brings one of those environments offline, the other environment seamlessly processes all server requests until both are back online. In that regard, ordering a bare metal or cloud server in a single data center would not qualify as "high availability," but those individual environments have multiple types of redundancy in place to prevent or mitigate widespread outages and issues. Because all of your servers are connected to each other via SoftLayer's private network, it's easy to mirror your environment from one location to another if you'd like your application to run in a true "high availability" environment.
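To sketch the failover half of that picture, here is a minimal, hypothetical health-check loop. The URLs, interval, and the switch_traffic_to() stub are placeholder assumptions rather than a SoftLayer feature; they simply illustrate routing requests to the mirror when the primary environment stops answering:

    # Minimal sketch of active/passive failover between two mirrored environments.
    # URLs, timing, and switch_traffic_to() are illustrative placeholders.
    import time
    import urllib.request

    PRIMARY = "https://app.dc-a.example.com/health"
    SECONDARY = "https://app.dc-b.example.com/health"

    def is_healthy(url, timeout=3):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except Exception:
            return False

    def switch_traffic_to(url):
        # In practice this would update DNS or a load balancer pool;
        # here it only records the decision.
        print("routing traffic to", url)

    active = PRIMARY
    while True:
        if not is_healthy(active):
            # Fail over to whichever environment is still answering.
            active = SECONDARY if active == PRIMARY else PRIMARY
            switch_traffic_to(active)
        time.sleep(30)

Real deployments layer data replication and more careful health checks on top of this, but the core decision loop is that simple: check the active environment, and if it goes dark, point traffic at the mirror.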

September 2nd, 2013 at 6:01am

Do you not think that, as cloud computing evolves, it will come full circle and end up as a service equivalent to what mainframes offered in the early days? Putting the technology aside, a customer wants access to the information that runs their business. In the last 20 years the only solution for relatively small organisations was to buy infrastructure - tin. Nowadays, virtualisation reduces the amount of tin, but the complexity is inherited. Complexity is still a problem, and the inefficiency is multiple virtual operating systems running on one piece of tin. Simplify this and the result is something that runs on a single operating system, like a mainframe from all those years ago. Multi-tenancy, lots of apps running on it, security, managed by one entity, the single operating system. What was a green screen years ago will be a tablet, a laptop or a smartphone - an entry point to access data.
I am a firm believer that simplest is best. Data users and organisations will take one step back and ask their technologists to create and give them access to applications to run their business. Better to spend their time, money and effort on the application. And they do not want to pay for all the junk in between. Is this turning full circle? Perhaps.

September 3rd, 2013 at 10:17am

CTSS 7094
http://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
some of the CTSS people went to Project MAC and multics on the 5th flr, others went to science center on the 4th flr and did (virtual machine) cp40/cms
http://en.wikipedia.org/wiki/CP-40
Comeau's cp40 paper
http://www.garlic.com/~lynn/cp40seas1982.txt
which morphs into (virtual machine) cp67/cms (and later into vm370)
http://en.wikipedia.org/wiki/CP/CMS
above also mentions that early spin-offs of the science center started offering commercial, online (virtual-machine) cp67/cms based services ... this was also some of the early 7x24 work ... including non-disruptive migration between systems in loosely-coupled configurations, in support of the requirement to take systems down for hardware maintenance.

and then there was ms/dos
http://en.wikipedia.org/wiki/MS-DOS
before ms/dos there was seattle computer
http://en.wikipedia.org/wiki/Seattle_Computer_Products
and before seattle computer there was cp/m
http://en.wikipedia.org/wiki/CP/M
and before cp/m, kildall worked on cp67/cms at npg school (gone 404 but lives on at wayback machine)
http://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
npg reference
http://en.wikipedia.org/wiki/Naval_Postgraduate_School

also used by institutions that needed high-integrity, high-security, online access ... gone 404 but (also) lives on at the wayback machine
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

Grid Computing; Hook enough computers together and what do you get? A new kind of utility that offers supercomputer processing on tap.
http://www.technologyreview.com/featuredstory/401444/grid-computing/

from above:

Back in the 1980s, the National Science Foundation created the NSFnet: a communications network intended to give scientific researchers easy access to its new supercomputer centers. Very quickly, one smaller network after another linked in-and the result was the Internet as we now know it. The scientists whose needs the NSFnet originally served are barely remembered by the online masses.

... snip ...

since also morphed into cloud computing

as I've periodically mentioned, tcp/ip is the technology basis for the modern internet, nsfnet backbone was the operational basis for the modern internet and cix was the business basis for the modern internet.

originally we were to get $20M to tie together the NSF supercomputer centers, then congress cuts the budget and a few other things happened, finally NSF released an RFP. Internal politics prevents us from bidding on the RFP. The director of NSF tries to help by writing the company a letter (copying the CEO) ... but that just makes the internal politics worse (as does references like what we already have running is at least five years ahead of all RFP responses).

misc. past NSFNET related email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

And early “GRID” computing … some old email
http://www.garlic.com/~lynn/lhwemail.html#medusa

working with both the national labs as well as commercial for large number of processors in large number of racks. Within hrs after the last email in above (end Jan1992) … the cluster scaleup was transferred and we were told we couldn’t work on anything with more than four processors. Within a couple weeks, it was then announced as supercomputer for numerical intensive and scientific *ONLY* … old press from 17Feb1992
http://www.garlic.com/~lynn/2001n.html#6000clusters1
to add insult to injury … 11May1992, interest in cluster caught company by *surprise*
http://www.garlic.com/~lynn/2001n.html#6000clusters2

more archeological trivia

old email about doing the vm/4341 LLNL benchmark; they were looking at getting 70 systems for a compute farm … sort of an early GRID-computing precursor and alternative to large monolithic supercomputers.
http://www.garlic.com/~lynn/2006y.html#email790220

the big explosion in vm/4341 systems was major reason that the internal network passed 1000 nodes in 1983 … old post with list of corporate locations that had one or more new nodes added during 1983
http://www.garlic.com/~lynn/2006k.html#8
other old email about the 1983 1000 node event
http://www.garlic.com/~lynn/2006k.html#email830422
co-worker at the science center responsible for the internal network technology (internal network larger than the arpanet/internet from just about the beginning until sometime late ’85 or early ’86)
http://en.wikipedia.org/wiki/Edson_Hendricks
also used for the univ. bitnet
http://en.wikipedia.org/wiki/BITNET

post with reference to commercial cluster scaleup meeting in Ellison’s conference room early Jan1992 (before cluster scaleup was transferred and we were told we couldn’t work on anything with more than four processors)
http://www.garlic.com/~lynn/95.html#13

two people in that meeting later leave and join a small silicon valley client/server startup. after cluster scaleup transfers and we are told we can’t work on anything with more than four processors, we also decide to leave. We are later brought in as consultants to the client/server startup because they want to do payment transactions on the server; the startup had also invented some technology they call “SSL” they want to use. We work on mapping “SSL” technology to the payment transaction business processes … some people may recognize it; it is now frequently called “electronic commerce”.

first webserver in the US on slac’s vm370 system:
http://www.slac.stanford.edu/history/earlyweb/history.shtml

GML was invented at the science center in 1969 (letters chosen because they are first letter of last name of the inventors) and gml tag processing support added to cms script. past posts mentioning gml/sgml
http://www.garlic.com/~lynn/submain.html#sgml

after a decade, gml morphs into iso standard sgml
http://www.sgmlsource.com/history/roots.htm

after another decade, sgml morphs into html at cern
http://infomesh.net/html/history/early

A native, general purpose operating system tends to become extremely complex, inefficient and bloated. Running a complex, inefficient, bloated general purpose operating system under a hypervisor adds the processing that the hypervisor needs to do.

Partitioning can frequently result in simplicity and more efficiency. The simplicity of a hypervisor implementation can allow a significantly increased focus on efficient implementation, and on efficient, focused services running in different virtual address spaces. In a large, bloated general purpose operating system it is frequently impossible to attribute overhead and/or performance issues. It is much simpler in a less complex, partitioned environment.

These were called “service virtual machines” back under cp67 … and may now be referred to as “virtual appliances”

In vm370 days, my pathlength to turn a page was 1/10th that required in MVS. Running MVS under vm370 … with both doing paging … added the vm370 paging overhead to what MVS was already doing. However, VS1 was modified for “hand-shaking” mode running under vm370 (where it bypassed doing a lot of things that it would do in a real machine) and could run more efficiently than when running directly on real hardware.

September 3rd, 2013 at 11:05pm

A great story! It is sometimes useful to tell youngsters the new technologies don't come from nowhere.

September 6th, 2013 at 2:27am

@Susan Bilder: "Bare metal servers will always perform better than a hypervisor-managed VM"

Not true. We see several cases where the hypervisor can exploit hardware resources more efficiently and provide a virtual machine that runs the workload faster than real hardware would have done. The obvious case is when the guest operating system is older than the hardware.

I realize this situation is unique to the mainframe environment, where the extreme degree of compatibility (across generations as well as virtualization layers) means that nobody would consider upgrading hardware and software at the same time. This is unlike some other platforms, where a hardware replacement almost always implies replacement of the operating system and an upgrade of the application stack.

And sometimes it just does not matter anymore. For many workloads we don't care when the transaction takes 2% more CPU cycles or gets delayed by 5-10 ms. Availability and consistent response times are often more significant, and may explain why IBM mainframes haven't run without the PR/SM virtualization layer for 20 years or so...

During my presentation http://www.rvdheij.nl/Presentations/nluug-2007.pdf someone asked "how long does it take to boot Linux with those two virtualization layers" and the audience was seriously stunned when I explained we "get the root prompt in 2.3 seconds" :-) Performance instrumentation and the associated disciplines are what make z/VM virtualization stand out, since they help you diagnose performance problems and address the issues.

PS My desktop PC is bare metal, but why is my workload competing with virus scanning, backup, and endless interference of unscheduled software upgrades...

December 27th, 2013 at 3:55am

It's quite amazing how quickly technology has come to this; in the pre-1950s era we had virtually no electronic computational capacity, and fast forward half a century later, we practically live in the cloud! Cloud computing technologies and networks really foster the growth of our technology and the rate at which it advances. Just think of the possibilities 20 years further down the line from today. Whether we like it or not, cloud technologies have revolutionized what it means to be human. Just the other day I came across a site offering QR codes for the dead! The QR code would be etched on the gravestone of the deceased, and when a loved one scanned it with their mobile device, it would take them to a Facebook-like page with a timeline and detailed biography of that person's life ... a great memory. Just my two cents!

January 7th, 2014 at 10:48am

There were a number of things done to cp67 virtual machine operation for 7x24 online operation. Part of it was drastically reducing the operator and human intervention requirement as part of extending into offshift operation. The service bureaus that spun off from the science center and provided commercial cp67 online 7x24 access (most similar to current cloud operation) made some additional changes that didn't show up in the standard product. One was transparent, non-disruptive process migration in loosely-coupled operation. The issue was that hardware of the period required downtime for regularly scheduled preventive maintenance ... the transparent, non-disruptive process migration allowed a system to be taken offline for maintenance (transparent to all the users). Later the system could be brought back online ... and system load-balancing would redistribute workload across all available systems. This predated PR/SM by 20 years.
