Meet Nina. Nina loves dogs, especially her Labrador Max. She could play with him for hours. However, there are days when other things are more interesting than Max: school, friends and video games can sometimes push Max into the background, and then Nina doesn't play with him at all. She also skips the less attractive activities, such as bathing, brushing and walking the dog when it's pouring rain. I believe all those activities would become her father's (the author of this text) duty if Max were really Nina's dog. But he is not. He is our neighbour's dog. Still, Nina loves him as if he were her own. When she is in the mood, she plays with him. When she is not, she sends him back to his owners, who bathe him, feed him and walk him at dawn. At the age of only 7, Nina discovered all the benefits of the DaaS model (Dog as a Service). She uses the dog whenever and as much as she likes, while leaving the maintenance and all the costs that go with it to the owners.
DaaS model in production
To be honest, adults have been using similar models for a long time. A lot of people, for instance, don't buy their own cars but use public transport, because a car is too much of an expense, or they don't know how to replace a flat tire, or they simply don't want to deal with it. It appears that people would rather use somebody else's sailboat, skis and rackets than their own. They would also rather organize a child's birthday party in a playroom than in their own apartment. We must agree those are all good, rational and economically sensible ideas. Why organize your child's birthday party at home, prepare the food and clean the apartment afterwards, if you can have it in a playroom? People naturally appreciate concepts that bring them greater comfort and/or lower expenses. The world of microeconomics has long been familiar with the term "economies of scale". Transport companies, supermarkets, group buying and other similar businesses have grown on that premise. The IT world, with its computers, has always been a completely different story, developing independently. But with the development of technology, those concepts have finally entered the IT world.
Economies of scale mean that as the produced quantity increases, the cost of production per unit decreases. That is what the Chinese do.
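The effect is easy to see in a back-of-the-envelope sketch (all numbers here are invented for illustration): a fixed cost spread over more units makes each unit cheaper.

```python
# Economies of scale, minimally: total cost is a fixed cost plus a
# variable cost per unit, so the cost per unit falls as volume grows.
# The figures are made up for illustration.

def cost_per_unit(fixed_cost: float, variable_cost: float, units: int) -> float:
    """Total cost divided by the number of units produced."""
    return (fixed_cost + variable_cost * units) / units

# The same factory (fixed cost 100,000) at two production volumes:
small_batch = cost_per_unit(100_000, 5.0, 1_000)    # 105.0 per unit
large_batch = cost_per_unit(100_000, 5.0, 100_000)  # 6.0 per unit
```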
Computing in the cloud
Cloud computing enables doing business more easily and at lower cost. To be able to look realistically at all the benefits cloud computing offers, we have to take a step back, to a cloudless world. In that "sunny" world, a typical organization takes care of its IT infrastructure on its own. Basically, that means budgeting, purchasing the hardware infrastructure, paying licensing fees and installing system software. It gets even more complicated when you take into account that hardware and software installation requires specific skills, which the organization has to have internally or buy on the market. With that, the infrastructure's price just went up a bit. Every serious organization has to protect the continuity of its business, which means the environment must be built to be highly available. Oops, the price just doubled. All that infrastructure uses a lot of electricity, so air conditioning has to be installed to keep the equipment from overheating. Therefore, let's add a few overhead expenses. Every device has its mean time to failure, which means it will eventually have to be replaced. Let's add more maintenance expenses. Ka-chiiiiing! We could go into even more detail, but I believe it's not necessary. Even from this brief reflection it is obvious that IT infrastructure costs some real money. On top of that, people with very specific knowledge need to spend a lot of time on it. In short, no sign of the comfort or the lower expenses to which we all aspire.
Some of the first attempts to decrease expenses went in the direction of co-locating equipment to cut overhead costs, but the real progress happened only about 10 years ago, when virtualization technology was popularized and it became possible to run multiple logical (virtual) systems on the same hardware infrastructure. All of a sudden we didn't need 5 pieces of hardware for 5 servers; all 5 of them could run on a single piece of iron. Truth be told, such a concept has its flaws, such as slightly lower performance, but its advantages are endlessly bigger than its "flaws". Individual systems rarely need the hardware's full performance. Virtualization technology lets them co-exist on the same hardware, giving each one as many resources as it needs at a given moment, like a strict but righteous master. Ultimately, instead of 5 half-used pieces of hardware we have only one piece of iron, used just right. Economies of scale applied to the IT world.
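The consolidation gain can be sketched with a toy calculation (the CPU counts are invented for illustration):

```python
import math

# Server consolidation through virtualization, as a toy calculation:
# five servers that each use only a fraction of a machine's capacity
# can share a single physical host.

physical_host_cpus = 16

# Average CPU demand of five lightly loaded servers:
server_demand = [2, 3, 1, 4, 2]  # CPUs actually needed, on average

# Without virtualization: one dedicated physical host per server.
hosts_dedicated = len(server_demand)                                    # 5

# With virtualization: pack the workloads onto shared hosts.
hosts_virtualized = math.ceil(sum(server_demand) / physical_host_cpus)  # 1
```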
Numerous data centers were built on the economies-of-scale concept as external providers of hardware rental. The idea is really simple: if private hardware is so expensive, and it is rarely used to its full potential, let's build a big data center with lots of processing power and offer organizations the possibility to rent as much of it as they need. Because of economies of scale, building a big shared data center is cheaper than buying dedicated hardware for every organization. By using shared processing power, organizations can change their capacity flexibly, which is very important during brief peak-load situations. Without that flexibility, organizations would have to size their own systems for the peak load, which means all that extra capacity would sit unused the rest of the time. In shared systems, an organization can increase its capacity during the peak load and decrease it afterwards, all without the need for additional hardware, because the service provider takes care of it. That's why these systems are called cloud systems: they are elastic and opaque, just like real clouds in nature, and we don't actually care what is inside them and how they work, as long as we get the expected service: hiding the sun and producing rain and snow.
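How much does that elasticity matter? A rough sketch (with hypothetical prices and a one-week yearly peak) compares owning peak-sized hardware with renting shared capacity that scales up only when needed:

```python
# Peak-sized ownership vs. elastic rental, with invented numbers.

HOURLY_RATE = 0.10          # assumed cost of one server-hour
HOURS_PER_YEAR = 24 * 365   # 8760

baseline_servers = 4        # what normal load needs
peak_servers = 20           # what the busiest week needs
peak_hours = 24 * 7         # one week of peak load per year

# Own hardware sized for the peak: pay for 20 servers all year round.
fixed_cost = peak_servers * HOURS_PER_YEAR * HOURLY_RATE

# Cloud: pay for 4 servers normally, 20 only during the peak week.
elastic_cost = (baseline_servers * (HOURS_PER_YEAR - peak_hours)
                + peak_servers * peak_hours) * HOURLY_RATE

# With these numbers the elastic bill is a fraction of the fixed one.
assert elastic_cost < fixed_cost
```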
"Do I have to take care of everything in this house?"
Organizations want the liberty of choosing which parts of their IT infrastructure they handle themselves and which they leave for somebody else to do. On one side of the spectrum are organizations with their own strong IT departments, which have enough knowledge at all levels and feel safe maintaining every infrastructure layer: from hardware and system software to application solutions. Such organizations want to keep as many infrastructure aspects as possible under their own control, so they will leave nothing but the basics to someone else, such as hosting their private hardware in a rented space that ensures electricity, fire protection, air conditioning and a good internet link. Organizations with weaker IT departments will, apart from the hosting service, also rent the hardware, so that they don't have to worry about its maintenance, and some will even let the external service provider take care of the system software. On the opposite side of the spectrum are organizations without any IT department, which will leave the maintenance of every aspect of the IT infrastructure to others. Besides hardware and system software, that generally also includes taking care of the application solutions.
Serious organizations across different industries all have a similar IT infrastructure architecture. What differentiates them is the degree to which they are ready to take care of the architecture's layers themselves.
The picture shows a typical IT infrastructure architecture that is standard today. The lower layers of the architecture provide infrastructure services: network links, storage devices, hardware servers and virtualization mechanisms. Those layers form the base on which virtual servers run. The middle layers provide an execution environment: an operating system, application server, database and middleware, such as an enterprise bus or a message exchange system. The upper layers provide the application services and the data needed for their work. Organizations have potentiometers with which they decide which layers to engage in, depending on their budget and internal knowledge.
By rotating the potentiometer, organizations typically settle on one of three models. IaaS (Infrastructure as a Service) is a model in which the external service provider takes care of the lower, infrastructure layers, while the organization takes care of the execution environment, applications and data. In the PaaS (Platform as a Service) model the service provider takes care of the execution environment as well, while the organization takes care only of applications and data. In the third, SaaS (Software as a Service) model, the service provider takes care of everything, including both applications and data. The organization is solely a user and has no contact with the infrastructure.
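The three positions of the potentiometer can be written down as a simple table. This is only a sketch of the responsibility split described above; the layer names follow the architecture picture, and the helper function is invented for illustration:

```python
# The layers of the architecture, bottom to top.
LAYERS = [
    "network", "storage", "servers", "virtualization",  # infrastructure
    "operating system", "middleware", "runtime",        # execution environment
    "applications", "data",                             # top layers
]

# Index of the first layer the ORGANIZATION manages itself;
# everything below that index is handled by the service provider.
MODEL_SPLIT = {"IaaS": 4, "PaaS": 7, "SaaS": 9}

def managed_by_organization(model: str) -> list:
    """Layers the organization still takes care of under a given model."""
    return LAYERS[MODEL_SPLIT[model]:]

# Under SaaS the list is empty: the provider handles everything.
```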
Choosing a cloud model implicitly suggests the organization's focus, i.e., it shows what the organization chooses to spend its resources on. The IaaS model suggests a technologically strong organization with deep infrastructure knowledge. Application-development-oriented organizations tend to lean towards the PaaS model. The SaaS model is used mostly by non-technological organizations that don't want to spend much on IT infrastructure but want a finished application solution, so they can focus on developing their business.
IaaS model for infrastructure gurus
IaaS is the entry model for business in the cloud, in which the service provider rents bare infrastructure to the organization: specifically, network devices, disks, hardware servers and virtualization mechanisms. That model gives the organization an infrastructure in which it can create its own virtual machines and organize an IT ecosystem. The organization is responsible for installing the operating systems in the virtual machines, installing the system software (such as application servers, databases, messaging systems, …) and installing the applications. The service provider takes care of the underlying hardware infrastructure. Data centers are designed for high availability, so that the whole system is resistant to hardware component failures. If a component fails, the service provider is responsible for its recovery, and it offers its customers different levels of quality and service availability through contractual obligations. A lower level of service quality implies using hardware with lower performance: weaker processors, or regular disks instead of SSD storage. A lower availability level implies that longer periods in which the service is unavailable are allowed. Both affect the service's price: lower quality means cheaper hardware, lower availability means building a cheaper data center with less sophisticated high-availability solutions, so in both cases the price is lower for the end user.
Finally, neither the end users nor the administrators notice that the virtual machines are located somewhere across the world rather than in the basement. The organization is free to scale its capacity flexibly, and the cost of using the IaaS service is billed at the end of every month. It is therefore useful to turn off non-production environments in periods when they are not being used.
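Because billing is metered, that simple habit of switching off test machines pays off quickly. A back-of-the-envelope sketch, with an assumed hourly price and a hypothetical six-VM test environment:

```python
# Monthly IaaS bill for a test environment: always on vs. office hours
# only. The hourly rate and environment size are assumptions.

HOURLY_RATE = 0.20            # assumed price of one VM-hour
VMS_IN_TEST_ENV = 6

hours_in_month = 30 * 24      # 720
working_hours = 22 * 10       # ~22 workdays x 10 hours = 220

always_on = VMS_IN_TEST_ENV * hours_in_month * HOURLY_RATE
office_hours_only = VMS_IN_TEST_ENV * working_hours * HOURLY_RATE

savings = always_on - office_hours_only   # roughly 70% of the bill
```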
Among the renowned providers of IaaS services, SoftLayer (as part of IBM's offer), Amazon AWS and Windows Azure stand out, and OpenStack is a popular open-source solution.
PaaS model for application wizards
One step further is the PaaS model, in which the service provider takes care of the execution environment as well as the infrastructure. In the PaaS model, the service provider installs the operating system and the system software and takes care of their versions and security patches. The organization only has to bring the application to be executed. All the infrastructure necessary for running the application (application server, database, middleware…) is provided by the service provider. Application-development-oriented organizations prefer the PaaS model because they don't have to worry about the underlying infrastructure and can stay focused on software development alone. To simplify the development process, a PaaS infrastructure usually comes with numerous pre-installed services and with support for DevOps activities, about which we wrote in previous issues of our magazine. The pre-installed services include application servers, databases, messaging systems, VPN channels and more; all of them can be used inside the PaaS infrastructure without prior installation. The DevOps support includes a central code repository and the implementation of continuous deployment of new functionality into the production environment. Continuous deployment means defining an automated process (a so-called pipeline) that begins when a change enters the application's source code, includes building a new version of the application and delivering it to the test environment, where the correctness of the implementation is verified, and ends with shipping the new version to the production environment. The focus is on automating the process, so that a new version can be built, tested and shipped to production with as few human interventions as possible. That kind of automation enables companies like Amazon to deliver a new version into production every 11 seconds, as unreal as it may sound.
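The pipeline described above can be reduced to a toy sequence of stages. All the function names here are invented for illustration; a real pipeline would call a build tool, a test runner and a deployment service instead:

```python
# A continuous-deployment pipeline in miniature:
# build -> automated tests -> deploy to test -> deploy to production.

def build(commit: str) -> str:
    # In a real pipeline: compile the source and produce an artifact.
    return f"artifact-of-{commit}"

def run_tests(artifact: str) -> bool:
    # In a real pipeline: run the automated test suite.
    return artifact.startswith("artifact-of-")

def deploy(artifact: str, environment: str) -> str:
    # In a real pipeline: push the artifact to the given environment.
    return f"{artifact} deployed to {environment}"

def pipeline(commit: str) -> str:
    """Triggered automatically on every change to the source code."""
    artifact = build(commit)
    if not run_tests(artifact):
        raise RuntimeError("tests failed, stopping the pipeline")
    deploy(artifact, "test")            # verify in the test environment
    return deploy(artifact, "production")
```

The point of the structure is that no stage waits for a human: a failing test stops the pipeline, and a passing one flows straight through to production.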
A PaaS infrastructure is usually charged according to the number of user applications being executed and the number of installed services those applications use.
The most popular players in the PaaS arena are IBM Bluemix, about which we write in this issue, and Pivotal Cloud Foundry – both solutions based on the open-source Cloud Foundry platform – followed by Red Hat OpenShift, Amazon EC2 and Windows Azure.
SaaS as a model for ordinary civilians
The last stop on the potentiometer is the SaaS model, in which the provider takes care of both the infrastructure and the application solution, while the organization merely uses the application. This model's advantage is its extreme simplicity, which makes it ideal for non-technical organizations without their own IT department. The provider takes care of everything: the underlying hardware infrastructure as well as the development and shipment of new versions of the application solution. That is, at the same time, this model's flaw, because it gives the organization no control over the vision or the pace of the application solution's development. There is nothing worse than opening a ticket for a new functionality and then watching it sit in the "Waiting for Dev" status for ages, which can easily happen with this model. The SaaS model is most often charged per number of users using the service.
(N)either in heaven (n)or on earth
The cloud model comes in a few different deployment variants. A dedicated cloud is your private cloud, to which only you have access. A public cloud is shared infrastructure, available to others as well. An on-premise cloud is not actually a cloud, but cloud infrastructure inside your own basement. There is also the hybrid cloud, which lets you combine the clouds with the basement.
Cloud computing is here to stay
The cloud is a good idea from every aspect. Economics says cloud infrastructure is much cheaper than the one in the basement. Modern computing is moving inexorably towards connectivity. Even regulation is slowly following the trends, through initiatives like PSD2, whose goal is to encourage financial institutions to open their interfaces for client and account data management, so that third parties can offer quality financial services too. The famous SOA concepts are evolving into microservices, from which comprehensive systems are built that cross the borders of a single organization, which is this issue's subject. In that world there is no "mine" and "yours"… they are all "our" microservices. Each and every one of them has its own operating instructions in the form of an API and runs in the cloud, so it can be public and available to everyone. By breaking down barriers in regulation and between industries and countries, we are heading towards everything becoming one giant, colossal cloud. Step inside with us and pick the best seat.