The origin of cloud computing can be traced back to 1961, when John McCarthy first conceived the idea. His vision was that computation should be organized as a public utility to lower the cost of computing, enhance reliability, and relieve users from owning and operating complicated computing infrastructure. Since then, advances in enabling technologies such as virtualization, commodity hardware, and Grid computing have brought about the realization of the cloud computing paradigm. Cloud computing’s flexibility and scalability continue to enhance the agility and effectiveness of large organizations and the reliability and efficiency of large-scale applications. However, performance remains a serious concern: studies have shown that performance variability is a critical issue for cloud application providers because it strongly impacts the end-user’s quality of experience. In this discussion, we look in detail at what cloud computing is, cloud deployment models, delivery models, datacenters, and more.
What Is Cloud Computing?
Cloud computing can be defined as a paradigm shift in how computation is delivered and utilized. It is based on the idea that information processing can take place more efficiently on large computing farms and storage systems accessible via the Internet. It offers a set of technologies that enable the provision of computational resources, such as compute and storage, as a service over the Internet on a pay-per-use or on-demand basis.
Two factors contribute to the high adoption rate of the cloud: (1) its ability to offer smooth scalability and elasticity to consumers without huge initial capital expenses, and (2) its metered resources, so that users are billed only for what they have consumed. These two appeal factors have led to a drastic change in how IT (Information Technology) departments and application providers organize and manage IT services.
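The pay-per-use billing model described above can be illustrated with a minimal sketch. The resource names and rates below are purely hypothetical, not any real provider's price list:

```python
# Hypothetical pay-per-use billing sketch: users are charged only for
# the resources they actually consumed during the billing period.
RATES = {                      # illustrative prices, not a real provider's
    "vm_hours": 0.05,          # $ per VM-hour
    "storage_gb_months": 0.02, # $ per GB-month
    "egress_gb": 0.09,         # $ per GB transferred out
}

def metered_bill(usage: dict) -> float:
    """Return the total charge for the metered usage of one period."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

print(metered_bill({"vm_hours": 720, "storage_gb_months": 100, "egress_gb": 50}))
# 720*0.05 + 100*0.02 + 50*0.09 = 42.5
```

The key property is that the bill is a pure function of measured consumption; an idle customer pays nothing.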
Cloud Delivery Models
Depending on the abstraction level at which a service is provided, public clouds have three main delivery models, described below:
- Infrastructure as a Service (IaaS)– fundamental computing resources such as servers, network, and storage are provided. Consumers are responsible for installing the operating system (OS) and can create arbitrary applications using development tools of their choice. IaaS providers include Google Cloud Platform, Rackspace, Amazon EC2, and Microsoft Azure Infrastructure.
- Platform as a Service (PaaS)– this model gives cloud users the capability to develop, deploy, and manage applications using the development tools, operating systems, APIs, and hardware supported by a provider. The cloud user controls only the application, its architecture, and its hosting-space configuration, but not the network, storage, servers, or underlying operating system. Among the most renowned PaaS offerings available today are Windows Azure, Google App Engine, and Amazon’s Elastic Beanstalk.
- Software as a Service (SaaS)– end-users consume finished applications, accessible via various client devices over the Internet, together with the required software, OS, network, and hardware. Users may enjoy some minimal customization, but they have limited control over the application, its platform, and the underlying infrastructure. Examples of SaaS applications include Web 2.0 applications (like WordPress and LinkedIn), accounting systems (e.g. NetSuite), and Customer Relationship Management systems (e.g. ServiceNow).
Clouds Deployment Models
How are cloud infrastructures categorized? The classification can be based on ownership or on how they are managed. Five deployment models are available for cloud infrastructure so far, as described below:
- Private clouds– these are managed and used by individual organizations. Many organizations adopt private clouds to accrue the benefits of cloud computing, such as scalability and flexibility, while retaining complete control over their infrastructure and security.
- Public clouds– these are infrastructures managed by an organization that leases them to third parties on a pay-per-use basis. Examples include Rackspace, Google Cloud Platform, and Microsoft Azure.
- Community clouds– these are used and shared by groups of people or organizations with a mutual interest, e.g. the North Carolina Education Cloud (NCEdCloud).
- Hybrid clouds– these are so called because they combine two or more clouds. For instance, if a large organization that hosts its core IT infrastructure on a private or community cloud wants to expand its capacity to meet a sudden surge in user demand, it can lease resources from one or more public clouds, hence the name “hybrid cloud.” The mechanism through which this happens is called cloud bursting.
- Federated clouds– these are an emerging type of cloud consisting of only public clouds, only private clouds, or both, providing end-users with a seemingly unlimited computing utility service. They enable high interoperability and compatibility between different cloud services through open APIs that allow cloud users to combine services from various vendors or transfer data easily across platforms. An example of a federated cloud is one established among numerous datacenters owned by a single cloud provider.
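The cloud-bursting mechanism behind hybrid clouds can be sketched as a simple placement rule: serve demand from the private cloud first, and lease public-cloud capacity only for the overflow. The capacity figure below is an arbitrary illustration:

```python
# Hypothetical cloud-bursting sketch: when demand exceeds the private
# cloud's capacity, only the overflow is placed on public-cloud resources.
PRIVATE_CAPACITY = 100  # illustrative: max load units the private cloud handles

def place_load(demand: int) -> dict:
    """Split incoming demand between the private and a public cloud."""
    private = min(demand, PRIVATE_CAPACITY)
    public = max(0, demand - PRIVATE_CAPACITY)  # burst only the overflow
    return {"private": private, "public": public}

print(place_load(80))   # fits on-premises
print(place_load(140))  # surge bursts 40 units to the public cloud
```

In practice the decision also weighs cost, data-transfer latency, and compliance, but the overflow principle is the same.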
What are cloud datacenters? They can be regarded as the power houses of cloud computing, since they house the servers, communications, and storage systems needed for cloud computing. These systems are co-located in datacenters because of their similar physical security, environmental, and maintenance requirements, and consolidating them helps ensure their effective utilization. Multiple applications can share server resources, avoiding under-utilization and server sprawl in datacenters.
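One simple way to picture consolidation is as a bin-packing problem: place VMs onto as few servers as possible. The first-fit heuristic below is a deliberately simplified sketch (real datacenter schedulers consider many more dimensions than CPU cores, which are the only resource modeled here):

```python
# First-fit consolidation sketch: each VM goes on the first server with
# enough spare capacity; a new server is powered on only when none fits.
SERVER_CAPACITY = 16  # illustrative: CPU cores per physical server

def consolidate(vm_demands: list) -> list:
    """Pack VM core demands onto servers; returns one list per server."""
    servers = []
    for vm in vm_demands:
        for srv in servers:
            if sum(srv) + vm <= SERVER_CAPACITY:
                srv.append(vm)
                break
        else:
            servers.append([vm])  # no fit: power on a new server
    return servers

print(consolidate([8, 4, 8, 2, 6]))  # five VMs packed onto two servers
```

Fewer active servers means less stranded capacity and lower energy cost, which is exactly the consolidation benefit described above.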
The technology that enables consolidation and sharing is virtualization, which provides performance isolation among co-located applications. Consolidation is cost-efficient as it reduces capital and operational expenses and lowers energy consumption. Two mainstream platforms are available for realizing virtualization today: hypervisor-based virtualization and container-based virtualization. Hypervisor-based virtualization (e.g. Xen, KVM, and Hyper-V) emulates machine hardware and allows instances of the emulation (virtual machines, VMs) to run on a shared physical machine managed by a specialized operating system called the hypervisor. The approach is OS-agnostic, since the guest OS (the OS installed in the VM) may differ from the host OS (the OS running on the VM’s physical host).
Container-based virtualization does not emulate a hardware platform but provides virtualization at the OS level, reducing the performance and speed overhead of hypervisor-based virtualization. It allows multiple isolated Linux environments (containers) to share the base kernel of the host OS. Examples of container-based virtualization platforms are LXC, Docker, and OpenVZ. Despite enjoying increasing adoption, containers have not yet displaced VMs, which remain the common unit of cloud deployment because of the maturity of VM technology. VMs can easily be moved from one server to another to balance load across the datacenter or to consolidate workloads onto fewer servers.
VMs are also easy-to-manage, software-defined units, and they facilitate the elasticity of the cloud. They can be replicated across servers (horizontal scaling), and their resource capacity (such as CPU cores) can be increased or reduced to address overload or underload situations (vertical scaling). Infrastructure as a Service (IaaS) datacenters face a resource-allocation challenge: how to optimally share computing, storage, and network resources among a set of VMs in a way that meets the service objectives of both the service providers and the cloud provider.
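The two scaling directions just described can be sketched as one threshold-based rule. The utilization thresholds and step sizes are hypothetical; real autoscalers tune these from workload history:

```python
# Elasticity sketch: scale horizontally (change the replica count) or
# vertically (resize a VM's cores) based on observed CPU utilization.
def scale(replicas: int, cores: int, utilization: float, mode: str):
    """Return a new (replicas, cores) pair for one scaling step."""
    if utilization > 0.8:          # overload: add capacity
        if mode == "horizontal":
            return (replicas + 1, cores)
        return (replicas, cores * 2)
    if utilization < 0.2:          # underload: shed capacity
        if mode == "horizontal":
            return (max(1, replicas - 1), cores)
        return (replicas, max(1, cores // 2))
    return (replicas, cores)       # within target band: no change

print(scale(2, 4, 0.9, "horizontal"))  # overload -> one more replica
print(scale(2, 4, 0.9, "vertical"))    # overload -> double the cores
```

Horizontal scaling suits stateless services that tolerate replication; vertical scaling suits workloads bound to a single instance.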
As the adoption of cloud computing grows, so does the need for automation and programmability. One growing role is running mission-critical and performance-sensitive applications such as analytics, mobile services, the Internet of Things (IoT), and machine-to-machine communications, which raises the demand for agility, performance, and energy efficiency. This is what has led to the recent trend towards Software-Defined Infrastructures (SDI).
To facilitate automation, several studies have applied autonomic computing techniques to different aspects of managing current clouds, such as resource allocation, rescheduling, and energy management. SDIs are anticipated to turn compute, storage, and network infrastructures into software-defined and dynamically programmable entities. But what is an SDI? A simple definition: it is an infrastructure that continuously transforms itself by properly exploiting heterogeneous capabilities and utilizing insights obtained from built-in deep monitoring to constantly honor consumer SLAs amid provider constraints such as cost and energy. Only a few SDI products or platforms are currently available, and many aspects of the technology are still evolving. Even so, the key architectural components of SDIs are as follows:
- Software-defined Compute (SDC)– it performs two roles: (1) enhancing the programmability of existing virtualized compute resources such as VMs, containers, and virtual CPUs, and (2) leveraging specialized processing units, e.g. GPUs (graphics processing units), FPGAs (field-programmable gate arrays), and other accelerators. It decouples the provisioning of heterogeneous compute resources from the underlying hardware or OS so that provisioning is driven by identified or discovered workload needs.
- Software-defined Network (SDN)– it separates the control and management functions of the network infrastructure from the underlying hardware, moving them into software. It does so to improve performance isolation, security, programmability, and effectiveness. By means of virtualization, it allows network resources to be turned into virtual devices (such as links, endpoints, and switches) that connect the various virtual storage and compute instances.
- Software-defined Storage (SDS)– it manages huge data volumes by separating the control and management functions from the data storage system. This separation helps reduce management complexity and lower the cost of infrastructure.
An SDI controller manages all SDI components by providing the control intelligence needed to meet workload requirements. It uses the classical MAPE (Monitor, Analyze, Plan, Execute) loop to continuously track the current state of SDI entities.
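One iteration of such a MAPE loop can be sketched as below. The monitored metric, the threshold, and the adaptation actions are all hypothetical placeholders for whatever a real controller observes and actuates:

```python
# Minimal sketch of one MAPE iteration in an SDI-style controller:
# Monitor -> Analyze -> Plan -> (Execute returns the chosen action).
def mape_step(state: dict) -> str:
    metric = state["cpu_util"]                  # Monitor: read current state
    overloaded = metric > 0.8                   # Analyze: compare to objective
    plan = "add_vm" if overloaded else "no_op"  # Plan: pick an adaptation
    return plan                                 # Execute: hand plan to actuators

print(mape_step({"cpu_util": 0.92}))  # overloaded -> add_vm
print(mape_step({"cpu_util": 0.40}))  # healthy    -> no_op
```

A real controller would run this loop continuously, feed it richer telemetry, and plan against explicit SLA and energy constraints rather than a single threshold.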
Cloud Stakeholders and Their Roles
Cloud stakeholders are usually grouped into one of three roles, depending on the service delivery model. The role of the cloud provider, or IP (Infrastructure Provider), ranges from provisioning infrastructure in IaaS, through platforms and infrastructure in PaaS, to the whole cloud stack (infrastructure, platforms, and applications) in SaaS.
Cloud users, or SPs (Service Providers), are mostly found under the IaaS and PaaS models. They use the infrastructure and/or platform provided by the cloud provider to host application services consumed by end-users. Interactions between the cloud stakeholders (cloud and service providers) are governed by an official document known as the Service Level Agreement (SLA).
The SLA describes the expected quality of service and the legal agreements. A typical SLA contains important elements such as service guarantees (which outline metrics, e.g. response time, availability, and safety, that a cloud provider needs to meet during the service guarantee period) and service credits. A service credit is a sum credited to the customer, or put towards a future payment, when the IP fails to meet the service guarantees.
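The service-credit mechanism can be illustrated with a small sketch. The availability guarantee and credit tiers below are invented for illustration and do not reflect any provider's actual SLA:

```python
# Hypothetical service-credit sketch: when measured availability falls
# below the guarantee, a percentage of the monthly fee is credited.
def service_credit(monthly_fee: float, availability: float) -> float:
    """Credit owed to the customer for one service-guarantee period."""
    if availability >= 0.999:            # guarantee met: no credit
        return 0.0
    if availability >= 0.99:             # minor breach: 10% credit
        return round(monthly_fee * 0.10, 2)
    return round(monthly_fee * 0.25, 2)  # major breach: 25% credit

print(service_credit(200.0, 0.9995))  # guarantee met
print(service_credit(200.0, 0.995))   # minor breach
print(service_credit(200.0, 0.97))    # major breach
```

Tiered credits like these give the provider a graduated financial incentive to honor the service guarantees rather than a single all-or-nothing penalty.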
Cloud computing is here to stay; it now underpins much of modern IT operations. Most cloud providers understand their role in providing hosting platforms and will strive to maintain their offerings. I hope this discussion has shed some light on cloud computing for you.