Swapnil Saurav

Know Everything About Cloud Computing
A beginner's guide to cloud computing

The origin of cloud computing can be traced back to 1961, when John McCarthy first conceived the idea. His vision was that computation should be organized as a public utility to lower the cost of computing, enhance reliability, and relieve users from owning and operating complicated computing infrastructure. Since then, improvements in legacy technologies such as virtualization, commodity computing, and grid computing have brought about the realization of the cloud computing paradigm. Cloud computing’s flexibility and scalability continue to enhance the agility, reliability, and efficiency of large organizations and their large-scale applications. However, performance requirements remain a serious concern: studies have shown that performance variability is a critical issue for cloud application providers, because it strongly impacts the end-user’s quality of experience. In this discussion, we will look in detail at what cloud computing is, cloud deployment models, delivery models, datacenters, and more.

What Is Cloud Computing?

Cloud computing can be defined as a paradigm shift in how computation is delivered and consumed. It is based on the idea that the processing of information can take place more efficiently on large computing farms and storage systems accessible via the Internet. It offers a set of technologies that enable the provision of computational resources, such as compute and storage, as a service over the Internet on a pay-per-use or on-demand basis.

Two factors contribute to the high adoption rate of the cloud: (1) its ability to offer smooth scalability and elasticity to consumers without large initial capital expenses, and (2) its resources can be metered so that users are billed only for what they have consumed. These two factors have led to a drastic change in how IT (Information Technology) departments and application providers organize and manage IT services.

Watch the video of this content: https://youtu.be/HegVrE0g6dI

Cloud Delivery Models

Depending on the abstraction level at which a service is provided, public clouds have three main delivery models, described below:

  • Infrastructure as a Service (IaaS)– the provider supplies fundamental computing resources such as servers, networking, and storage. Consumers are responsible for installing the operating system (OS) and can build arbitrary applications using application development tools of their choice (a short provisioning sketch follows this list). IaaS providers include Google Cloud Platform, Rackspace, Amazon EC2, and Microsoft Azure Infrastructure.
  • Platform as a Service (PaaS)– this model gives cloud users the capability to develop, deploy, and manage applications using the development tools, operating systems, APIs, and hardware supported by a provider. The cloud user controls only the application, its architecture, and its hosting-space configuration, but not the network, storage, servers, or the underlying operating system. Among the best-known PaaS offerings available today are Windows Azure, Google App Engine, and Amazon’s Elastic Beanstalk.
  • Software as a Service (SaaS)– here end-users consume finished applications, accessible from various client devices over the Internet, together with the required software, OS, network, and hardware. Users may enjoy some minimal customization, but they have limited control over the application, its platform, and the underlying infrastructure. Examples of SaaS applications include Web 2.0 applications (such as WordPress and LinkedIn), accounting systems (e.g. NetSuite), and Customer Relationship Management systems (e.g. Salesforce).
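To make the IaaS model concrete, here is a minimal, hedged sketch of provisioning a virtual server programmatically with the AWS SDK for Python (boto3). The region, AMI ID, and instance type are placeholders for illustration, not recommended values; everything above the bare instance (OS configuration, applications) would still be the consumer's responsibility.

```python
# Minimal IaaS provisioning sketch using boto3 (AWS SDK for Python).
# The region and AMI ID below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder AMI; choose one from your account
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)
```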

Cloud Deployment Models

How are cloud infrastructures categorized? The classification can be based on ownership or on how they are managed. Five deployment models are available for cloud infrastructure so far; they are described below:

  1. Private clouds– these are managed and used by a single organization. Many organizations adopt them to gain the benefits of cloud computing, such as scalability and flexibility, while retaining complete control over their infrastructure and security.
  2. Public clouds– these are infrastructures managed by an organization that leases them to third parties on a pay-per-use basis. Examples of public clouds include Rackspace, Google Cloud Platform, and Microsoft Azure.
  3. Community clouds– these are used and shared by groups of people or organizations with a mutual interest, e.g. the North Carolina Education Cloud (NCEdCloud).
  4. Hybrid clouds– these are called hybrid because they combine two or more clouds. For instance, if a large organization that hosts its core IT infrastructure on a private or community cloud wants to expand its capacity to meet a sudden surge in user demand, it can lease resources from one or more public clouds, hence the name “hybrid cloud.” The mechanism through which this happens is called cloud bursting.
  5. Federated clouds– these are an emerging type of cloud consisting of only public clouds, only private clouds, or both, to provide end-users with a seemingly unlimited computing utility service. They enable high interoperability and compatibility between different cloud services through open APIs, allowing cloud users to distribute services across offerings from various vendors or transfer data easily across platforms. An example of a federated cloud is one established among numerous datacenters owned by a single cloud provider.

Cloud Datacenters

What are cloud datacenters? They can be thought of as the powerhouses of cloud computing, housing the many servers, communications, and storage systems needed for cloud computing. These systems are co-located in datacenters because of their similar physical security, environmental, and maintenance requirements, and consolidating them helps ensure their effective utilization. Applications from multiple cloud consumers can share server resources, thus avoiding under-utilization and server sprawl in datacenters.

The technology that makes consolidation and sharing possible is virtualization, which also provides performance isolation among co-located applications. Consolidation is cost-efficient as it reduces capital and operational expenses and lowers energy consumption. Two mainstream platforms are available for realizing virtualization today: hypervisor-based virtualization and container-based virtualization. Hypervisor-based virtualization (e.g. Xen, KVM, and Hyper-V) emulates machine hardware and allows instances of that emulation (i.e. virtual machines, VMs) to run on a physical machine managed by a specialized operating system called the hypervisor. The approach is OS-agnostic, since the guest OS (the OS installed in the VM) may differ from the host OS (the OS running on the VM’s physical host).

Container-based virtualization does not emulate a hardware platform; instead, it provides virtualization at the OS level to reduce the performance and speed overhead of hypervisor-based virtualization. It allows multiple isolated Linux environments (containers) to share the kernel of the host OS. Examples of container-based virtualization platforms are LXC, Docker, and OpenVZ. Despite enjoying increasing adoption, containers still cannot match VMs, which remain the common unit of cloud deployment because of the maturity of VM technology. VMs can be easily moved from one server to another to balance load across the datacenter or to consolidate workloads onto fewer servers.
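As a hedged illustration of OS-level virtualization, the sketch below uses the Docker SDK for Python to start a container that shares the host kernel. It assumes a local Docker daemon and the `docker` Python package are available; the image name is just an example.

```python
# Minimal container sketch using the Docker SDK for Python (pip install docker).
# Assumes a local Docker daemon is running; the image name is only an example.
import docker

client = docker.from_env()

# Run a lightweight container; unlike a VM, it boots no guest OS kernel,
# it simply gets an isolated view of the host's kernel.
output = client.containers.run("alpine:latest", ["uname", "-r"], remove=True)
print("Kernel seen inside the container:", output.decode().strip())
```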

VMs are also easy-to-manage, software-defined units, and they underpin the elasticity of the cloud. They can be replicated across servers (horizontal scaling), and their resource capacity (such as CPU cores) can be increased or reduced to address overload or underload situations (vertical scaling). Infrastructure as a Service (IaaS) datacenters therefore face a resource allocation challenge: how to optimally share or distribute computing, storage, and network resources among a set of VMs in a way that fairly meets the service objectives of both the service providers and the cloud provider.
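To make the allocation problem concrete, here is a minimal, hypothetical sketch of a greedy first-fit placement policy that assigns VMs to servers by CPU and memory. Real IaaS schedulers are far more sophisticated, weighing affinity, availability zones, energy, and SLA constraints; the server names and sizes below are invented.

```python
# Hypothetical first-fit VM placement sketch: assign each VM to the first
# server with enough spare CPU and memory.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    cpu: int          # free CPU cores
    mem: int          # free memory in GB
    vms: list = field(default_factory=list)

def place(vms, servers):
    unplaced = []
    for vm_name, cpu, mem in vms:
        for s in servers:
            if s.cpu >= cpu and s.mem >= mem:
                s.cpu -= cpu
                s.mem -= mem
                s.vms.append(vm_name)
                break
        else:
            unplaced.append(vm_name)   # no server had room
    return unplaced

servers = [Server("host-1", cpu=16, mem=64), Server("host-2", cpu=8, mem=32)]
vms = [("web-1", 4, 8), ("db-1", 8, 32), ("cache-1", 2, 4), ("batch-1", 12, 48)]
print("Could not place:", place(vms, servers))
for s in servers:
    print(s.name, "hosts", s.vms)
```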

Software-Defined Infrastructures

There is a need for increased automation and programmability as cloud computing is adopted for a growing range of roles, including running mission-critical and performance-sensitive applications such as analytics, mobile services, the Internet of Things (IoT), and machine-to-machine communications. This translates into increased demand for agility, performance, and energy efficiency, and it is what has led to the recent trend towards Software-Defined Infrastructures (SDI).

To facilitate automation, several studies have applied autonomic computing techniques to different aspects of managing current clouds, such as resource allocation, rescheduling, and energy management. SDIs are anticipated to turn compute, storage, and network infrastructures into software-defined, dynamically programmable entities. But what is an SDI? A simple definition could be: an infrastructure that continuously transforms itself by exploiting heterogeneous capabilities and utilizing insights obtained from built-in deep monitoring, in order to constantly honor consumer SLAs within provider constraints such as cost and energy. There are currently only a few SDI products or platforms available, and many aspects of the technology are still evolving. Even so, the key architectural components of SDIs are listed below:

  • Software-defined Compute (SDC)– it performs two roles: (1) enhancing the programmability of existing virtualized compute resources like VMs, containers, and virtual CPUs, and (2) leveraging specialized processing units such as GPUs (graphics processing units), FPGAs (field-programmable gate arrays), and other accelerators. It decouples the provisioning of heterogeneous compute resources from the underlying hardware and OS so that provisioning is driven by identified or discovered workload needs.
  • Software-defined Network (SDN)– its function is to separate the control and management functions of the network infrastructure from the hardware and move them into software controllers. It does so to improve performance isolation, security, programmability, and effectiveness. By means of virtualization, it allows network resources to be presented as virtual devices (such as links, end-points, and switches) that connect the various virtual storage and compute instances.
  • Software-defined Storage– it manages huge data volumes by separating the control and management functions from the data storage system. Such separation helps reduce management complexity and lowers the cost of infrastructure.

An SDI controller manages all SDI components by providing the control intelligence needed to meet workload requirements. It uses the classical MAPE loop (Monitor, Analyze, Plan, Execute) to continuously track the current state of SDI entities and act on it.
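As a rough, hypothetical illustration of the MAPE idea, the loop below monitors a utilization metric, analyzes it against thresholds, plans a scaling decision, and executes it. The metric source, thresholds, and scaling action are stand-ins, not a real controller.

```python
# Hypothetical MAPE (Monitor, Analyze, Plan, Execute) loop for autoscaling.
# get_cpu_utilization() and set_replica_count() are stand-ins for real
# monitoring and orchestration APIs.
import random
import time

replicas = 2

def get_cpu_utilization():
    return random.uniform(0.2, 0.95)   # Monitor (stubbed metric)

def set_replica_count(n):
    print(f"Execute: scaling to {n} replicas")

def mape_step():
    global replicas
    cpu = get_cpu_utilization()                      # Monitor
    overloaded, underloaded = cpu > 0.8, cpu < 0.3   # Analyze
    if overloaded:                                   # Plan
        replicas += 1
    elif underloaded and replicas > 1:
        replicas -= 1
    else:
        return
    set_replica_count(replicas)                      # Execute

for _ in range(5):
    mape_step()
    time.sleep(0.1)
```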

Cloud Stakeholders and Their Roles

Cloud stakeholders are usually grouped into one of three roles, depending on the service delivery model. The role of the cloud provider, or IP (Infrastructure Provider), ranges from infrastructure provision in IaaS, to platform and infrastructure provision in PaaS, to the provision of the whole cloud stack (infrastructure, platforms, and applications) in SaaS.

Cloud users, or SPs (Service Providers), are mostly found under the IaaS and PaaS models. They use the infrastructure and/or platform provided by the cloud provider to host application services that are used by the end-user. Interactions between the cloud stakeholders (cloud and service providers) are governed by an official document known as the Service Level Agreement (SLA).

The SLA describes the expected quality of service and the legal arrangements. A typical SLA contains important elements such as service guarantees (which outline functional metrics, e.g. response time, availability, and safety, that a cloud provider needs to meet during the service guarantee period) and service credits. A service credit is a sum credited to a customer, or applied towards a future payment, when the IP cannot meet the service guarantees.
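As a simple, hypothetical example of how a service credit might be computed, the sketch below maps measured monthly availability to a credit percentage. The availability tiers and credit rates are illustrative, not taken from any real provider's SLA.

```python
# Hypothetical service-credit calculation: the availability tiers and credit
# percentages below are illustrative only, not any provider's actual SLA.
def service_credit(monthly_bill, measured_availability, guaranteed=0.999):
    if measured_availability >= guaranteed:
        return 0.0                      # guarantee met, no credit
    if measured_availability >= 0.99:
        rate = 0.10                     # 10% credit for a modest breach
    elif measured_availability >= 0.95:
        rate = 0.25
    else:
        rate = 1.00                     # full credit for severe breaches
    return round(monthly_bill * rate, 2)

print(service_credit(500.0, 0.9985))    # below the 99.9% guarantee -> 50.0
print(service_credit(500.0, 0.9995))    # guarantee met -> 0.0
```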

Conclusion

Cloud computing is here to stay. It drives nearly every IT operation today. Most cloud providers know their role in providing hosting platforms and will strive to maintain their offerings. I hope this discussion has shed some light on cloud computing for you.

What Is IT Service Management?

Hello all, I am back with my second article. A couple of days back, I posted my first article, in which I briefly spoke about Machine Learning applications in IT Service Management (ITSM). I got a great response from readers, thank you so much for that. Some of you asked about ITSM and its main functions, so I decided to devote this article to ITSM itself. Please read on for what I think ITSM is all about. When you’re brand new to IT service management (ITSM), it’s difficult to know where to start, much less how to succeed. In this blog article, we’ll define terms like ITSM, ITIL, and others that may be unfamiliar to you. Most importantly, we’ll share 8 easily achievable steps we suggest taking to enhance your ITSM and deliver exceptional IT service across your organization.

Why Do I Need IT Service Management?
Ad hoc IT services are often sufficient for very small organizations. For example, a company with a single office may only require one ‘IT person,’ who completes work and resolves problems as they arise. This approach, however, quickly becomes a liability as organizations grow.
Comparatively small IT teams often struggle to keep track of everything that needs attention when working on an ad hoc basis. Essential tasks and responsibilities begin to fall through the cracks, putting the organization’s overall efficiency at risk. At this point, most organizations begin their ITSM journey by putting simple tools in place to help them manage the delivery of IT hardware, applications, and support.

What is an ITSM Ticketing Tool?
Ticketing is a critical element of any ITSM tool, and every organization requires it. ITSM ticketing tools record all conversations between a helpdesk and its internal or external clients. A ‘ticket’ is simply a permanent record of an IT activity or incident that contains pertinent details about what occurred, who reported the problem, and what was done to fix it. This ensures that no incidents are ‘lost’ or forgotten, and it helps maintain a consistent level of service for all helpdesk customers.
Most IT services have contractual SLAs that specify how quickly new incidents must be resolved. Ticketing systems keep records of these performance measures by automatically recording the time and date whenever a ticket is updated, and by providing easy-to-access reporting. While necessary, ticketing is only a minor component of ITSM tools. This means that, while all ITSM tools include a ticketing module, a pure-play ticketing tool is insufficient to ensure effective and efficient IT processes.
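A minimal, hypothetical sketch of what a ticket record and its SLA clock might look like; the field names and resolution targets are invented for illustration, not taken from any specific ITSM product.

```python
# Hypothetical ticket record with a simple SLA clock.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative resolution targets per priority level.
SLA_TARGETS = {"P1": timedelta(hours=4), "P2": timedelta(hours=8), "P3": timedelta(days=3)}

@dataclass
class Ticket:
    ticket_id: str
    summary: str
    reported_by: str
    priority: str
    opened_at: datetime = field(default_factory=datetime.utcnow)
    history: list = field(default_factory=list)

    def update(self, note: str):
        # Every update is timestamped, which makes SLA reporting easy later.
        self.history.append((datetime.utcnow(), note))

    def sla_deadline(self) -> datetime:
        return self.opened_at + SLA_TARGETS[self.priority]

    def sla_breached(self, now: datetime) -> bool:
        return now > self.sla_deadline()

t = Ticket("INC-1001", "Email server unreachable", "j.doe", "P1")
t.update("Assigned to network team")
print("Resolve by:", t.sla_deadline(), "| breached:", t.sla_breached(datetime.utcnow()))
```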

How to enhance ITSM with easy steps?
The right ITSM strategy is an important component of ongoing digitalization efforts and can improve both effectiveness and resilience throughout the organization. Following the eight key steps outlined below can help businesses increase their ITSM sophistication.
1.    Assessing current ITSM maturity
2.    Setting clear goals
3.    Securing executive buy-in
4.    Establishing a plan
5.    Assembling the right team
6.    Implementing automation
7.    Finding the right software
8.    Implementing Continual Service Improvement (CSI) Success

What does ITSM do for your business?
ITSM provides a variety of frameworks that companies can use to develop quality management practices for IT services and customer service. These frameworks cover quality management, software development, project management, security, and widely used management standards.
They are intended to bring structure and discipline to service-oriented IT divisions by aligning IT goals with company needs and requirements. They serve as a guide to help businesses effectively align IT goals with business goals, particularly for customer-service enterprises.

ITSM service desk:
The service desk, as defined in the ITIL manual, is one of the primary disciplines that fall under ITSM. Service desks are viewed by ITIL as a Single Point of Contact (SPOC), which can help simplify communication within an organization or business unit. Service desks serve as a central point for clients and stakeholders to contact well-trained staff in order to resolve issues in an integrated and efficient way.

ITSM frameworks:
ITIL is the most widely used ITSM framework, but there are numerous other ITSM methodologies that business owners can use. A few of these frameworks are geared toward specific sectors or business requirements, such as healthcare, government, and telecommunication services. If your company has technology requirements that are unique to your industry, you should look for a framework that addresses your particular challenges.

ITSM certification:
You can obtain a credential in the ITSM field, and there are options for company education and training as well as personal training and certification. However, before you can find the right certification programme, you must first understand the framework you intend to use. While ITSM as a discipline can be certified, most programmes are focused on a particular framework.

The importance of ITSM:
ITSM is advantageous to your IT team, and service management fundamentals can benefit your entire organization. ITSM increases efficiency and productivity. An organized approach to service planning also aligns IT with business goals by standardizing service delivery based on expenditures, resources, and results. It lowers costs and risks while also improving the customer experience.

ITSM processes:
This broader approach more accurately reflects the realities of modern organizations. We won’t get into the slight variations in terminology used for practices or procedures here. What matters, and is true irrespective of the framework your team uses, is that contemporary IT service teams use organizational resources and follow repeatable procedures to provide consistent, efficient service. In fact, the ability to leverage practices and processes is what differentiates ITSM from plain IT.

Final Thoughts:
ITSM is at the heart of organizational modernization. As the adoption of software-powered services grows, IT service teams are enabling staff across their organizations to deliver value more rapidly. The IT team’s role has shifted from providing necessary support to differentiating the company. It is time to shift ITSM strategies to emphasize collaboration, ease of use, and faster delivery of value to customers.

(Appeared first at Lambda and Sigma, read here)

IT service management (ITSM) and machine learning

IT Service Management (ITSM) is the strategy and practice of implementing, delivering, and managing IT services for end-users in a way that meets the stated needs of the end-users and the stated goals of the business. Technology has enhanced the way companies operate in every industry around the world. At the same time, traditional IT service management (ITSM) solutions have failed to maintain customer satisfaction levels and meet the growing expectations of consumers in the fast-paced digital world.

Machine learning

Machine learning is an application of artificial intelligence that enables systems to learn and improve from experience without being explicitly programmed. Machine learning (ML) focuses on developing computer software that can access data and use it to learn for itself. Machine learning has already begun to make a difference in our daily lives, more than anyone could have imagined; for example, a couple might train their sprinkler system to turn on automatically to keep cats off their lawn. In simple words, machine learning (ML) is a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so.

ITSM

Service desk management is about creating the one “go-to” place for all IT-related needs, help, and issues. The desk is responsible for managing incidents and service disruptions, fulfilling IT-related requests, and handling changes. The service desk’s scope of work is generally enormous and wide-ranging, hence it needs to be managed effectively and efficiently. Some benefits of ITSM are listed below.

1. ITSM makes it easy for teams to provide quick, proactive, surprise-free responses to unforeseen events, new opportunities, and competitive threats.

2. By enabling better process performance, higher availability, and fewer service interruptions, ITSM helps users get more work done and the business do more business.

3. By systematically accelerating incident resolution, minimizing incidents and problems, and preventing or resolving issues automatically, ITSM helps businesses get greater productivity from their IT infrastructure at a lower cost.

4. By incorporating compliance into IT service design, delivery, and management, ITSM can improve compliance and reduce risk.

5. ITSM helps the organization set and meet realistic expectations for its services, leading to greater transparency and improved customer satisfaction.

Benefits of machine learning

There are some benefits of machine learning.

Automation for everything

One of the most potent benefits of machine learning is its ability to automate various decision-making tasks. This frees developers to spend their time on more productive work. We already see some of these benefits in our daily lives, for example in social media sentiment analysis and chatbots: a chatbot can respond immediately as first-level customer support when a negative tweet about a company’s product or service appears. Machine learning is changing the world by automating almost everything we can think of.

Recommending the Right Product

Product recommendation is an essential aspect of any sales and marketing strategy, including up-selling and cross-selling. ML models analyze a customer’s purchase history and, based on that, identify products from your product inventory that the customer is likely to be interested in. This is typically done with unsupervised learning, a particular family of ML algorithms, sometimes combined with supervised learning where labeled purchase data is available. Such a model enables businesses to make better product recommendations for their customers, thus encouraging product purchases.
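A toy, hypothetical sketch of item-based recommendation from purchase history using cosine similarity between products; the tiny purchase matrix below is made up purely for illustration.

```python
# Toy item-based recommendation from a purchase-history matrix.
# Rows = customers, columns = products; the data is made up for illustration.
import numpy as np

purchases = np.array([
    [1, 1, 0, 0],   # customer A bought products 0 and 1
    [1, 1, 1, 0],   # customer B
    [0, 0, 1, 1],   # customer C
])

# Cosine similarity between product columns.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)

def recommend(customer_row, top_n=2):
    scores = purchases[customer_row] @ similarity   # similarity to owned items
    scores[purchases[customer_row] > 0] = -1        # don't re-recommend owned products
    return np.argsort(scores)[::-1][:top_n]

print("Recommend for customer A:", recommend(0))
```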

 Application of Machine Learning in ITSM

We need to understand that, in this ever-changing world of technology, traditional IT service management (ITSM) solutions and practices have become inefficient and no longer keep customer satisfaction at its highest, so it is obvious that organizations are moving towards ML applications to enhance their scalability and improve business operations. Machine Learning algorithms have enabled ITSM practice to improve its speed and quality of service while keeping cost low, and there is still a lot of scope for further usage. In this short article, I will cover the top 7 use cases which can elevate the service level of a service desk.

 a) Predictive analytics – The first application of ML in any field that comes up for discussion is predictive analytics. What kind of predictions can be made in ITSM? We can predict the number and nature of incidents, problems, and issues. Other areas for predictive analytics include assessing the risks associated with proposed changes and measuring and predicting future levels of customer satisfaction across different types of service desk offerings. A small forecasting sketch follows.
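A minimal, hypothetical illustration: fitting a linear trend to weekly incident counts and projecting the next week. The counts are invented sample data; a real model would use richer features and methods.

```python
# Minimal incident-volume forecast: fit a linear trend to past weekly counts
# and project the next week. The counts below are invented sample data.
import numpy as np

weeks = np.arange(1, 9)                                # weeks 1..8
incidents = np.array([120, 132, 128, 140, 151, 149, 160, 172])

slope, intercept = np.polyfit(weeks, incidents, deg=1)
next_week = 9
forecast = slope * next_week + intercept
print(f"Forecast for week {next_week}: ~{forecast:.0f} incidents")
```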

 b) Demand planning – This use case is an extension of predictive analytics in which we use machine learning algorithms to predict future demand for both IT services and IT support capabilities. This helps management with budgeting decisions and with managing the entire process effectively at minimum cost. The results of demand planning can also help us gauge the required levels of variables like cost, pricing, benefit, capacity, stock, etc.


 c) Predictive maintenance – We can treat this as an extension of demand planning on the engineering side rather than the operations side, which is the theme of demand planning. Machine learning algorithms enable us to selectively apply maintenance to the IT infrastructure and critical business services to prevent service disruptions or failures. The processes being designed around this are mostly about tracking important parameters in real time and, with algorithms working on that real-time data, triggering preventive actions, as in the sketch below.
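A simple, hypothetical sketch of the "track a parameter in real time and trigger preventive action" idea, using a rolling z-score on a stream of latency readings. The data, window size, and threshold are illustrative only.

```python
# Hypothetical predictive-maintenance trigger: flag a metric reading whose
# rolling z-score exceeds a threshold. Data and threshold are illustrative.
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=20)   # rolling baseline of recent readings

def check_reading(latency_ms, threshold=3.0):
    if len(window) >= 5:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(latency_ms - mu) / sigma > threshold:
            print(f"ALERT: latency {latency_ms} ms deviates from baseline {mu:.1f} ms")
    window.append(latency_ms)

stream = [8, 9, 8, 10, 9, 9, 8, 10, 9, 45, 9, 8]   # the 45 ms spike should trigger
for reading in stream:
    check_reading(reading)
```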

 d) Improved search capabilities – This might sound like a trivial capability, but it is one of the important features because we are going beyond traditional search options and results. We are talking about intelligent search capabilities, which can predict the search intent and provide the data related to the relevant keywords. This gives us a set of relevant search results with a high degree of accuracy, so the content can be used directly in our work. A small ranking sketch follows.
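A small, hypothetical sketch of ranking knowledge-base articles against a query using TF-IDF cosine similarity with scikit-learn; the article texts and query are invented examples.

```python
# Rank knowledge-base articles against a query with TF-IDF cosine similarity.
# Requires scikit-learn; the article texts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "How to reset your VPN password",
    "Fixing Outlook mailbox sync errors",
    "Requesting a new laptop through the service catalog",
]
query = ["outlook not syncing email"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(articles)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors).ravel()
best = scores.argmax()
print(f"Best match: '{articles[best]}' (score {scores[best]:.2f})")
```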

 
 e) Providing recommendations – Everyone is probably aware of Amazon’s use of product recommendation algorithms to predict what we might want to buy; similarly, Netflix and Spotify suggest entertainment content. Extending the same logic, recommendations in the ITSM field can provide self-help by suggesting knowledge-base articles or solutions to service desk agents or end users. This speeds up processes and delivers resolutions or services more quickly and accurately, which not only improves customer satisfaction but also improves employee efficiency.

 f) Identifying and filling knowledge gaps – Machine learning capabilities are generally associated with identifying and distributing knowledge from data, but we can also use them to create knowledge. Top applications under this use case include identifying knowledge-article gaps based on the analysis of aggregated incident ticket data. When converting the resolution note of a documented ticket into knowledge, algorithms can identify the most pertinent and valuable information from which to create a new knowledge article. A small clustering sketch follows.
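A rough, hypothetical sketch of surfacing candidate knowledge gaps by clustering incident ticket summaries; the ticket texts and cluster count are illustrative, and a real pipeline would analyze far more data before proposing articles.

```python
# Cluster incident summaries to surface recurring themes that may lack a
# knowledge article. Requires scikit-learn; tickets and cluster count are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tickets = [
    "vpn disconnects every hour",
    "vpn connection drops after update",
    "printer offline on floor 3",
    "printer not printing color pages",
    "cannot connect to vpn from home",
]

X = TfidfVectorizer().fit_transform(tickets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster in set(labels):
    group = [t for t, l in zip(tickets, labels) if l == cluster]
    print(f"Cluster {cluster} ({len(group)} tickets): candidate knowledge-article topic")
    for t in group:
        print("  -", t)
```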

 g) Intelligent autoresponders – We might be stretching the capabilities a little too far in this case, but it is possible to achieve. Depending on the issue type and the nature of the problem, we could use technology to work on a ticket, understand the problem, find the solution, apply it, and close the ticket without any human intervention, at a high accuracy level. It is a high-value use case: even if we can use technology to close only 1-2% of the issues created, it gives us a great benefit in terms of time, cost, and convenience. A classification sketch follows.
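A minimal, hypothetical sketch of the classification step behind an autoresponder: a Naive Bayes model predicts a ticket category from its text, and only high-confidence, known-trivial categories are auto-closed. The training data, category names, and the 0.9 confidence cut-off are invented for illustration.

```python
# Minimal autoresponder sketch: classify a ticket's text and auto-close only
# high-confidence, known-trivial categories. Training data and the confidence
# cut-off are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "forgot my password, please reset",
    "need password reset for email account",
    "laptop screen flickers and goes black",
    "request access to the finance shared drive",
]
labels = ["password_reset", "password_reset", "hardware", "access_request"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

AUTO_CLOSE = {"password_reset"}          # categories with a known automated fix

def handle(ticket_text):
    probs = model.predict_proba([ticket_text])[0]
    label = model.classes_[probs.argmax()]
    if label in AUTO_CLOSE and probs.max() > 0.9:
        return f"auto-resolved as {label}"
    return f"routed to a human agent (predicted {label})"

print(handle("please reset my password"))
```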

First published on BlogPost

https://lambdaandsigma.blogspot.com/2021/09/it-service-management-itsm-and-machine.html