Cloud computing is a virtualization-based technology that allows us to create, configure, and customize applications over an internet connection. Cloud technology includes a development platform, storage, software applications, and databases.
What is Cloud Computing
The term cloud refers to a network or the internet. It is a technology that uses remote servers on the internet to store, manage, and access data
online rather than local drives. The data can be anything such as files, images, documents, audio, video, and more.
We can perform the following operations using cloud computing:
 Developing new applications and services
 Storage, back up, and recovery of data
 Hosting blogs and websites
 Delivery of software on demand
 Analysis of data
 Streaming videos and audios
Why Cloud Computing?
Both small and large IT companies have traditionally provided their own IT infrastructure. That means every IT company needs a server room, which is a basic requirement.
In that server room, there should be a database server, mail server, networking equipment, firewalls, routers, modems, switches, adequate QPS (Queries Per Second, i.e., how much load the server can handle), configurable systems, high network speed, and maintenance engineers.
Establishing such IT infrastructure requires a great deal of money. Cloud computing came into existence to overcome these problems and reduce IT infrastructure costs.
Characteristics of Cloud Computing:
The characteristics of cloud computing are given below:
 Agility: The cloud works in a distributed computing environment. It shares resources among users and works very fast.
 High availability and reliability: The availability of servers is high and more reliable because the chances of infrastructure failure are
minimum.
 High Scalability: Cloud offers "on-demand" provisioning of resources on a large scale, without having engineers for peak loads.
 Multi-Sharing: With the help of cloud computing, multiple users and applications can work more efficiently with cost reductions by
sharing common infrastructure.
 Device and Location Independence: Cloud computing enables the users to access systems using a web browser regardless of their
location or what device they use e.g. PC, mobile phone, etc. As infrastructure is off-site (typically provided by a third-party) and accessed
via the Internet, users can connect from anywhere.
 Maintenance: Maintenance of cloud computing applications is easier, since they do not need to be installed on each user's computer and can be accessed from different places. This also reduces cost.
 Low Cost: Cloud computing reduces cost because an IT company does not need to set up its own infrastructure and pays only for the resources it uses.
 Services in pay-per-use mode: Application Programming Interfaces (APIs) are provided to users so that they can access services on the cloud through these APIs and pay charges according to the usage of services (a short example follows this list).
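As a hedged illustration of accessing a cloud service through a pay-per-use API, the sketch below lists the storage buckets in an AWS account using the boto3 library. This is only one possible example; it assumes boto3 is installed and AWS credentials are already configured, and the choice of AWS S3 is an assumption, not a requirement of cloud computing in general.

```python
# Minimal sketch: accessing a pay-per-use cloud service through its API.
# Assumes the boto3 library is installed and AWS credentials are configured
# (e.g., via environment variables or ~/.aws/credentials).
import boto3

def list_buckets():
    """Return the names of all S3 buckets visible to the configured account."""
    s3 = boto3.client("s3")          # API client for the storage service
    response = s3.list_buckets()     # one metered API call, billed per usage
    return [bucket["Name"] for bucket in response["Buckets"]]

if __name__ == "__main__":
    for name in list_buckets():
        print(name)
```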
Prerequisite
Before learning cloud computing, you must have the basic knowledge of computer fundamentals.
Audience
Our cloud computing tutorial is designed to help both beginners and professionals.
Problem
We assure you that you will not find any difficulty while learning this cloud computing tutorial. However, if there is any mistake in this tutorial, kindly report the
problem or error through the contact form.
 Advantages of Cloud Computing
As we all know, cloud computing is a trending technology. Almost every company has moved its services to the cloud to boost its growth.
Here, we are going to discuss some important advantages of Cloud Computing-
o Back-up and restore data: Once data is stored in the cloud, it is easier to back it up and restore it using the cloud.
o Improved collaboration: Cloud applications improve collaboration by allowing groups of people to quickly and easily share
information in the cloud via shared storage.
o Excellent accessibility: Cloud allows us to quickly and easily access stored information anywhere in the world, at any time,
using an internet connection. An internet cloud infrastructure increases organizational productivity and efficiency by ensuring that our
data is always accessible.
o Low maintenance cost: Cloud computing reduces both hardware and software maintenance costs for organizations.
o Mobility: Cloud computing allows us to easily access all cloud data via mobile.
o Services in the pay-per-use model: Cloud computing offers Application Programming Interfaces (APIs) that let users access
services on the cloud and pay charges according to the usage of services.
o Unlimited storage capacity: Cloud offers us a huge amount of storage capacity for keeping our important data such as
documents, images, audio, video, etc. in one place.
o Data security: Data security is one of the biggest advantages of cloud computing. Cloud offers many advanced features related to
security and ensures that data is securely stored and handled.
 Disadvantages of Cloud Computing
A list of the disadvantages of cloud computing is given below –
 Internet Connectivity: In cloud computing, all data (images, audio, video, etc.) is stored in the cloud, and we access it over an
internet connection. If you do not have good internet connectivity, you cannot access this data, and there is no other way to reach data
stored in the cloud.
 Vendor lock-in: Vendor lock-in is the biggest disadvantage of cloud computing. Organizations may face problems when transferring
their services from one vendor to another. As different vendors provide different platforms, that can cause difficulty moving from one
cloud to another.
 Limited Control: As we know, cloud infrastructure is completely owned, managed, and monitored by the service provider, so the cloud
users have less control over the function and execution of services within a cloud infrastructure.
 Security: Although cloud service providers implement the best security standards to store important information, before adopting
cloud technology you should be aware that you will be sending all your organization's sensitive information to a third party, i.e., a cloud
computing service provider. While sending data to the cloud, there is a chance that your organization's information could be hacked.
Before cloud computing emerged, there was client/server computing, which is basically centralized storage in which all the software
applications, all the data, and all the controls reside on the server side.
If a single user wants to access specific data or run a program, he/she needs to connect to the server, gain appropriate access, and only then
can he/she do his/her work.
Later, distributed computing came into the picture, where all the computers are networked together and share their resources when needed.
On the basis of these computing models, the concepts of cloud computing emerged and were later implemented.
Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold like a utility, just like water or electricity. It was a
brilliant idea, but like all brilliant ideas, it was ahead of its time: for the next few decades, despite interest in the model, the technology simply
was not ready for it.
Of course, time passed, and the technology eventually caught up with the idea:
In 1999, Salesforce.com started delivering applications to users through a simple website. The applications were delivered to enterprises over the
Internet, and in this way the dream of computing sold as a utility came true.
In 2002, Amazon started Amazon Web Services, providing services like storage, computation, and even human intelligence. However, only with
the launch of the Elastic Compute Cloud in 2006 did a truly commercial service open to everybody exist.
In 2009, Google Apps also started to provide cloud computing enterprise applications.
Of course, all the big players are present in the cloud computing evolution, some were earlier, some were later. In 2009, Microsoft launched
Windows Azure, and companies like Oracle and HP have all joined the game. This proves that today, cloud computing has become mainstream.
As we know, cloud computing technology is used by both small and large organizations to store information in the cloud and access it from
anywhere at any time using an internet connection.
Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture.
Cloud computing architecture is divided into the following two parts -
 Front End: The front end is used by the client. It contains the client-side interfaces and applications that are required to access cloud
computing platforms. The front end includes web browsers (such as Chrome, Firefox, Internet Explorer, etc.), thin and fat clients, tablets,
and mobile devices.
 Back End: The back end is used by the service provider. It manages all the resources that are required to provide cloud computing
services. It includes a huge amount of data storage, security mechanism, virtual machines, deploying models, servers, traffic control
mechanisms, etc.
The below diagram shows the architecture of cloud computing -
Note: Both the front end and the back end are connected to each other through a network, generally using an internet connection.
 Components of Cloud Computing Architecture
There are the following components of cloud computing architecture -
o Client Infrastructure: Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to interact with
the cloud.
o Application: The application may be any software or platform that a client wants to access.
o Service: A cloud service manages which type of service you access according to the client's requirement.
Cloud computing offers the following three types of services:
i. Software as a Service (SaaS) – It is also known as cloud application services. Most SaaS applications run directly
in the web browser, which means we do not need to download and install them. Some important examples of
SaaS are given below –
Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar to SaaS, but the
difference is that PaaS provides a platform for software creation, whereas with SaaS we can access software over the internet
without needing any platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It provides the underlying
infrastructure (virtual machines, storage, and networking) on top of which customers manage their applications, data, middleware, and runtime environments.
Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.
o Runtime Cloud: Runtime Cloud provides the execution and runtime environment to the virtual machines.
o Storage: Storage is one of the most important components of cloud computing. It provides a huge amount of storage capacity in
the cloud to store and manage data.
o Infrastructure: It provides services on the host level, application level, and network level. Cloud infrastructure includes
hardware and software components such as servers, storage, network devices, virtualization software, and other storage resources
that are needed to support the cloud computing model.
o Management: Management is used to manage components such as application, service, runtime cloud, storage, infrastructure,
and other security issues in the backend and establish coordination between them.
o Security: Security is an in-built back end component of cloud computing. It implements a security mechanism in the back end.
o Internet: The Internet is the medium through which the front end and back end interact and communicate with each other.
Cloud computing uses a client-server architecture to deliver computing resources such as servers, storage, databases, and software over the
cloud (Internet) with pay-as-you-go pricing.
Cloud computing has become a very popular option for organizations because it provides various advantages, including cost savings, increased
productivity, efficiency, performance, data backups, disaster recovery, and security.
Grid Computing:
Grid computing is also called "distributed computing." It links multiple computing resources (PCs, workstations, servers, and storage
elements) together and provides a mechanism to access them.
The main advantages of grid computing are that it increases user productivity by providing transparent access to resources, and work can be
completed more quickly.
Let's understand the difference between cloud computing and grid computing.
| Cloud Computing | Grid Computing |
| --- | --- |
| Cloud computing follows a client-server computing architecture. | Grid computing follows a distributed computing architecture. |
| Scalability is high. | Scalability is normal. |
| Cloud computing is more flexible than grid computing. | Grid computing is less flexible than cloud computing. |
| Cloud operates as a centralized management system. | Grid operates as a decentralized management system. |
| In cloud computing, cloud servers are owned by infrastructure providers. | In grid computing, grids are owned and managed by the organization. |
| Cloud computing uses services like IaaS, PaaS, and SaaS. | Grid computing uses systems like distributed computing, distributed information, and distributed pervasive. |
| Cloud computing is service-oriented. | Grid computing is application-oriented. |
| It is accessible through standard web protocols. | It is accessible through grid middleware. |
Assume that you are an executive at a very big corporation. Your responsibilities include making sure that all of your employees have
the right hardware and software they need to do their jobs. Buying computers for everyone is not enough; you also have to purchase software
and software licenses and then provide the software to your employees as they require it. Whenever you hire a new employee, you need to
buy more software or make sure your current software license allows another user. It is stressful, and you have to spend a lot of money.
But there may be an alternative for executives like you. Instead of installing a suite of software on each computer, you just need to load one
application. That application allows employees to log in to a web-based service which hosts all the programs each user requires
for his/her job. Remote servers owned by another company run everything from e-mail to word processing to complex data analysis
programs. This is called cloud computing, and it could change the entire computer industry.
In a cloud computing system, there is a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running
applications; the cloud handles that load easily and automatically. Hardware and software demands on the user's side
decrease. The only thing the user's computer needs to run is the system's cloud computing interface software, which can be as
simple as a web browser, and the cloud's network takes care of the rest.
Cloud service providers provide various applications in the field of art, business, data storage and backup services, education, entertainment,
management, social networking, etc.
The most widely used cloud computing applications are given below -
1. Art Applications: Cloud computing offers various art applications for quickly and easily designing attractive cards, booklets, and images. Some
of the most commonly used cloud art applications are given below:
i. Moo: Moo is one of the best cloud art applications. It is used for designing and printing business cards, postcards, and mini cards.
ii. Vistaprint: Vistaprint allows us to easily design various printed marketing products such as business cards, postcards, booklets, and
wedding invitation cards.
iii. Adobe Creative Cloud: Adobe Creative Cloud is made for designers, artists, filmmakers, and other creative professionals. It is a
suite of apps which includes Photoshop for image editing, Illustrator, InDesign, TypeKit, Dreamweaver, XD, and Audition.
2. Business Applications
Business applications are based on cloud service providers. Today, every organization requires cloud business applications to grow its
business. The cloud also ensures that business applications are available to users 24x7.
There are the following business applications of cloud computing -
i. MailChimp: MailChimp is an email publishing platform which provides various options to design, send, and save templates for emails.
ii. Salesforce: The Salesforce platform provides tools for sales, service, marketing, e-commerce, and more. It also provides a cloud
development platform.
iii. Chatter: Chatter helps us to share important information about the organization in real time.
iv. Bitrix24: Bitrix24 is a collaboration platform which provides communication, management, and social collaboration tools.
v. PayPal: PayPal offers the simplest and easiest online payment mode using a secure internet account. PayPal accepts payments
through debit cards, credit cards, and PayPal account balances.
vi. Slack: Slack stands for Searchable Log of All Conversation and Knowledge. It provides a user-friendly interface that helps
us to create public and private channels for communication.
vii. QuickBooks: QuickBooks works on the motto "Run Enterprise anytime, anywhere, on any device." It provides online
accounting solutions for businesses. It allows more than 20 users to work simultaneously on the same system.
3. Data Storage and Backup Applications
Cloud computing allows us to store information (data, files, images, audio, and video) in the cloud and access it using an internet
connection. Because the cloud provider is responsible for security, providers offer various backup and recovery applications for retrieving lost data.
A list of data storage and backup applications in the cloud are given below -
i. Box.com: Box provides an online environment for secure content management, workflow, and collaboration. It allows us to
store different file types such as Excel, Word, PDF, and images in the cloud. The main advantage of using Box is that it provides drag-and-drop
file upload and easily integrates with Office 365, G Suite, Salesforce, and more than 1,400 other tools.
ii. Mozy:
Mozy provides powerful online backup solutions for our personal and business data. It automatically schedules backups each day
at a specific time.
iii. Joukuu:
Joukuu provides the simplest way to share and track cloud-based backup files. Many users use Joukuu to search files and folders and to
collaborate on documents.
iv. Google G Suite:
Google G Suite is one of the best cloud storage and backup applications. It includes Google Calendar, Docs, Forms, Google+,
Hangouts, as well as cloud storage and tools for managing cloud apps. The most popular app in Google G Suite is Gmail, which offers
free email services to users.
4. Education Applications
Cloud computing has become very popular in the education sector. It offers various online distance learning platforms and student information
portals to students. The advantage of using the cloud in education is that it offers strong virtual classroom environments, ease of
accessibility, secure data storage, scalability, greater reach for students, and minimal hardware requirements for the applications.
There are the following education applications offered by the cloud -
i. Google Apps for Education: Google Apps for Education is the most widely used platform for free web-based email, calendar,
documents, and collaborative study.
ii. Chromebooks for Education: Chromebooks for Education is one of Google's most important projects. It is designed to
enhance innovation in education.
iii. Tablets with Google Play for Education: It allows educators to quickly implement the latest technology solutions into the
classroom and make it available to their students.
iv. AWS in Education: AWS cloud provides an education-friendly environment to universities, community colleges, and schools.
5. Entertainment Applications
Entertainment industries use a multi-cloud strategy to interact with the target audience. Cloud computing offers various entertainment
applications such as online games and video conferencing.
i. Online games: Today, cloud gaming has become one of the most important entertainment media. It offers various online games that run
remotely from the cloud. Popular cloud gaming services include Shadow, GeForce Now, Vortex, Project xCloud, and PlayStation Now.
ii. Video Conferencing Apps: Video conferencing apps provide a simple and instantly connected experience. They allow us to
communicate with our business partners, friends, and relatives using cloud-based video conferencing. The benefits of video
conferencing are reduced cost, increased efficiency, and fewer interoperability issues.
6. Management Applications
Cloud computing offers various cloud management tools which help admins to manage all types of cloud activities, such as resource deployment,
data integration, and disaster recovery. These management tools also provide administrative control over the platforms, applications, and
infrastructure.
Some important management applications are -
i. Toggl: Toggl helps users to track the time allocated to a particular project.
ii. Evernote: Evernote allows you to sync and save your recorded notes, typed notes, and other notes in one convenient place. It is
available in both free and paid versions.
It supports platforms such as Windows, macOS, Android, iOS, browsers, and Unix.
iii. Outright: Outright is used by management users for accounting purposes. It helps track income, expenses, profits, and losses
in real time.
iv. GoToMeeting: GoToMeeting provides video conferencing and online meeting apps, which allow you to start a meeting with your
business partners anytime, anywhere, using mobile phones or tablets. Using the GoToMeeting app, you can perform management-related tasks
such as joining meetings in seconds, viewing presentations on a shared screen, and getting alerts for upcoming meetings.
7. Social Applications
Social cloud applications allow a large number of users to connect with each other using social networking applications such as Facebook,
Twitter, LinkedIn, etc.
There are the following cloud based social applications -
i. Facebook: Facebook is a social networking website which allows active users to share files, photos, videos, status updates, and more with their
friends, relatives, and business partners using the cloud storage system. On Facebook, we always get notifications when our friends
like or comment on our posts.
ii. Twitter: Twitter is a social networking site and a microblogging system. It allows users to follow high-profile celebrities, friends, and
relatives and to receive news. Users send and receive short posts called tweets.
iii. Yammer: Yammer is the best team collaboration tool that allows a team of employees to chat, share images, documents, and
videos.
iv. LinkedIn: LinkedIn is a social network for students, freshers, and professionals.
Cloud computing provides various advantages, such as improved collaboration, excellent accessibility, mobility, storage capacity, etc. But there are
also security risks in cloud computing.
Some most common Security Risks of Cloud Computing are given below-
 Data Loss: Data loss is the most common security risk of cloud computing. It is also known as data leakage. Data loss occurs when data
is deleted or corrupted, or becomes unreadable by a user, software, or application. In a cloud computing environment, data loss
can occur when sensitive data falls into somebody else's hands, when one or more data elements cannot be utilized by the data owner, when a hard disk is not
working properly, or when software is not updated.
 Hacked Interfaces and Insecure APIs: As we all know, cloud computing completely depends on the Internet, so it is essential to protect the
interfaces and APIs that external users rely on. APIs are the easiest way to communicate with most cloud services. In cloud
computing, some services are available in the public domain and can be accessed by third parties, so there is a chance that
these services could be harmed or hacked.
 Data Breach: A data breach occurs when confidential data is viewed, accessed, or stolen by a third party without any
authorization, meaning the organization's data has been compromised by hackers.
 Vendor lock-in: Vendor lock-in is one of the biggest security risks in cloud computing. Organizations may face problems when transferring
their services from one vendor to another. As different vendors provide different platforms, it can be difficult to move from one
cloud to another.
 Increased complexity strains IT staff: Migrating, integrating, and operating cloud services is complex for IT staff, who need
extra capabilities and skills to manage, integrate, and maintain data in the cloud.
 Spectre & Meltdown: The Spectre and Meltdown vulnerabilities allow programs to view and steal data that is currently being processed on a computer. They affect
personal computers, mobile devices, and the cloud, and can expose passwords and personal information such as images, emails, and
business documents held in the memory of other running programs.
 Denial of Service (DoS) attacks: Denial of service (DoS) attacks occur when a system receives more traffic than the server can buffer.
DoS attackers mostly target the web servers of large organizations such as banks, media companies, and government bodies.
Recovering from such an attack costs a great deal of time and money.
 Account hijacking: Account hijacking is a serious security risk in cloud computing. It is the process in which an individual's or
organization's cloud account (such as a bank, e-mail, or social media account) is stolen by hackers. The hackers then use the stolen
account to perform unauthorized activities.
There are the following four types of cloud that you can deploy according to the organization's needs -
 Public Cloud
Public cloud is open to all to store and access information via the Internet using the pay-per-usage method.
In public cloud, computing resources are managed and operated by the Cloud Service Provider (CSP).
Example: Amazon elastic compute cloud (EC2), IBM SmartCloud Enterprise, Microsoft, Google App Engine, Windows Azure Services Platform.
Advantages of Public Cloud
 Public cloud can be adopted at a lower cost than private and hybrid clouds.
 Public cloud is maintained by the cloud service provider, so users do not need to worry about maintenance.
 Public cloud is easier to integrate, hence it offers greater flexibility to consumers.
 Public cloud is location independent because its services are delivered through the internet.
 Public cloud is highly scalable as per the requirement of computing resources.
 It is accessible by the general public, so there is no limit to the number of users.
Disadvantages of Public Cloud
 Public Cloud is less secure because resources are shared publicly.
 Performance depends upon the high-speed internet network link to the cloud provider.
 The Client has no control of data.
 Private Cloud
Private cloud is also known as an internal cloud or corporate cloud. It is used by organizations to build and manage their own data centers
internally or through a third party. It can be deployed using open-source tools such as OpenStack and Eucalyptus.
Based on location and management, the National Institute of Standards and Technology (NIST) divides private cloud into the following two types-
 On-premise private cloud
 Outsourced private cloud
Advantages of Private Cloud
 Private cloud provides a high level of security and privacy to the users.
 Private cloud offers better performance with improved speed and space capacity.
 It allows the IT team to quickly allocate and deliver on-demand IT resources.
 The organization has full control over the cloud because it is managed by the organization itself. So, there is no need for the
organization to depend on anybody.
 It is suitable for organizations that require a separate cloud for their personal use and data security is the first priority.
Disadvantages of Private Cloud
 Skilled people are required to manage and operate cloud services.
 Private cloud is accessible within the organization, so the area of operations is limited.
 Private cloud is not suitable for organizations that have a high user base or that lack the prebuilt
infrastructure and sufficient manpower to maintain and manage the cloud.
 Hybrid Cloud
Hybrid Cloud is a combination of the public cloud and the private cloud. We can say:
Hybrid Cloud = Public Cloud + Private Cloud
Hybrid cloud is partially secure because the services which are running on the public cloud can be accessed by anyone, while the services which are
running on a private cloud can be accessed only by the organization's users.
Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365 (MS Office on the Web and One Drive), Amazon Web
Services.
Advantages of Hybrid Cloud
 Hybrid cloud is suitable for organizations that require more security than the public cloud.
 Hybrid cloud helps you to deliver new products and services more quickly.
 Hybrid cloud provides an excellent way to reduce the risk.
 Hybrid cloud offers flexible resources because of the public cloud and secure resources because of the private cloud.
Disadvantages of Hybrid Cloud
 In a hybrid cloud, the security features are not as good as in a private cloud.
 Managing a hybrid cloud is complex because it is difficult to manage more than one type of deployment model.
 In the hybrid cloud, the reliability of the services depends on cloud service providers.
 Community Cloud
Community cloud allows systems and services to be accessible to a group of several organizations so that they can share information within a
specific community. It is owned, managed, and operated by one or more organizations in the community, a third party, or a
combination of them.
Example: Health Care community cloud
Advantages of Community Cloud: There are the following advantages of Community Cloud -
 Community cloud is cost-effective because the whole cloud is being shared by several organizations or communities.
 Community cloud is suitable for organizations that want to have a collaborative cloud with more security features than the public
cloud.
 It provides better security than the public cloud.
 It provides a collaborative and distributed environment.
 Community cloud allows us to share cloud resources, infrastructure, and other capabilities among various organizations.
Disadvantages of Community Cloud
 Community cloud is not a good choice for every organization.
 Security features are not as good as the private cloud.
 It is not suitable if there is no collaboration.
 The fixed amount of data storage and bandwidth is shared among all community members.
The below table shows the difference between public cloud, private cloud, hybrid cloud, and community cloud.
| Parameter | Public Cloud | Private Cloud | Hybrid Cloud | Community Cloud |
| --- | --- | --- | --- | --- |
| Host | Service provider | Enterprise (third party) | Enterprise (third party) | Community (third party) |
| Users | General public | Selected users | Selected users | Community members |
| Access | Internet | Internet, VPN | Internet, VPN | Internet, VPN |
| Owner | Service provider | Enterprise | Enterprise | Community |
There are the following three types of cloud service models:
 Infrastructure as a Service (IaaS)
IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud computing platform. It allows customers to outsource their
IT infrastructure, such as servers, networking, processing, storage, virtual machines, and other resources. Customers access these resources over
the Internet using a pay-as-you-use model.
In traditional hosting services, IT infrastructure was rented out for a specific period of time, with pre-determined hardware configuration. The client
paid for the configuration and time, regardless of the actual use. With the help of the IaaS cloud computing platform layer, clients can dynamically
scale the configuration to meet changing requirements and are billed only for the services actually used.
IaaS cloud computing platform layer eliminates the need for every organization to maintain the IT infrastructure.
IaaS is offered in three models: public, private, and hybrid cloud. Private cloud implies that the infrastructure resides on the customer's premises.
In the case of public cloud, it is located at the cloud vendor's data center, and hybrid cloud is a combination of the two in
which the customer selects the best of both public and private cloud.
An IaaS provider offers the following services -
1. Compute: Computing as a Service includes virtual central processing units and virtual main memory for the VMs that are provisioned to
end users (a short provisioning sketch follows this list).
2. Storage: The IaaS provider provides back-end storage for storing files.
3. Network: Network as a Service (NaaS) provides networking components such as routers, switches, and bridges for the VMs.
4. Load balancers: It provides load balancing capability at the infrastructure layer.
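As a hedged illustration of the compute service, the sketch below provisions a single virtual machine through an IaaS API. It assumes the AWS EC2 service and the boto3 library with credentials already configured; the AMI ID, instance type, and region are placeholder values chosen for illustration, not recommendations.

```python
# Minimal sketch: provisioning one VM from an IaaS compute service.
# Assumes boto3 is installed and AWS credentials are configured; the AMI ID
# and instance type below are placeholders for illustration only.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t3.micro",          # small, pay-per-use instance size
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()         # block until the VM is up
instance.reload()                     # refresh attributes such as the public IP
print("Launched", instance.id, "at", instance.public_ip_address)
```

Because billing is usage-based, a real workflow would also terminate the instance when it is no longer needed.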
Advantages of IaaS :
There are the following advantages of IaaS computing layer -
1. Shared infrastructure: IaaS allows multiple users to share the same physical infrastructure.
2. Web access to the resources: IaaS allows IT users to access resources over the internet.
3. Pay-as-per-use model: IaaS providers offer services on a pay-per-use basis; users pay only for what they have used.
4. Focus on the core business: IaaS allows organizations to focus on their core business rather than on IT infrastructure.
5. On-demand scalability: On-demand scalability is one of the biggest advantages of IaaS. Using IaaS, users do not need to worry about
upgrading software or troubleshooting issues related to hardware components.
Disadvantages of IaaS :
1. Security: Security is one of the biggest issues in IaaS. Most of the IaaS providers are not able to provide 100% security.
2. Maintenance & Upgrade: Although IaaS service providers maintain the software, but they do not upgrade the software for some
organizations.
3. Interoperability issues: It is difficult to migrate VM from one IaaS provider to the other, so the customers might face problem
related to vendor lock-in.
Some important points about the IaaS cloud computing layer:
The IaaS cloud computing platform does more than simply replace the traditional hosting method: the cost of each resource that is used is
predictable according to usage.
An IaaS platform may not eliminate the need for an in-house IT department, which will still be needed to monitor and control the IaaS setup.
IT salary expenditure might not decrease significantly, but other IT expenses can be reduced.
Breakdowns at the IaaS vendor can bring your business to a halt. Assess the IaaS vendor's stability and finances, and make sure that the SLAs
(Service Level Agreements) provide backups for data, hardware, network, and application failures. Image portability and third-party support are a plus.
The IaaS vendor can gain access to your sensitive data, so engage with credible companies or organizations and study their
security policies and precautions.
Top IaaS providers offering an IaaS cloud computing platform:
| IaaS Vendor | IaaS Solution | Details |
| --- | --- | --- |
| Amazon Web Services | Elastic Compute Cloud (EC2), Elastic MapReduce, Route 53, Virtual Private Cloud, etc. | The cloud computing platform pioneer, Amazon offers auto scaling, cloud monitoring, and load balancing features as part of its portfolio. |
| Netmagic Solutions | Netmagic IaaS Cloud | Netmagic runs from data centers in Mumbai, Chennai, and Bangalore, and a virtual data center in the United States. Plans are underway to extend services to West Asia. |
| Rackspace | Cloud Servers, Cloud Files, Cloud Sites, etc. | The cloud computing platform vendor focuses primarily on enterprise-level hosting services. |
| Reliance Communications | Reliance Internet Data Center | RIDC supports both traditional hosting and cloud services, with data centers in Mumbai, Bangalore, Hyderabad, and Chennai. The cloud services offered by RIDC include IaaS and SaaS. |
| Sify Technologies | Sify IaaS | Sify's cloud computing platform is powered by HP's converged infrastructure. The vendor offers all three types of cloud services: IaaS, PaaS, and SaaS. |
| Tata Communications | InstaCompute | InstaCompute is Tata Communications' IaaS offering. InstaCompute data centers are located in Hyderabad and Singapore, with operations in both countries. |
 Platform as a Service (PaaS)
Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily create, test, run, and deploy web applications. You
can purchase these applications from a cloud service provider on a pay-as-you-use basis and access them over an Internet connection. In PaaS,
back-end scalability is managed by the cloud service provider, so end users do not need to worry about managing the infrastructure.
PaaS includes infrastructure (servers, storage, and networking) and platform (middleware, development tools, database management systems,
business intelligence, and more) to support the web application life cycle.
Example: Google App Engine, Force.com, Joyent, Azure.
PaaS providers supply programming languages, application frameworks, databases, and other tools:
1. Programming languages: PaaS providers provide various programming languages for the developers to develop the applications.
Popular programming languages provided by PaaS providers are Java, PHP, Ruby, Perl, and Go.
2. Application frameworks: PaaS providers provide application frameworks that simplify application development. Some
popular application frameworks provided by PaaS providers are Node.js, Drupal, Joomla, WordPress, Spring, Play, Rack, and Zend.
3. Databases: PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB, and Redis to communicate with the
applications.
4. Other tools: PaaS providers provide various other tools that are required to develop, test, and deploy the applications.
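To make the runtime-environment idea concrete, below is a hedged sketch of the kind of minimal web application a PaaS runtime would host. It assumes the Flask framework is available on the platform, and the PORT environment variable is a common but not universal PaaS convention; both are assumptions for illustration only.

```python
# Minimal sketch: a web app of the kind a PaaS runtime hosts and scales.
# Assumes the Flask framework; reading PORT from the environment is a common
# PaaS convention, but it is an assumption here, not a universal rule.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # The platform, not the developer, manages the servers running this handler.
    return "Hello from a PaaS-hosted application!"

if __name__ == "__main__":
    # Bind to the port the platform assigns; default to 8080 for local testing.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

The developer pushes only this code and its dependency list; the provider supplies the operating system, runtime, and scaling.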
Advantages of PaaS:
1. Simplified Development: PaaS allows developers to focus on development and innovation without worrying about infrastructure
management.
2. Lower risk: No need for up-front investment in hardware and software. Developers only need a PC and an internet connection to start
building applications.
3. Prebuilt business functionality: Some PaaS vendors also provide predefined business functionality so that users can avoid
building everything from scratch and can start their projects directly.
4. Instant community: PaaS vendors frequently provide online communities where developers can get ideas, share experiences,
and seek advice from others.
5. Scalability: Applications deployed can scale from one to thousands of users without any changes to the applications.
Disadvantages of PaaS:
1. Vendor lock-in: One has to write applications according to the platform provided by the PaaS vendor, so migrating an
application to another PaaS vendor can be a problem.
2. Data Privacy: Corporate data, whether critical or not, is meant to stay private, so if it is not located within the walls of the company,
there can be a risk in terms of data privacy.
3. Integration with the rest of the system's applications: It may happen that some applications are local and some are in the cloud,
so there can be increased complexity when we want to combine data in the cloud with local data.
Popular PaaS Providers: The below table shows some popular PaaS providers and services that are provided by them -
| Providers | Services |
| --- | --- |
| Google App Engine (GAE) | App Identity, URL Fetch, Cloud Storage client library, LogService |
| Salesforce.com | Faster implementation, rapid scalability, CRM services, Sales Cloud, mobile connectivity, Chatter |
| Windows Azure | Compute, security, IoT, data storage |
| AppFog | Justcloud.com, SkyDrive, GoogleDocs |
| OpenShift | RedHat, Microsoft Azure |
| Cloud Foundry from VMware | Data, messaging, and other services |
 Software as a Service (SaaS)
SaaS is also known as "On-Demand Software". It is a software distribution model in which services are hosted by a cloud service provider. These
services are available to end users over the internet, so end users do not need to install any software on their devices to access them.
There are the following services provided by SaaS providers -
 Business Services - SaaS providers provide various business services to help start up a business. SaaS business services
include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing, and sales.
 Document Management - SaaS document management is a software application offered by a third party (SaaS providers) to
create, manage, and track electronic documents.
 Example: Slack, Samepage, Box, and Zoho Forms.
 Social Networks - As we all know, social networking sites are used by the general public, so social networking service providers
use SaaS for convenience and to handle the general public's information.
 Mail Services - To handle the unpredictable number of users and the load on e-mail services, many e-mail providers offer their
services using SaaS.
Advantages of SaaS:
1. SaaS is easy to buy: SaaS pricing is based on a monthly or annual subscription fee, so it allows organizations to access business
functionality at a low cost, which is less than licensed applications.
Unlike traditional software, which is sold as a license with an up-front cost (and often an optional ongoing support fee), SaaS
providers generally price their applications using a subscription fee, most commonly monthly or annual.
2. One to Many: SaaS services are offered in a one-to-many model, meaning a single instance of the application is shared by multiple users.
3. Less hardware required for SaaS: The software is hosted remotely, so organizations do not need to invest in additional hardware.
4. Low maintenance required for SaaS: Software as a Service removes the need for installation, set-up, and daily maintenance for
organizations. The initial set-up cost for SaaS is typically less than for enterprise software. SaaS vendors price their applications
based on usage parameters, such as the number of users, which makes usage easy to monitor, and updates are applied automatically.
5. No special software or hardware versions required: All users have the same version of the software and typically access it
through the web browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and support to the SaaS
provider.
6. Multidevice support: SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and thin clients.
7. API Integration: SaaS services easily integrate with other software or services through standard APIs (a short example follows this list).
8. No client-side installation: SaaS services are accessed directly from the service provider over an internet connection, so no
client-side software installation is required.
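As a hedged illustration of API integration, the sketch below calls a SaaS provider's REST API over HTTPS using the requests library. The endpoint URL, resource name, and token are hypothetical placeholders; real SaaS services differ in their URLs and authentication schemes, so consult the provider's documentation.

```python
# Minimal sketch: integrating with a SaaS service over its REST API.
# The endpoint and token below are hypothetical placeholders; real providers
# document their own URLs and authentication schemes.
import requests

API_BASE = "https://api.example-saas.com/v1"   # hypothetical SaaS endpoint
API_TOKEN = "replace-with-your-token"          # placeholder credential

def list_projects():
    """Fetch the caller's projects from the (hypothetical) SaaS API."""
    response = requests.get(
        f"{API_BASE}/projects",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()                # fail loudly on HTTP errors
    return response.json()

if __name__ == "__main__":
    for project in list_projects():
        print(project.get("name"))
```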
Disadvantages of SaaS:
1. Security: Data is stored in the cloud, so security may be an issue for some users. However, cloud computing is not more secure
than in-house deployment.
2. Latency issue: Since data and applications are stored in the cloud at a variable distance from the end user, there may be greater
latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not
suitable for applications that demand response times in milliseconds.
3. Total Dependency on Internet: Without an internet connection, most SaaS applications are not usable.
4. Switching between SaaS vendors is difficult: Switching SaaS vendors involves the difficult and slow task of transferring very
large data files over the internet and then converting and importing them into the new SaaS application.
Popular SaaS Providers: The below table shows some popular SaaS providers and services that are provided by them -
| Provider | Services |
| --- | --- |
| Salesforce.com | On-demand CRM solutions |
| Microsoft Office 365 | Online office suite |
| Google Apps | Gmail, Google Calendar, Docs, and Sites |
| NetSuite | ERP, accounting, order management, CRM, Professional Services Automation (PSA), and e-commerce applications |
| GoToMeeting | Online meeting and video-conferencing software |
| Constant Contact | E-mail marketing, online surveys, and event marketing |
| Oracle CRM | CRM applications |
| Workday, Inc. | Human capital management, payroll, and financial management |
Difference between IaaS, PaaS, and SaaS:
| IaaS | PaaS | SaaS |
| --- | --- | --- |
| It provides a virtual data center to store information and create platforms for app development, testing, and deployment. | It provides virtual platforms and tools to create, test, and deploy apps. | It provides web software and apps to complete business tasks. |
| It provides access to resources such as virtual machines, virtual storage, etc. | It provides runtime environments and deployment tools for applications. | It provides software as a service to the end users. |
| It is used by network architects. | It is used by developers. | It is used by end users. |
| IaaS provides only infrastructure. | PaaS provides infrastructure + platform. | SaaS provides infrastructure + platform + software. |
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a server, a desktop, a storage device, an operating
system or network resources".
In other words, virtualization is a technique which allows a single physical instance of a resource or an application to be shared among multiple
customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when
demanded.
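As a hedged, purely conceptual illustration of the logical-name idea described above, the toy sketch below maps logical volume names to physical storage paths and resolves them on demand. The class name and paths are invented for illustration and do not correspond to any real virtualization product or API.

```python
# Toy sketch (illustration only): mapping logical names to physical resources,
# the core idea behind virtualization. The class and paths are invented here
# and do not represent any real virtualization product or API.
class VirtualStorage:
    def __init__(self):
        self._mapping = {}  # logical name -> physical location

    def attach(self, logical_name, physical_path):
        """Expose a physical resource under a stable logical name."""
        self._mapping[logical_name] = physical_path

    def resolve(self, logical_name):
        """Return a pointer (here, a path) to the physical resource on demand."""
        return self._mapping[logical_name]

store = VirtualStorage()
store.attach("vol-data", "/dev/disk/by-id/raid-array-3")  # invented path
print(store.resolve("vol-data"))  # callers never see the physical layout
```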
What is the concept behind Virtualization?
The creation of a virtual machine over existing operating system and hardware is known as hardware virtualization. A virtual machine provides an
environment that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and the virtual machine is referred to as the guest machine.
Types of Virtualization:
1. Hardware Virtualization: When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware
system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources.
After virtualizing the hardware system, we can install different operating systems on it and run different applications on those operating systems.
Usage: Hardware virtualization is mainly done for the server platforms, because controlling virtual machines is much easier than controlling a
physical server.
2. Operating System Virtualization: When the virtual machine software or virtual machine manager (VMM) is installed on the host
operating system instead of directly on the hardware system, it is known as operating system virtualization.
Usage: Operating system virtualization is mainly used for testing applications on different operating system platforms.
3. Server Virtualization: When the virtual machine software or virtual machine manager (VMM) is installed directly on the server
system, it is known as server virtualization.
Usage: Server virtualization is done because a single physical server can be divided into multiple servers on demand and for
load balancing.
4. Storage Virtualization: Storage virtualization is the process of grouping the physical storage from multiple network storage devices so
that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.
Usage: Storage virtualization is mainly done for back-up and recovery purposes.
Virtualization plays a very important role in cloud computing technology. Normally, in cloud computing, users share the data present in the
cloud, such as applications, but with the help of virtualization users actually share the infrastructure.
The main use of virtualization technology is to provide standard versions of applications to cloud users; when the next
version of an application is released, the cloud provider has to supply the latest version to its users, which is practically difficult because
it is more expensive.
To overcome this problem, we use virtualization technology. With virtualization, all the servers and software applications
required by cloud providers are maintained by third parties, and the cloud providers pay for them on a monthly or annual
basis.
Conclusion:
Mainly, virtualization means running multiple operating systems on a single machine while sharing all the hardware resources. It helps to
provide a pool of IT resources that can be shared in order to gain business benefits.
Data virtualization is the process of retrieving data from various sources without knowing its type or the physical location where it is stored. It collects
heterogeneous data from different sources and allows data users across the organization to access it according to their work
requirements. This heterogeneous data can be accessed through any application, such as web portals, web services, e-commerce, Software as a
Service (SaaS), and mobile applications.
We can use Data Virtualization in the field of data integration, business intelligence, and cloud computing.
Advantages of Data Virtualization: There are the following advantages of data virtualization -
 It allows users to access data without worrying about where it physically resides.
 It offers better customer satisfaction, retention, and revenue growth.
 It provides various security mechanisms that allow users to safely store their personal and professional information.
 It reduces costs by removing data replication.
 It provides a user-friendly interface to develop customized views.
 It provides various simple and fast deployment resources.
 It increases business user efficiency by providing data in real-time.
 It is used to perform tasks such as data integration, business integration, Service-Oriented Architecture (SOA) data services, and
enterprise search.
Disadvantages of Data Virtualization:
 It creates availability issues, because availability is maintained by third-party providers.
 It requires a high implementation cost.
 It can create availability and scalability issues.
 Although it saves time during the implementation phase of virtualization, it can take more time to generate the appropriate results.
Uses of Data Virtualization: There are the following uses of Data Virtualization -
1. Analyze performance: Data virtualization is used to analyze the performance of the organization compared to previous years.
2. Search and discover interrelated data: Data Virtualization (DV) provides a mechanism to easily search data which is similar and
internally related.
3. Agile Business Intelligence: This is one of the most common uses of data virtualization. It is used in agile reporting and real-time
dashboards that require timely aggregation, analysis, and presentation of relevant data from multiple sources. Both individuals and
managers use this to monitor performance, which helps with daily operational decision-making in areas such as sales, support, finance,
logistics, legal, and compliance.
4. Data Management: Data virtualization provides a secure centralized layer to search, discover, and govern the unified data and its
relationships.
Data Virtualization Tools:
There are the following Data Virtualization tools –
 Red Hat JBoss Data Virtualization: Red Hat's data virtualization is a strong choice for developers and those who are using microservices
and containers. It is written in Java.
 TIBCO Data Virtualization: TIBCO helps administrators and users to create a data virtualization platform for accessing multiple
data sources and data sets. It provides a built-in transformation engine to combine non-relational and unstructured data sources.
 Oracle Data Service Integrator: It is a very popular and powerful data integration tool which mainly works with Oracle products. It
allows organizations to quickly develop and manage data services to access a single view of data.
 SAS Federation Server: SAS Federation Server provides scalable, multi-user, standards-based data access for retrieving data from
multiple data services. It mainly focuses on securing data.
 Denodo: Denodo is one of the best data virtualization tools which allows organizations to minimize the network traffic load and improve
response time for large data sets. It is suitable for both small as well as large organizations.
Industries that use Data Virtualization:
 Communication & Technology: In Communication & Technology industry, data virtualization is used to increase revenue per
customer, create a real-time ODS for marketing, manage customers, improve customer insights, and optimize customer care, etc.
 Finance: In the field of finance, DV is used to improve trade reconciliation, empowering data democracy, addressing data complexity,
and managing fixed-risk income.
 Government: In the government sector, DV is used for protecting the environment.
 Healthcare: Data virtualization plays a very important role in the field of healthcare. In healthcare, DV helps to improve patient care,
drive new product innovation, accelerating M&A synergies, and provide a more efficient claims analysis.
 Manufacturing: In manufacturing industry, data virtualization is used to optimize a global supply chain, optimize factories, and improve
IT assets utilization.
Previously, there was "one to one relationship" between physical servers and operating system. Low capacity of CPU, memory, and networking
requirements were available. So, by using this model, the costs of doing business increased. The physical space, amount of power, and hardware
required meant that costs were adding up.
The hypervisor manages shared the physical resources of the hardware between the guest operating systems and host operating system. The
physical resources become abstracted versions in standard formats regardless of the hardware platform. The abstracted hardware is represented as
actual hardware. Then the virtualized operating system looks into these resources as they are physical entities.
Virtualization means abstraction. Hardware virtualization is accomplished by abstracting the physical hardware layer by use of a hypervisor or
VMM (Virtual Machine Monitor).
When the virtual machine software, virtual machine manager (VMM), or hypervisor software is installed directly on the hardware system, it is known
as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources.
After virtualizing the hardware system, we can install different operating systems on it and run different applications on those operating systems.
Usage of Hardware Virtualization:
Hardware virtualization is mainly done for the server platforms, because controlling virtual machines is much easier than controlling a physical
server.
Advantages of Hardware Virtualization:
The main benefits of hardware virtualization are more efficient resource utilization, lower overall costs, and increased uptime and IT flexibility.
 More Efficient Resource Utilization: Physical resources can be shared among virtual machines. Unused resources allocated to one
virtual machine can be used by other virtual machines if the need exists.
 Lower Overall Costs Because Of Server Consolidation: Multiple operating systems can now co-exist on a single
hardware platform, so the number of servers, the rack space, and the power consumption drop significantly.
 Increased Uptime Because Of Advanced Hardware Virtualization Features: Modern hypervisors provide highly orchestrated
operations that maximize the abstraction of the hardware and help ensure maximum uptime. These features make it possible to migrate a
running virtual machine from one host to another dynamically, as well as to maintain a running copy of a virtual machine on another physical
host in case the primary host fails.
 Increased IT Flexibility: Hardware virtualization enables quick deployment of server resources in a managed and consistent way.
As a result, IT can adapt quickly and provide the business with the resources it needs in good time.
Managing applications and their distribution is a routine task for IT departments. The installation mechanism differs from application to application.
Some programs require certain helper applications or frameworks, and these may conflict with existing applications.
Software virtualization, like other forms of virtualization, abstracts the software installation procedure and creates virtual software
installations.
Virtualized software is an application that is "installed" into its own self-contained unit.
Examples of software virtualization are VMware Workstation, VirtualBox, etc. In the next pages, we are going to see how to install the Linux and
Windows operating systems on the VMware application.
Advantages of Software Virtualization
 Client Deployments Become Easier: By copying a file to a workstation or linking a file on the network, we can easily deploy virtual
software.
 Easy to manage: Managing updates becomes a simpler task. You update the application in one place and deploy the updated virtual
application to all clients.
 Software Migration: Without software virtualization, moving from one software platform to another takes a long time to deploy and
impacts end-user systems. With the help of a virtualized software environment, migration becomes easier.
Server Virtualization is the process of dividing a physical server into several virtual servers, called virtual private servers. Each virtual private
server can run independently.
The concept of server virtualization is widely used in IT infrastructure to minimize costs by increasing the utilization of existing resources.
Types of Server Virtualization
1. Hypervisor: In server virtualization, the hypervisor plays an important role. It is a layer between the operating system (OS)
and the hardware. There are two types of hypervisors.
 Type 1 hypervisor (also known as bare-metal or native hypervisors)
 Type 2 hypervisor (also known as hosted or embedded hypervisors)
The hypervisor is mainly used to perform tasks such as allocating physical hardware resources (CPU, RAM, etc.) to several smaller
independent virtual machines, called "guests", on the host machine.
2. Full Virtualization: Full virtualization uses a hypervisor to communicate directly with the CPU and physical server. It provides the best
isolation and security mechanism for the virtual machines.
The biggest disadvantage of using a hypervisor in full virtualization is that the hypervisor has its own processing needs, so it can slow down
application and server performance.
VMware ESX Server is a well-known example of full virtualization.
3. Para Virtualization: Paravirtualization is quite similar to full virtualization. The advantage of using this virtualization is that it
is easier to use, offers enhanced performance, and does not require emulation overhead. Xen (primarily) and UML use
paravirtualization.
The difference between full and para virtualization is that, in paravirtualization, the hypervisor does not need much processing power to
manage the guest OS.
4. Operating System Virtualization: Operating system virtualization is also called system-level virtualization. It is a server
virtualization technology that divides one operating system into multiple isolated user-space instances, called virtual environments. The
biggest advantage of this kind of server virtualization is that it reduces the use of physical space, so it saves money.
Linux OS virtualization and Windows OS virtualization are the types of operating system virtualization.
FreeVPS, OpenVZ, and Linux-VServer are some examples of system-level virtualization.
Note: OS-level virtualization never uses a hypervisor.
5. Hardware Assisted Virtualization: Hardware-assisted virtualization was introduced by AMD and Intel. It is also known
as hardware virtualization, AMD virtualization, or Intel virtualization. It is designed to increase the performance of virtualization on the
processor. The advantage of using hardware-assisted virtualization is that it requires less hypervisor overhead (a quick check for this
CPU support is sketched just after this list).
6. Kernel-Level Virtualization: Kernel-level virtualization is one of the most important types of server virtualization. It is an open-
source virtualization approach which uses the Linux kernel as a hypervisor. The advantage of using kernel virtualization is that it does not
require any special administrative software and has very little overhead.
User Mode Linux (UML) and the Kernel-based Virtual Machine (KVM) are some examples of kernel virtualization.
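The sketch below, referenced above, is a small Linux-only check (standard-library Python, no vendor tooling) for the Intel VT-x ("vmx") and AMD-V ("svm") CPU flags that hardware-assisted virtualization relies on, plus the /dev/kvm device node used by kernel-level virtualization. It is an illustrative sketch, not an official detection tool.

import os

# Linux-only sketch: look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags
# and for the KVM device node exposed by the kernel.
with open("/proc/cpuinfo") as f:
    tokens = f.read().split()

print("Hardware virtualization extensions present:", "vmx" in tokens or "svm" in tokens)
print("KVM device available:", os.path.exists("/dev/kvm"))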
Advantages of Server Virtualization:
There are the following advantages of Server Virtualization -
1. Independent Restart: In server virtualization, each virtual server can be restarted independently without affecting the working of the other
virtual servers.
2. Low Cost: Server virtualization can divide a single server into multiple virtual private servers, so it reduces the cost of hardware
components.
3. Disaster Recovery: Disaster recovery is one of the best advantages of server virtualization. In server virtualization, data can easily
and quickly be moved from one server to another, and it can be stored and retrieved from anywhere.
4. Faster deployment of resources: Server virtualization allows us to deploy our resources in a simpler and faster way.
5. Security: It allows users to store their sensitive data inside the data centers.
Disadvantages of Server Virtualization
There are the following disadvantages of Server Virtualization -
1. The biggest disadvantage of server virtualization is that when the physical server goes offline, all the websites that are hosted by the server
will also go down.
2. It is difficult to measure the performance of virtualized environments accurately.
3. It requires a large amount of RAM.
4. It is difficult to set up and maintain.
5. Some core applications and databases do not support virtualization.
6. It requires extra hardware resources.
Uses of Server Virtualization
A list of uses of server virtualization is given below -
1. Server Virtualization is used in the testing and development environment.
2. It improves the availability of servers.
3. It allows organizations to make efficient use of resources.
4. It provides redundancy without the purchase of additional hardware components.
Traditionally, there has been a strong link between the physical host and its locally installed storage devices. However, that paradigm has
been changing drastically, to the point that local storage is almost no longer needed. As technology progresses, more advanced storage devices are coming to
the market that provide more functionality and make local storage obsolete.
Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers. Operating systems and
applications can access the disks directly and write to them. The controllers configure the local storage in RAID groups and
present the storage to the operating system depending upon the configuration. The storage is nonetheless abstracted, and the controller
determines how to write the data or retrieve the requested data for the operating system.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes the data to a remote location with no need to understand how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over the WAN environment, WAN accelerators will cache the data locally
and present the re-requested blocks at LAN speed, while not impacting the WAN performance.
SAN and NAS: Storage is presented to the operating system over the network. NAS presents the storage as file operations (like NFS). SAN
technologies present the storage as block-level storage (like Fibre Channel). SAN technologies receive operating instructions as if the
storage were a locally attached device.
Storage Tiering: Using the storage pool concept as a stepping stone, storage tiering analyzes the most commonly used data and places it on the
highest-performing storage pool, while the least-used data is placed on the lowest-performing storage pool.
This operation is done automatically without any interruption of service to the data consumer.
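To make the tiering idea concrete, here is a minimal, vendor-neutral sketch (not any product's API) that ranks blocks by access count and assigns the most frequently used ones to a "hot" pool and the rest to a "cold" pool; real tiering engines do this continuously and transparently.

# Illustrative sketch only: rank data blocks by access count and split them
# between a fast ("hot") pool and a slower ("cold") pool.
access_counts = {"block-A": 950, "block-B": 12, "block-C": 430, "block-D": 3}

def tier_blocks(counts, hot_fraction=0.5):
    ranked = sorted(counts, key=counts.get, reverse=True)
    hot_size = max(1, int(len(ranked) * hot_fraction))
    return {"hot": ranked[:hot_size], "cold": ranked[hot_size:]}

print(tier_blocks(access_counts))  # {'hot': ['block-A', 'block-C'], 'cold': ['block-B', 'block-D']}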
Advantages of Storage Virtualization
1. Data is stored in more convenient locations, away from the specific host. In the case of a host failure, the data is not
necessarily compromised.
2. The storage devices can perform advanced functions like replication, deduplication, and disaster recovery.
3. By abstracting the storage level, IT operations become more flexible in how storage is provided, partitioned, and
protected.
With the help of OS virtualization, nothing is pre-installed or permanently loaded on the local device and no hard disk is needed. Everything runs
from the network using a kind of virtual disk. This virtual disk is actually a disk image file stored on a remote server, SAN (Storage Area Network),
or NAS (Network Attached Storage). The client is connected over the network to this virtual disk and boots with the operating system
installed on the virtual disk.
How does OS Virtualization work?
Components needed for using OS Virtualization in the infrastructure are given below:
The first component is the OS Virtualization server. This server is the central point in the OS Virtualization infrastructure. The server manages the
streaming of the information on the virtual disks for the clients and also determines which client will be connected to which virtual disk (this
information is stored in a database). The server can host the storage for the virtual disks locally, or it can be connected to the virtual disks
via a SAN (Storage Area Network). In high-availability environments there can be multiple OS Virtualization servers to provide redundancy and load
balancing. The server also ensures that each client is unique within the infrastructure.
Secondly, there is the client, which contacts the server to get connected to the virtual disk and asks for the components stored on the virtual disk
needed to run the operating system.
The available supporting components are a database for storing the configuration and settings of the server, a streaming service for the virtual disk
content, an (optional) TFTP service, and an (also optional) PXE boot service for connecting the client to the OS Virtualization servers.
As already mentioned, the virtual disk contains an image of a physical disk that reflects the configuration and the
settings of the systems that will be using the virtual disk. When the virtual disk is created, that disk needs to be assigned to the client that
will use it for starting up. The connection between the client and the disk is made through the administrative tool and saved in the
database. When a client has an assigned disk, the machine can be started with the virtual disk using the following process:
 Connecting to the OS Virtualization server: First we start the machine and set up the connection with the OS Virtualization server.
Most products offer several possible methods to connect with the server. One of the most popular methods is a
PXE service, but a bootstrap program is also used a lot (because of the disadvantages of the PXE service). Either way, each method initializes the
network interface card (NIC), obtains a (DHCP-based) IP address, and establishes a connection to the server.
 Connecting the Virtual Disk: When the connection is established between the client and the server, the server looks into its
database to check whether the client is known and which virtual disk is assigned to the client. When more than one virtual disk
is assigned, a boot menu is displayed on the client side. If only one disk is assigned, that disk is connected to the client,
as described in the next step.
 VDisk connected to the client: After the desired virtual disk is selected by the client, that virtual disk is connected through the OS
Virtualization server. At the back end, the OS Virtualization server makes sure that the client is unique (for example, its computer name
and identifier) within the infrastructure.
 OS is "streamed" to the client: As soon as the disk is connected, the server starts streaming the content of the virtual disk. The software
knows which parts are necessary for starting the operating system smoothly, so these parts are streamed first. The information
streamed to the system needs to be stored somewhere (i.e., cached). Most products offer several ways to cache that information, for
example on the client's hard disk or on the disk of the OS Virtualization server.
 Additional Streaming: After the first part is streamed, the operating system starts to run as expected. Additional virtual
disk data is streamed when required for running or starting a function requested by the user (for example, starting an application
available on the virtual disk).
VMware Workstation software is used to virtualize the operating system. To install any operating system virtually, you need to install
the VMware software first. We are using VMware Workstation 10.
Before installing the Linux OS, you need to have an ISO image file of the Linux OS. Let's see the steps to install the Linux OS virtually.
How to create new virtual machine for linux OS?
1) Click on create new virtual machine.
2) In the welcome window, choose the custom option and click the next button.
3) In the "choose the virtual machine hardware compatibility" window, click the next button.
4) In the guest operating system window, choose the ISO image file from the disk or any drive. I have put the ISO file of Ubuntu in the E: drive, so browse to
your ISO image and click the next button.
5) In the easy install information window, provide the full name, username, password, and confirm password, then click the next button.
You can see the given information.
6) In the processor configuration window, you can select the number of processors and the number of cores per processor. If you don't want to change the
default settings, just click next.
7) In the memory for the virtual machine window, you can set the memory limit. Click the next button.
8) In the specify disk capacity window, you can set the disk size. Click the next button.
9) In the specify disk file window, you can specify the disk file, then click the next button.
10) In the ready to create virtual machine window, click the finish button.
11) Now you will see the VMware screen and then the Ubuntu screen.
To install the Windows OS virtually, you need to install VMware first. After installing the virtualization software, you will get a window to install the new
operating system.
Let's see the steps to install the Windows OS on VMware Workstation.
Cloud Service Providers (CSPs) offer various services such as Software as a Service, Platform as a Service, Infrastructure as a
Service, network services, business applications, mobile applications, and infrastructure in the cloud. The cloud service providers host
these services in data centers, and users can access these services from the cloud provider companies using an internet connection.
There are the following Cloud Service Providers Companies -
 Amazon Web Services (AWS)
AWS (Amazon Web Services) is a secure cloud services platform provided by Amazon. It offers services such as database storage,
computing power, content delivery, Relational Database Service, Simple Email Service, Simple Queue Service, and other functionality to support the organization's growth.
Features of AWS
AWS provides various powerful features for building scalable, cost-effective enterprise applications. Some important features of AWS are given
below (a small SDK sketch follows this list):
 AWS is scalable because it can scale computing resources up or down according to the organization's demand.
 AWS is cost-effective as it works on a pay-as-you-go pricing model.
 It provides various flexible storage options.
 It offers various security services such as infrastructure security, data encryption, monitoring & logging, identity & access
control, penetration testing, and protection against DDoS attacks.
 It can efficiently manage and secure Windows workloads.
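The on-demand, pay-as-you-go model described above can be exercised programmatically. The sketch below uses boto3, the AWS SDK for Python, to launch a single small EC2 instance; the AMI ID is a placeholder, and it assumes AWS credentials and permissions are already configured locally.

import boto3  # AWS SDK for Python; assumes credentials are configured (e.g., via `aws configure`)

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small, pay-as-you-go instance. The ImageId below is a placeholder AMI ID.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID - replace with a real one
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])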
 Microsoft Azure
Microsoft Azure is also known as Windows Azure. It supports various operating systems, databases, programming languages, and frameworks that
allow IT professionals to easily build, deploy, and manage applications through a worldwide network. It also allows users to create different groups
for related utilities.
Features of Microsoft Azure
 Microsoft Azure provides scalable, flexible, and cost-effective services.
 It allows developers to quickly manage applications and websites.
 It lets each resource be managed individually (a small SDK sketch follows this list).
 Its IaaS infrastructure allows us to launch general-purpose virtual machines on different platforms such as Windows and Linux.
 It offers a content delivery network (CDN) for delivering images, videos, audio, and applications.
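As a hedged illustration of managing Azure resources individually from code, the sketch below uses the Azure SDK for Python (azure-identity and azure-mgmt-compute) to list the virtual machines in a subscription; the subscription ID is a placeholder and valid Azure credentials are assumed.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID; DefaultAzureCredential picks up local Azure credentials.
subscription_id = "00000000-0000-0000-0000-000000000000"
compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# List every VM in the subscription along with its region.
for vm in compute_client.virtual_machines.list_all():
    print(vm.name, vm.location)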
 Google Cloud Platform
Google Cloud Platform is a product of Google. It consists of a set of physical assets, such as computers and hard disk drives, and virtual resources,
such as virtual machines. It also helps organizations to simplify the migration process.
Features of Google Cloud
 Google Cloud includes various big data services such as Google BigQuery, Google Cloud Dataproc, Google Cloud Datalab, and
Google Cloud Pub/Sub (a small BigQuery sketch follows this list).
 It provides various services related to networking, including Google Virtual Private Cloud (VPC), Content Delivery Network,
Google Cloud Load Balancing, Google Cloud Interconnect, and Google Cloud DNS.
 It offers various scalable and high-performance services.
 GCP provides various serverless services such as Messaging, Data Warehouse, Database, Compute, Storage, Data Processing,
and Machine Learning (ML).
 It provides a free cloud shell environment with Boost Mode.
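To show one of these big data services in action, the sketch below uses the google-cloud-bigquery client library to run a query against a public BigQuery dataset; it assumes Google Cloud credentials are available in the environment and is meant as a minimal sketch, not a production pipeline.

from google.cloud import bigquery  # pip install google-cloud-bigquery

# Assumes GOOGLE_APPLICATION_CREDENTIALS (or equivalent) points at valid credentials.
client = bigquery.Client()

# Query a public dataset: the five most common given names in the USA (1910-2013).
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)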
 IBM Cloud Services
IBM Cloud is an open, fast, and reliable platform. It is built with a suite of advanced data and AI tools. It offers various services such
as Infrastructure as a Service, Software as a Service, and Platform as a Service. You can access its services, such as compute power, cloud data &
analytics, cloud use cases, and storage and networking, using an internet connection.
Features of IBM Cloud
 IBM Cloud improves operational efficiency.
 Its speed and agility improve customer satisfaction.
 It offers Infrastructure as a Service (IaaS), Platform as a Service (PaaS), as well as Software as a Service (SaaS).
 It offers various cloud communications services for your IT environment.
 VMware Cloud
VMware Cloud is a Software-Defined Data Center (SDDC) unified platform for the hybrid cloud. It allows cloud providers to build agile, flexible,
efficient, and robust cloud services.
Features of VMware
 VMware Cloud works on a pay-as-per-use, monthly subscription model.
 It provides better customer satisfaction by protecting the user's data.
 It can easily create a new VMware Software-Defined Data Center (SDDC) cluster on AWS cloud by utilizing a RESTful API.
 It provides flexible storage options. We can manage our application storage on a per-application basis.
 It provides a dedicated high-performance network for managing the application traffic and also supports multicast networking.
 It reduces time and cost complexity.
 Oracle cloud
The Oracle Cloud platform is offered by the Oracle Corporation. It combines Platform as a Service, Infrastructure as a Service, Software as a Service,
and Data as a Service with cloud infrastructure. It is used to perform tasks such as moving applications to the cloud, managing development
environments in the cloud, and optimizing connection performance.
Features of Oracle cloud
 Oracle Cloud provides various tools to build, integrate, monitor, and secure applications.
 Its infrastructure supports various languages, including Java, Ruby, PHP, and Node.js.
 It integrates with Docker, VMware, and other DevOps tools.
 Oracle database not only provides unparalleled integration between IaaS, PaaS, and SaaS, but also integrates with the on-premises
platform to improve operational efficiency.
 It maximizes the value of IT investments.
 It offers customizable Virtual Cloud Networks, firewalls, and IP addresses to securely support private networks.
 Red Hat
Red Hat Virtualization is an open-standard server and desktop virtualization platform produced by Red Hat. It is very popular in the Linux environment for
providing various infrastructure solutions for virtualized servers as well as technical workstations. Many small and medium-sized organizations
use Red Hat to run their organizations smoothly. It offers higher density, better performance, agility, and security to the resources. It also improves
the organization's economy by providing cheaper and easier management capabilities.
Features of Red Hat
 Red Hat provides secure, certified, and updated container images via the Red Hat Container catalog.
 Red Hat cloud includes OpenShift, which is an app development platform that allows developers to access, modernize,
and deploy apps
 It supports up to 16 virtual machines, each having up to 256GB of RAM.
 It offers better reliability, availability, and serviceability.
 It provides flexible storage capabilities, including very large SAN-based storage, better management of memory allocations, high
availability of LVMs, and support for rollback.
 In the desktop environment, it includes features like a new on-screen keyboard, GNOME Software (which allows us to install and
update applications), and extended device support.
 DigitalOcean
DigitalOcean is a cloud provider that offers computing services to organizations. It was founded in 2011 by Ben and Moisey Uretsky. It
is one of the best cloud providers for managing and deploying web applications.
Features of DigitalOcean
 It uses the KVM hypervisor to allocate physical resources to the virtual servers.
 It provides high-quality performance.
 It offers a digital community platform that helps to answer queries and gather feedback.
 It allows developers to use cloud servers to quickly create new virtual machines for their projects (a small API sketch follows this list).
 It offers one-click apps for droplets. These apps include MySQL, Docker, MongoDB, WordPress, phpMyAdmin, the LAMP stack, Ghost,
and Machine Learning.
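The sketch below shows how such a droplet (virtual machine) might be created over DigitalOcean's public v2 REST API using the requests library; the token is a placeholder, and the region, size, and image slugs are illustrative values that should be checked against the current API documentation.

import requests

API_URL = "https://api.digitalocean.com/v2/droplets"
headers = {
    "Authorization": "Bearer <YOUR_DIGITALOCEAN_TOKEN>",  # placeholder API token
    "Content-Type": "application/json",
}
payload = {
    "name": "example-droplet",      # illustrative values; verify slugs in the API docs
    "region": "nyc3",
    "size": "s-1vcpu-1gb",
    "image": "ubuntu-22-04-x64",
}

resp = requests.post(API_URL, json=payload, headers=headers)
print(resp.status_code, resp.json().get("droplet", {}).get("id"))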
 Rackspace
Rackspace offers cloud computing services such as hosting web applications, Cloud Backup, Cloud Block Storage, Databases, and Cloud Servers.
The main aim of Rackspace is to make private and public cloud deployments easy to manage. Its data centers operate in the USA, UK, Hong
Kong, and Australia.
Features of Rackspace
 Rackspace provides various tools that help organizations to collaborate and communicate more efficiently.
 We can access files that are stored on the Rackspace cloud drive, anywhere, anytime using any device.
 It offers six data centers globally.
 It can manage both virtual servers and dedicated physical servers on the same network.
 It provides better performance at a lower cost.
 Alibaba Cloud
Alibaba Cloud offers data management and highly scalable cloud computing services. It provides various services, including Elastic
Computing, Storage, Networking, Security, Database Services, Application Services, Media Services, Cloud Communication, and Internet of Things.
Features of Alibaba Cloud
 Alibaba cloud offers a suite of global cloud computing services for both international customers and Alibaba Group's e-commerce
ecosystem.
 Its services are available on a pay-as-per-use basis.
 It operates 14 data centers globally.
 It offers scalable and reliable data storage.
Difference between AWS, Azure, and Google Cloud Platform
Although AWS, Microsoft Azure, and Google Cloud Platform offer various high-level features in terms of computing, management, storage, and
other services, there are also some differences between these three.
 Amazon Web Services (AWS): Amazon Web Services (AWS) is a cloud computing platform which was introduced in 2002. It offers a
wide range of cloud services such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
AWS provides the largest community with millions of active customers as well as thousands of partners globally. Most of the
organizations use AWS to expand their business by moving their IT management to the AWS.
Flexibility, security, scalability, and better performance are some important features of AWS.
 Microsoft Azure: Microsoft Azure is also called Windows Azure. It is a worldwide cloud platform which is used for building, deploying,
and managing services. It supports multiple programming languages such as Java, Node.js, C, and C#. The advantage of using Microsoft
Azure is that it allows us to use a wide variety of services without arranging and purchasing additional hardware components.
Microsoft Azure provides several computing services, including servers, storage, databases, software, networking, and analytics over
the Internet.
 Google Cloud Platform (GCP): Google Cloud Platform (GCP) was introduced by Google in 2011. It allows us to use Google's products
such as the Google search engine, Gmail, YouTube, etc. Most of the companies use this platform to easily build, move, and deploy
applications on the cloud. It allows us to access these applications using a high-speed internet connection. The advantage of GCP is that
it supports various databases such as SQL Server, MySQL, Oracle, SAP, and more.
Google Cloud Platform (GCP) provides various cloud computing services, including computing, data analytics, data storage, and machine
learning.
The below table shows the difference between AWS, Azure, and Google Cloud Platform -
Parameter | AWS | Azure | Google Cloud Platform
App Testing | Device Farm | DevTest Labs | Cloud Test Lab
API Management | Amazon API Gateway | Azure API Gateway | Cloud Endpoints
Kubernetes Management | EKS | Kubernetes Service | Kubernetes Engine
Git Repositories | AWS Source Repositories | Azure Source Repositories | Cloud Source Repositories
Data Warehouse | Redshift | SQL Warehouse | BigQuery
Object Storage | S3 | Block Blobs and Files | Google Cloud Storage
Relational DB | RDS | Relational DBs | Google Cloud SQL
Block Storage | EBS | Page Blobs | Persistent Disks
Marketplace | AWS | Azure | G Suite
File Storage | EFS | Azure Files | ZFS and Avere
Media Services | Amazon Elastic Transcoder | Azure Media Services | Cloud Video Intelligence API
Virtual Network | VPC | VNet | Subnet
Pricing | Per hour | Per minute | Per minute
Maximum processors in VM | 128 | 128 | 96
Maximum memory in VM (GiB) | 3904 | 3800 | 1433
Caching | ElastiCache | Redis Cache | Cloud CDN
Load Balancing Configuration | Elastic Load Balancing | Load Balancer, Application Gateway | Cloud Load Balancing
Global Content Delivery Networks | CloudFront | Content Delivery Network | Cloud Interconnect
Google updated its algorithm in July 2018 to include page load speed as a ranking metric. Consider the consequences: if customers leave the page
because of load time, the page's rankings suffer.
Load time is just one example of how significant hosting services are and how they affect the overall profitability of a company.
Now, let's break down the distinction between the two key kinds of hosting services to understand their significance: cloud hosting and
dedicated servers.
Each has certain benefits and drawbacks that may become especially significant to an organization on a budget, meeting time restrictions, or
looking to grow. The definitions and differences you need to know are discussed here.
Cloud Ecosystem
A cloud environment is a dynamic system of interrelated components, all of which come together to make cloud services possible. The cloud
infrastructure behind cloud services is made up of software and hardware components, and also of cloud clients, cloud experts, vendors, integrators, and
partners.
The cloud is built to function as a single entity out of a virtually limitless number of servers. As data is stored "in the cloud," it
is kept in a virtual environment that can pull support from numerous geographically distributed physical platforms across the world.
The hubs of this environment are individual servers, mostly in data center facilities, that are linked so they can exchange services in virtual space.
Together they form a cloud.
To distribute computing resources, cloud servers rely on pooled storage, such as Ceph or a wide Storage Area Network (SAN). Because
hosted and virtual server data are decoupled from any single machine, in the event of a malfunction a server's state can be easily transferred to another
environment.
A hypervisor is often used to manage the cloud storage that is split across machines. It also controls the assignment of hardware resources,
such as processor cores, RAM, and storage space, to every cloud server.
Dedicated Hosting System
The dedicated server hosting environment does not rely on virtualization technologies. The strengths and weaknesses of a specific piece
of hardware are the foundation of all its capabilities.
The word "dedicated" derives from the fact that the hardware is separated from any other physical environment around it. The
equipment is deliberately designed to offer industry-leading efficiency, power, longevity, and, very importantly, durability.
What is Cloud Server, and How it works
Cloud computing is the on-demand provisioning of computer network resources, particularly data storage and computational capability, without
explicit, active user intervention. In common usage, the term describes data centers accessible over the web to many users. Today, large
deployments typically have operations spread across cloud servers in several environments. If communication needs to be closer to the user,
an edge server can be assigned.
Cloud server hosting is, in basic words, a virtualized storage network.
The core-level support for cloud infrastructure is provided by machines known as bare-metal servers. A public cloud is mainly
composed of many bare-metal nodes, typically housed in protected, colocated network infrastructure. Each of these
physical servers hosts multiple virtual servers.
In a couple of seconds, a virtual machine can be built. When it is no longer required, it can also be discarded fast. It is also an easy task to submit
information to a virtual server, without the need for in-depth hardware upgrades. Another of the main benefits of cloud infrastructure is versatility,
and it is a quality that is central to the cloud service concept.
There will be several web servers within such a private cloud providing services from the same physical environment. And though each underlying device
is a bare-metal server, what consumers pay for and eventually use is the virtual environment.
Dedicated Server Hosting
Dedicated hosting means providing data center hardware to only a specific customer.
All of the server's resources are offered to the particular client who leases or purchases the equipment. Resources are tailored to the
customer's requirements, such as storage, RAM, bandwidth load, and processor type. Dedicated hosting servers are among the most powerful
computers in the marketplace and most often include several processors.
A dedicated deployment may require a cluster of servers, with several dedicated servers all connecting to one virtual network
location. Even so, only one customer has access to the resources in that virtual environment.
 Hybrid cloud server (Mixture of Dedicated and cloud server)
A hybrid cloud is an increasingly prevalent architecture that many businesses use. Dedicated and cloud hosting alternatives are combined in a
hybrid cloud. A hybrid may also combine dedicated hosting servers with private and public cloud servers. This configuration enables several
setups that are appealing to organizations with unique requirements or financial restrictions because of the personalization it allows.
Using dedicated servers for back-end operations is one of the most common hybrid cloud architectures. The dedicated servers' power provides the
most stable storage space and communication environment, while the front end is hosted on cloud servers. For Software as a Service (SaaS) applications,
which need flexibility and scalability based on customer-facing parameters, this architecture works perfectly.
 Common factors of cloud server and dedicated server
At their root, dedicated and cloud servers both perform the same essential actions. With both strategies, the software is used to:
 Keep information preserved
 Request permission for the data
 Queries for information processed
 Return data to the person who needed it.
Cloud hosting and physical hosting are also distinguished from basic hosting or virtual private server (VPS) services by:
 Processing large quantities of data without hiccups in latency or results.
 Receiving, analyzing, and returning data to clients within business-as-usual response times.
 Protecting the integrity of stored information.
 Ensuring the efficiency of web apps.
Modern cloud-based systems and dedicated servers have the capacity to handle almost any service or program. They can be managed
with related back-end tools, so both approaches can run similar applications. The differentiation is in the results.
Matching the perfect approach to a framework will save money for organizations, increase flexibility and agility, and help to optimize the use of
resources.
Cloud server vs. Dedicated server
While analyzing performance, scalability, migration, management, services, and costing, the variations among cloud infrastructure and
dedicated servers become more evident.
 Scalability: Dedicated hosting scales differently from cloud-based servers. The dedicated model is constrained by the number of stacks
or drive bays of Direct-Attached Storage (DAS) present on the server. With an existing logical volume manager (LVM) setup,
a RAID controller, and hot-swap support, a dedicated server may be able to add a disk to an open bay. Hot swapping
is more complicated for DAS arrays.
Cloud server space, in contrast, is readily expandable (and contractible). Because the SAN sits away from the host, the cloud server does not even
have to be part of the transaction that provides more storage capacity. In the cloud world, extending capacity does not cause any
slowdown.
Beyond operational downtime, dedicated servers often require more money and resources to upgrade processors. A complete
migration to, or interconnection with, another server is necessary when a web server on a single device needs additional processing capacity.
 Performance: For a business that is looking for fast deployment and information retrieval, dedicated servers are typically the
preferred option. Because they manipulate data locally, they do not experience a wide range of delays when carrying out
operations.
This output speed is particularly essential for organizations, such as e-commerce businesses, in which every tenth of a second counts. To manage
information, cloud servers have to go through the SAN, which carries the operation through the back end of the architecture. The
request must also be routed via the hypervisor. This additional processing imposes a certain delay factor that cannot be
eliminated.
Devices on dedicated servers are dedicated exclusively to the hosted site or software. They do not need to queue requests unless all of
the computing capacity is in use at once (which is highly unlikely). For businesses with processor-intensive load-balancing operations, this
makes dedicated servers an excellent option. CPU units in a cloud system need supervision to prevent performance from decaying; without
it, the existing pool of hosts cannot accommodate requests without an additional amount of lag.
Dedicated servers are completely devoted to the hosted site or program, preventing the environment from being throttled.
Especially in comparison to the cloud world, this degree of commitment makes networking a simple operation.
Sharing the physical network in a cloud system poses a serious risk of bandwidth being throttled. If more than one tenant is
concurrently utilizing the same channel, a variety of adverse impacts can be encountered by both tenants.
 Administration and Operations: Dedicated servers require an enterprise to track its own dedicated devices. In-house workers also
need to grasp systems management more precisely, and the business needs a detailed understanding of its load profile to
keep capacity overhead within the correct range.
Scaling, updates, and repairs are a collaborative endeavor between customers and suppliers that should be strategically planned to keep
downtime to a minimum. Cloud servers are more convenient to handle: with much less effect on processes, changes can be rolled out
more quickly.
Where a dedicated environment requires planning to estimate server needs correctly, a cloud services platform requires planning to address
the possible constraints that you may encounter.
 Cost Comparison: Normally, cloud servers have a lower initial cost than dedicated servers. However, as a business scales and
needs additional capacity, cloud servers start to lose this advantage.
There are also some features that can boost the price of both cloud and dedicated servers. For example, running a cloud server
over a dedicated network interface can be very costly.
An advantage of dedicated servers is that it is possible to upgrade them with
network cards and Non-Volatile Memory Express (NVMe) drives with more storage, which can boost capacity at the cost of additional
equipment expenditure.
Usually, cloud servers are paid for on a recurring OpEx (operational expenditure) model, while physical servers are generally a CapEx
(capital expenditure) alternative. Capital purchases let you keep using the assets at no extra recurring cost, and the capital investment can
typically be written off over a period of about 3 years.
 Migration: Streamlined migration can be achieved with both dedicated and cloud hosting services. Migration involves more
preparation in a dedicated setting. To execute a smooth migration, the new approach must keep both the previous and the current setup in
view, and a full-scale plan should be made.
In most instances, the old and new implementations can run simultaneously until the new server is entirely prepared to take over.
Maintaining the existing systems as a backup is also recommended until the new approach has been sufficiently tested.
Cloud Deployment Model
Today, organizations have many exciting opportunities to reimagine, repurpose and reinvent their businesses with the cloud. The last decade has
seen even more businesses rely on it for quicker time to market, better efficiency, and scalability. It helps them achieve long-term digital goals as
part of their digital strategy.
Though the answer to which cloud model is an ideal fit for a business depends on your organization's computing and business needs. Choosing the
right one from the various types of cloud service deployment models is essential. It would ensure your business is equipped with the performance,
scalability, privacy, security, compliance & cost-effectiveness it requires. It is important to learn and explore what different deployment types can
offer - around what particular problems it can solve.
Read on as we cover the various cloud computing deployment and service models to help discover the best choice for your business.
What is a Cloud Deployment Model?
It works as your virtual computing environment with a choice of deployment model depending on how much data you want to store and who has
access to the Infrastructure.
Most cloud hubs have tens of thousands of servers and storage devices to enable fast loading. It is often possible to choose a geographic area to
put the data "closer" to users. Thus, deployment models for cloud computing are categorized based on their location. To know which model would
best fit the requirements of your organization, let us first learn about the various types.
 Public Cloud
The name says it all. It is accessible to the public. Public deployment models in the cloud are perfect for organizations with growing and fluctuating
demands. It also makes a great choice for companies with low-security concerns. Thus, you pay a cloud service provider for networking services,
compute virtualization & storage available on the public internet. It is also a great delivery model for teams with development and testing needs. Its
configuration and deployment are quick and easy, making it an ideal choice for test environments.
Benefits of Public Cloud
 Minimal Investment - As a pay-per-use service, there is no large upfront cost, and it is ideal for businesses that need quick access to
resources
 No Hardware Setup - The cloud service providers fully fund the entire Infrastructure
 No Infrastructure Management - This does not require an in-house team to utilize the public cloud.
Limitations of Public Cloud
 Data Security and Privacy Concerns - Since it is accessible to all, it does not fully protect against cyber-attacks and could lead to
vulnerabilities.
 Reliability Issues - Since the same server network is open to a wide range of users, it can lead to malfunction and outages
 Service/License Limitation - While there are many resources you can exchange with tenants, there is a usage cap.
 Private Cloud
Now that you understand what the public cloud can offer, you are of course keen to know what a private cloud can do. Companies that look
for greater control over data & resources will find the private cloud a more suitable choice.
It means that it will be integrated with your data center and managed by your IT team. Alternatively, you can also choose to host it externally. The
private cloud offers bigger opportunities that help meet specific organizations' requirements when it comes to customization. It's also a wise choice
for mission-critical processes that may have frequently changing requirements.
Benefits of Private Cloud
 Data Privacy - It is ideal for storing corporate data where only authorized personnel get access
 Security - Segmentation of resources within the same Infrastructure can help with better access and higher levels of security.
 Supports Legacy Systems - This model supports legacy systems that cannot access the public cloud.
Limitations of Private Cloud
 Higher Cost - With the benefits you get, the investment will also be larger than the public cloud. Here, you will pay for software,
hardware, and resources for staff and training.
 Fixed Scalability - The hardware you choose will constrain how far and in which direction you can scale
 High Maintenance - Since it is managed in-house, the maintenance costs also increase.
 Community Cloud
The community cloud operates in a way that is similar to the public cloud. There's just one difference - it allows access to only a specific set of
users who share common objectives and use cases. This type of deployment model of cloud computing is managed and hosted internally or by a
third-party vendor. However, you can also choose a combination of all three.
Benefits of Community Cloud
 Smaller Investment - A community cloud is much cheaper than the private & public cloud and provides great performance
 Setup Benefits - The protocols and configuration of a community cloud must align with industry standards, allowing customers to
work much more efficiently.
Limitations of Community Cloud
 Shared Resources - Due to restricted bandwidth and storage capacity, community resources often pose challenges.
 Not as Popular - Since this is a recently introduced model, it is not that popular or available across industries
 Hybrid Cloud
As the name suggests, a hybrid cloud is a combination of two or more cloud architectures. While each model in the hybrid cloud functions
differently, it is all part of the same architecture. Further, as part of this deployment of the cloud computing model, the internal or external
providers can offer resources.
Let's understand the hybrid model better. A company with critical data will prefer storing it on a private cloud, while less sensitive data can be stored
on a public cloud. The hybrid cloud is also frequently used for 'cloud bursting': suppose an organization runs an application on-premises;
under heavy load, the application can burst into the public cloud.
Benefits of Hybrid Cloud
 Cost-Effectiveness - The overall cost of a hybrid solution decreases since it majorly uses the public cloud to store data.
 Security - Since data is properly segmented, the chances of data theft from attackers are significantly reduced.
 Flexibility - With higher levels of flexibility, businesses can create custom solutions that fit their exact requirements
Limitations of Hybrid Cloud
 Complexity - It is complex setting up a hybrid cloud since it needs to integrate two or more cloud architectures
 Specific Use Case - This model makes more sense for organizations that have multiple use cases or need to separate critical and
sensitive data
A Comparative Analysis of Cloud Deployment Models
With the below table, we have attempted to analyze the key models with an overview of what each one can do for you:
Important Factors to Consider | Public | Private | Community | Hybrid
Setup and ease of use | Easy | Requires professional IT team | Requires professional IT team | Requires professional IT team
Data Security and Privacy | Low | High | Very High | High
Scalability and flexibility | High | High | Fixed requirements | High
Cost-Effectiveness | Most affordable | Most expensive | Cost is distributed among members | Cheaper than private but more expensive than public
Reliability | Low | High | Higher | High
Making the Right Choice for Cloud Deployment Models
There is no one-size-fits-all approach to picking a cloud deployment model. Instead, organizations must select a model on a workload-by-
workload basis. Start by assessing your needs and consider what type of support your application requires. Here are a few factors you can consider
before making the call:
 Ease of Use - How savvy and trained are your resources? Do you have the time and the money to put them through training?
 Cost - How much are you willing to spend on a deployment model? How much can you pay upfront on subscription, maintenance,
updates, and more?
 Scalability - What is your current activity status? Does your system run into high demand?
 Compliance - Are there any specific laws or regulations in your country that can impact the implementation? What are the industry
standards that you must adhere to?
 Privacy - Have you set strict privacy rules for the data you gather?
Each cloud deployment model has a unique offering and can immensely add value to your business. For small to medium-sized businesses, a public
cloud is an ideal model to start with. And as your requirements change, you can switch over to a different deployment model. An effective strategy
can be designed around your needs using the cloud deployment models mentioned above.
The key is to enable hypervisor virtualization. In its simplest form, a hypervisor is specialized firmware or software, or both, installed on a single
piece of hardware that allows you to host multiple virtual machines. It lets physical hardware be shared across multiple virtual machines. The
computer on which the hypervisor runs one or more virtual machines is called the host machine.
Virtual machines are called guest machines. The hypervisor allows the physical host machine to run various guest machines. It helps to get
maximum benefit from computing resources such as memory, network bandwidth and CPU cycles.
Advantages of Hypervisor:
Although virtual machines operate on the same physical hardware, they are isolated from each other. It also denotes that if one virtual machine
undergoes a crash, error, or malware attack, it does not affect other virtual machines.
Another advantage is that virtual machines are very mobile because they do not depend on the underlying hardware. Since they are not connected
to physical hardware, switching between local or remote virtualized servers becomes much easier than with traditional applications.
Types of Hypervisors in Cloud Computing:
There are two main types of hypervisors in cloud computing.
1. Type I Hypervisor: A Type I hypervisor operates directly on the host's hardware to monitor the hardware and the guest virtual machines,
and is referred to as bare metal. Typically, it does not require an operating system to be installed ahead of time;
instead, you can install it directly on the hardware. This type of hypervisor is powerful and requires a lot of expertise to run well. In
addition, Type I hypervisors are more complex and have specific hardware requirements to run adequately. Because of this, they are mostly
chosen for IT operations and data center computing.
Examples of Type I hypervisors include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V, and
VMware's ESX/ESXi.
2. Type II Hypervisor: It is also called a hosted hypervisor because it is installed on an existing operating system, and it is less
capable of running complex virtual tasks. People use it for basic development, testing, and simulation (a small scripting sketch
follows this list).
If a security flaw is found inside the host OS, it can potentially compromise all running virtual machines. This is why Type II hypervisors
are not used for data center computing; they are designed for end-user systems where security is less of a concern. For example,
developers can use a Type II hypervisor to launch virtual machines to test software products prior to their release.
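As a concrete example of that developer workflow, the sketch below drives VirtualBox, a Type II (hosted) hypervisor, from a Python script via its VBoxManage command-line tool. It assumes VirtualBox is installed and that a virtual machine named "test-vm" already exists; the VM name is a placeholder.

import subprocess

# List the VMs registered with this VirtualBox installation.
result = subprocess.run(["VBoxManage", "list", "vms"], capture_output=True, text=True)
print(result.stdout)

# Start an existing VM (placeholder name "test-vm") without opening a GUI window.
subprocess.run(["VBoxManage", "startvm", "test-vm", "--type", "headless"], check=True)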
Hypervisors, their use and Importance:
A hypervisor is a process or a function to help admins isolate operating systems and applications from the underlying hardware.
Cloud computing uses it the most as it allows multiple guest operating systems (also known as virtual machines or VMs) to run simultaneously on a
single host system. Administrators can use the resources efficiently by dividing computing resources (RAM, CPU, etc.) between multiple VMs.
A hypervisor is a key element in virtualization, which has helped organizations achieve higher cost savings, improve their provisioning and
deployment speeds, and ensure higher resilience with reduced downtimes.
The Evolution of Hypervisors:
The use of hypervisors dates back to the 1960s, when IBM deployed them on time-sharing systems and took advantage of them to test new
operating systems and hardware. During the 1960s, virtualization techniques were used extensively by developers wishing to test their programs
without affecting the main production system.
The mid-2000s saw another significant leap forward as Unix, Linux and others experimented with virtualization. With advances in processing power,
companies built powerful machines capable of handling multiple workloads. In 2005, CPU vendors began offering hardware virtualization for their
x86-based products, making hypervisors mainstream.
Why use a hypervisor?
Now that we have answered "what is a hypervisor", it will be useful to explore some of their important applications to better understand the role of
hypervisors in virtualized environments. Hypervisors simplify server management because VMs are independent of the host environment. In other
words, the operation of one VM does not affect other VMs or the underlying hardware.
Therefore, even when one VM crashes, others can continue to work without affecting performance. This allows administrators to move VMs
between servers, which is a useful capability for workload balancing. Teams seamlessly migrate VMs from one machine to another, and they can
use this feature for fail-overs. In addition, a hypervisor is useful for running and testing programs in different operating systems.
However, the most important use of hypervisors is consolidating servers on the cloud, and data centers require server consolidation to reduce
server sprawl. Virtualization practices and hypervisors have become popular because they are highly effective in solving the problem of
underutilized servers.
Virtualization enables administrators to easily take advantage of untapped hardware capacity to run multiple workloads at once, rather than running
separate workloads on separate physical servers. They can match their workloads with appropriate hardware resources, meeting their time, cost, and
service-level requirements.
Need of a Virtualization Management Tool:
Today, most enterprises use hypervisors to simplify server management, and it is the backbone of all cloud services. While virtualization has its
advantages, IT teams are often less equipped to manage a complex ecosystem of hypervisors from multiple vendors. It is not always easy to keep
track of different types of hypervisors and to accurately monitor the performance of VMs. In addition, the ease of provisioning increases the
number of applications and operating systems, increasing the routine maintenance, security and compliance burden.
In addition, VMs may still require IT support related to provisioning, de-provisioning and auditing as per individual security and compliance
mandates. Troubleshooting often involves skimming through multiple product support pages. As organizations grow, the lack of access to proper
documentation and technical support can make the implementation and management of hypervisors difficult. Eventually, controlling virtual machine
sprawl becomes a significant challenge.
Different groups within an organization often deploy the same workload to different clouds, increasing inefficiency and complicating data
management. IT administrators must employ virtualization management tools to address the above challenges and manage their resources
efficiently.
Virtualization management tools provide a holistic view of the availability of all VMs, their states (running, stopped, etc.), and host servers. These
tools also help in performing basic maintenance, provisioning, de-provisioning and migration of VMs.
Key Players in Virtualization Management:
There are three broad categories of virtualization management tools available in the market:
 Proprietary tools (with varying degrees of cross-platform support): VMware vCenter, Microsoft SCVMM
 Open-source tools: Citrix XenCenter
 Third-party commercial tools: Dell Foglight, SolarWinds Virtualization Manager, Splunk Virtualization Monitoring System.
Cloud computing is an infrastructure and software model that enables ubiquitous access to shared storage pools, networks, servers and
applications.
It allows data processing on a privately owned cloud or on a third-party server. This creates maximum speed and reliability. But the biggest
advantages are its ease of installation, low maintenance and scalability. In this way, it grows with your needs.
IaaS and SaaS cloud computing has been skyrocketing since 2009, and it's all around us now. You're probably reading this on the cloud right
now.
For some perspective on how important cloud storage and computing are to our daily lives, here are 8 real-world examples of cloud computing:
 Examples of Cloud Storage
Ex: Dropbox, Gmail, Facebook
The number of online cloud storage providers is increasing every day, and each is competing on the amount of storage that can be provided to the
customer.
Right now, Dropbox is the clear leader in streamlined cloud storage, allowing users to access files through its application or website on any
device with up to 1 terabyte of storage.
Gmail, Google's email service provider, on the other hand, offers unlimited storage on the cloud. Gmail has revolutionized the way we send email
and is largely responsible for the increasing use of email across the world.
Facebook is a mixture of both in that it can store an infinite amount of information, pictures and videos on your profile. Then they can be easily
accessed on multiple devices. Facebook goes a step further with its Messenger app, which allows profiles to exchange data.
 Examples of Marketing Cloud Platforms
Ex: Maropost for Marketing, Hubspot, Adobe Marketing Cloud
Marketing Cloud is an end-to-end digital marketing platform for customers to manage contacts and target leads. Maropost Marketing Cloud combines easy-to-use marketing automation with hyper-targeting of leads, and its advanced email delivery capabilities help make sure email actually arrives in the inbox.
In general, marketing clouds fill the need for personalization, which is important in a market that demands messaging be "more human". Communicating that your brand is here to help will make all the difference in closing a sale.
 Examples of Cloud Computing in Education
Ex: SlideRocket, Ratatype, Amazon Web Services
Education is rapidly adopting advanced technology, just as students already have. Therefore, to modernize classrooms, teachers have introduced e-learning software like SlideRocket.
SlideRocket is a platform that students can use to create and submit presentations, and students can also present them over the cloud via web
conferencing. Another tool teachers use is RataType, which helps students learn to type faster and offers online typing tests to track their progress.
Amazon's AWS Cloud for K12 and Primary Education is a virtual desktop infrastructure (VDI) solution for school administration. The cloud allows
instructors and students to access teaching and learning software on multiple devices.
 Examples of Cloud Computing in Healthcare
Ex: ClearDATA, Dell's Secure Healthcare Cloud, IBM Cloud
Cloud computing allows nurses, physicians and administrators to quickly share information from anywhere. It also saves on costs by allowing large
data files to be shared quickly for maximum convenience. This is a huge boost to efficiency.
Ultimately, cloud technology ensures that patients receive the best possible care without unnecessary delay. The patient's status can also be
updated in seconds through remote conferencing.
Many modern hospitals have not yet implemented cloud computing, but they are expected to do so soon.
 Examples of Cloud Computing for Government
Uses: IT consolidation, shared services, citizen services
The US government and military were early adopters of cloud computing. The U.S. Federal Cloud Computing Strategy was introduced under the Obama administration to accelerate cloud adoption across departments.
According to the strategy: "The focus will shift from the technology itself to the core competencies and mission of the agency."
The US government's cloud includes social, mobile and analytics technologies. However, these deployments must adhere to strict compliance and security measures (FIPS, FISMA, and FedRAMP) to protect against cyber threats both at home and abroad.
Cloud computing is the answer for any business struggling to stay organized, increase ROI, or grow their email lists. Maropost has the digital
marketing solutions you need to transform your business.
Cloud computing is becoming popular day by day. Continuous business expansion and growth requires huge computational power and large-scale
data storage systems. Cloud computing can help organizations expand and securely move data from physical locations to the 'cloud' that can be
accessed anywhere.
Cloud computing has many features that make it one of the fastest growing industries at present. The flexibility offered by cloud services in the
form of their growing set of tools and technologies has accelerated its deployment across industries. This blog will tell you about the essential
features of cloud computing.
1. Resources Pooling: Resource pooling is one of the essential features of cloud computing. It means that a cloud service provider can share resources among multiple clients, providing each with a different set of services according to their needs. It is a multi-client strategy that can be applied to data storage, processing and bandwidth-delivered services. The process of allocating resources in real time does not conflict with the client's experience.
2. On-Demand Self-Service: This is one of the important and essential features of cloud computing. It enables the client to continuously monitor server uptime, capabilities and allocated network storage, and to adjust computing capabilities according to their needs.
3. Easy Maintenance: This is one of the best cloud features. Servers are easily maintained, and downtime is minimal or sometimes zero. Cloud-powered resources often undergo several updates to optimize their capabilities and potential. Updates are compatible with existing devices and perform faster than previous versions.
4. Scalability And Rapid Elasticity: A key feature and advantage of cloud computing is its rapid scalability. This cloud feature enables cost-
effective handling of workloads that require a large number of servers but only for a short period. Many customers have workloads that can be run
very cost-effectively due to the rapid scalability of cloud computing.
5. Economical: This cloud feature helps reduce the IT expenditure of organizations. In cloud computing, clients pay the provider only for the space they use, with no hidden or additional charges. The service is economical, and often some amount of space is allocated for free.
6. Measured And Reporting Service: Reporting services are among the many cloud features that make the cloud a good choice for organizations. The measurement and reporting service is helpful for both cloud providers and their customers: it enables both sides to monitor and report which services have been used and for what purposes. This helps in monitoring billing and ensuring optimum utilization of resources (a small pay-per-use billing sketch follows this list).
7. Security: Data security is one of the best features of cloud computing. Cloud services make a copy of the stored data to prevent any kind of
data loss. If one server loses data by any chance, the copied version is restored from the other server. This feature comes in handy when multiple
users are working on a particular file in real-time, and one file suddenly gets corrupted.
8. Automation: Automation is an essential feature of cloud computing. The ability of cloud computing to automatically install, configure and
maintain a cloud service is known as automation in cloud computing. In simple words, it is the process of making the most of the technology and
minimizing the manual effort. However, achieving automation in a cloud ecosystem is not that easy. This requires the installation and deployment
of virtual machines, servers, and large storage. On successful deployment, these resources also require constant maintenance.
9. Resilience: Resilience in cloud computing means the ability of a service to quickly recover from any disruption. The resilience of a cloud is
measured by how fast its servers, databases and network systems restart and recover from any loss or damage. Availability is another key feature
of cloud computing. Since cloud services can be accessed remotely, there are no geographic restrictions or limits on the use of cloud resources.
10. Large Network Access: A big part of the cloud's character is its ubiquity. The client can access cloud data, or transfer data to the cloud, from any location with a device and an internet connection. These capabilities are available everywhere in the organization and are achieved with the help of the internet. Cloud providers deliver that large network access by monitoring and guaranteeing measurements that reflect how clients access cloud resources and data: latency, access times, data throughput, and more.
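As promised above, here is a small illustrative sketch of the pay-per-use idea behind a measured service: usage figures are metered and multiplied by unit prices to produce the bill. The prices and usage numbers are invented for the example.

```python
# Illustrative sketch of a measured (pay-per-use) service: meter resource usage
# and compute the bill from hypothetical unit prices.
UNIT_PRICES = {"vm_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}

def monthly_bill(usage: dict) -> float:
    """Sum usage * unit price for every metered resource."""
    return sum(UNIT_PRICES[item] * amount for item, amount in usage.items())

usage_report = {"vm_hours": 720, "storage_gb_month": 500, "egress_gb": 40}
print(f"Total due: ${monthly_bill(usage_report):.2f}")  # 720*0.05 + 500*0.02 + 40*0.09 = 49.60
```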
Cloud services have many benefits, so let's take a closer look at some of the most important ones.
 Flexibility: Cloud computing lets users access files using web-enabled devices such as smartphones and laptops. The ability to
simultaneously share documents and other files over the Internet can facilitate collaboration between employees. Cloud services are very
easily scalable, so your IT needs can be increased or decreased depending on the needs of your business.
 Work from anywhere: Users of cloud systems can work from any location as long as they have an Internet connection. Most of the
major cloud services offer mobile applications, so there are no restrictions on what type of device you're using.
It allows users to be more productive by adjusting the system to their work schedules.
 Cost savings: Using web-based services eliminates the need for large expenditures on implementing and maintaining the hardware.
Cloud services work on a pay-as-you-go subscription model.
 Automatic updates: With cloud computing, your servers are off-premises and are the responsibility of the service provider. Providers
update systems automatically, including security updates. This saves your business time and money from doing it yourself, which could
be better spent focusing on other aspects of your organization.
 Disaster recovery: Cloud-based backup and recovery ensure that your data is secure. Implementing robust disaster recovery was once
a problem for small businesses, but cloud solutions now provide these organizations with cost-effective options and the expertise they
need. Cloud services save time, avoid large upfront investments, and bring third-party expertise to your company.
Multitenancy is a type of software architecture where a single software instance can serve multiple distinct user groups. It means that multiple customers of a cloud vendor use the same computing resources. Although they share the same computing resources, the data of each cloud customer is kept separate and secure. It is a very important concept in cloud computing.
Multitenancy is also a shared host where the same resources are divided among different customers in cloud computing.
For example, multitenancy works much like a bank. Multiple people can store their money in the same bank, but every customer's assets are separate. One customer cannot access another customer's money or account, and different customers are not aware of each other's account balances and details.
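A conceptual sketch of the bank analogy, in Python: one shared store serves several tenants, but every read and write is scoped to a tenant identifier. The class and tenant names are purely illustrative, not any vendor's actual implementation.

```python
# Conceptual sketch of multitenancy: one shared store, tenant-scoped access.
class SharedStore:
    def __init__(self):
        self._data = {}                              # one physical store for all tenants

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # Every lookup is scoped to the caller's tenant_id, so one tenant's keys
        # never leak into another tenant's view of the store.
        return self._data.get(tenant_id, {}).get(key)

store = SharedStore()
store.put("alice_corp", "balance", 1200)
store.put("bob_ltd", "balance", 900)
print(store.get("alice_corp", "balance"))   # 1200 (alice_corp sees only its own data)
print(store.get("alice_corp", "secret"))    # None (no cross-tenant visibility)
```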
Advantages of Multitenancy :
 The use of available resources is maximized by sharing them among customers.
 The customer's cost of physical hardware is reduced; fewer physical devices also mean savings in power consumption and cooling.
 It saves the vendor's cost as well, since it would be expensive for a cloud vendor to provide separate physical infrastructure to every individual customer.
Disadvantages of Multitenancy :
 Data is stored on third-party infrastructure, which reduces direct control over data security and can leave it more exposed.
 Unauthorized access could cause damage to the data of multiple tenants.
Each tenant's data is not accessible to all other tenants within the cloud infrastructure and can only be accessed with the permission of the cloud
provider.
In a private cloud, customers, or tenants, can be different individuals or groups within the same company. In a public cloud, completely different
organizations can securely share their server space. Most public cloud providers use a multi-tenancy model, which allows them to run servers with
single instances, which is less expensive and helps streamline updates.
Multitenant Cloud vs. Single-Tenant Cloud
In a single-tenant cloud, only one client is hosted on the server and provided access to it. Due to the multi-tenancy architecture hosting multiple
clients on the same server, it is important to understand the security and performance of the provider fully. Single-tenant clouds give customers
greater control over managing data, storage, security, and performance.
Benefits of multitenant architecture
There is a whole range of advantages to multitenancy, which are evident in the popularity of cloud computing.
 Multitenancy can save money: Computing is cheap to scale, and multi-tenancy allows resources to be allocated coherently and
efficiently, ultimately saving on operating costs. For an individual user, paying for access to a cloud service or SaaS application is often
more cost-effective than running single-tenant hardware and software.
 Multitenancy enables flexibility: If you have invested in your own hardware and software, it can reach capacity during times of high
demand or sit idle during slow periods. A multitenant cloud, on the other hand, can allocate a pool of resources to the users who need it
as their needs go up and down. As a public cloud provider's customer, you can access additional capacity when needed and stop paying for it
when you don't.
 Multi-tenancy can be more efficient: Multitenancy reduces the need for individual users to manage the infrastructure and handle
updates and maintenance. Individual tenants can rely on a central cloud provider rather than their teams to handle those routine chores.
Example Of Multi-Tenancy:
Multitenant clouds can be compared to the structure of an apartment building. Each resident has access to their own apartment under the building's overall agreement, and only authorized persons may enter specific units. However, the entire building shares resources such as water, electricity, and common areas.
It is similar to a multitenant cloud in that the provider sets broad quotas, rules, and performance expectations for customers, but each customer
has private access to their information.
Multitenancy can describe a hardware or software architecture in which multiple systems, applications, or data from different enterprises are
hosted on the same physical hardware. It differs from single-tenancy, in which a server runs a single instance of the operating system and
application. In the cloud world, a multitenant cloud architecture enables customers ("tenants") to share computing resources in a public or private
cloud.
Multitenancy is a common feature of purpose-built, cloud-delivered services, as it allows customers to efficiently share resources while safely
scaling up to meet increasing demand. Even though they share resources, cloud customers are unaware of each other, and their data is kept
separate.
What does multitenant mean for the cloud?
Cloud providers offer multi-tenancy as a means of sharing the use of computing resources. However, this shared use of resources should not be
confused with virtualization, a closely related concept. In a multitenant environment, multiple clients share the same application, in the same
operating environment, on the same hardware, with the same storage system. In virtualization, unlike multitenancy, each application runs on a
separate virtual machine with its operating system.
Each resident has authorized access to their apartment, yet all residents share water, electricity, and common areas. Similarly, in a multitenant
cloud, the provider sets broad terms and performance expectations, but individual customers have private access to their information.
The multitenant design of a cloud service can dramatically impact the delivery of applications and services. It enables unprecedented reliability, availability, and scalability while enabling cost savings, flexibility, and security for IT organizations.
Multi-tenancy, Security, and Zscaler:
The primary advantage of multitenant architectures is that organizations can easily onboard users. There's no difference between onboarding 10,000 users from one company or 10 users from a thousand companies with a multitenant cloud. This type of platform easily scales to handle increasing demand, whereas other architectures cannot scale as easily.
From a security perspective, a multitenant architecture enables policies to be implemented globally across the cloud. That's why Zscaler users can roam anywhere, knowing that their traffic will be routed to the nearest Zscaler data center (one of 150 worldwide) and their policies will follow. Because of this capability, an organization with a thousand users can have the same security protections as a much larger organization with tens or hundreds of thousands of employees.
Cloud-native SASE architectures will almost always be multitenant, with multiple customers sharing the underlying data plane.
The future of network security is in the cloud:
Corporate networks now move beyond the traditional "security perimeter" to the Internet. The only way to provide adequate security to users -
regardless of where they connect - is by moving security and access control to the cloud.
Zscaler leverages multi-tenancy to scale to increasing demands and spikes in traffic without impacting performance. Scalability lets us easily scan
every byte of data coming and going over all ports and protocols, including SSL, without negatively impacting the user experience. Another advantage of multitenancy is that all customers can be protected immediately, as soon as a threat is detected anywhere on the Zscaler Cloud.
Zscaler Cloud is always updated with the latest security updates to protect customers from rapidly evolving malware. With thousands of new phishing sites appearing every day, traditional appliances cannot keep up. Zscaler also reduces costs and eliminates the complexity of patching, updating, and maintaining hardware and software.
The multitenant environment in Linux:
Anyone setting up a multitenant environment will be faced with the option of isolating environments using virtual machines (VMs) or containers.
With VMs, a hypervisor spins up guest machines, each of which has its own operating system, applications, and dependencies. The hypervisor also ensures that users are isolated from each other.
Compared to VMs, containers offer a more lightweight, flexible, and easy-to-scale model. Containers simplify multi-tenancy deployment by running multiple applications on a single host, using the kernel and the container runtime to spin up each container. Unlike VMs, which each contain their own kernel, applications running in containers share a kernel, even across multiple tenants.
In Linux, namespaces make it possible for multiple containers to use the same resources simultaneously without conflict. Securing a container is the same as securing any running process.
When using Kubernetes for container orchestration, it is possible to set up a multitenant environment using a single Kubernetes cluster: tenants are segregated into their own namespaces, and policies are created to enforce tenant segregation.
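A minimal sketch of the namespace-per-tenant approach using the official Kubernetes Python client; it assumes a reachable cluster and a local kubeconfig, and the tenant names and labels are placeholders.

```python
# Minimal sketch: one Kubernetes namespace per tenant with the official client
# (`pip install kubernetes`). Assumes a kubeconfig is available.
from kubernetes import client, config

def create_tenant_namespace(tenant: str):
    config.load_kube_config()                        # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(name=f"tenant-{tenant}",
                                     labels={"tenant": tenant})
    )
    v1.create_namespace(ns)                          # tenants are now segregated by namespace

if __name__ == "__main__":
    for t in ("acme", "globex"):                     # illustrative tenant names
        create_tenant_namespace(t)
```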
Multitenant Database:
When choosing a database for multitenant applications, developers have to balance customer needs or desire for data isolation and quick and
economical solutions in response to growth or spikes in application traffic.
To ensure complete isolation, the developer can allocate a separate database instance for each tenant; to ensure maximum scalability, on the other hand, the developer can make all tenants share the same database instance. Most developers, however, opt for a data store such as PostgreSQL, which enables each tenant to have their own schema within a single database instance (sometimes called "soft isolation") and so provides the best of both worlds.
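A minimal sketch of this "soft isolation" pattern with PostgreSQL and psycopg2: each tenant gets its own schema inside one database instance, and search_path scopes subsequent queries to that tenant. Connection parameters and the table layout are assumptions for the example.

```python
# Minimal sketch of schema-per-tenant ("soft isolation") in PostgreSQL via psycopg2.
import psycopg2

def query_for_tenant(tenant_schema: str):
    conn = psycopg2.connect(dbname="appdb", user="app", password="secret", host="localhost")
    try:
        with conn, conn.cursor() as cur:
            cur.execute(f'CREATE SCHEMA IF NOT EXISTS "{tenant_schema}"')
            cur.execute(f'SET search_path TO "{tenant_schema}"')  # later queries hit this tenant's schema only
            cur.execute("CREATE TABLE IF NOT EXISTS orders (id serial PRIMARY KEY, total numeric)")
            cur.execute("SELECT count(*) FROM orders")
            print(tenant_schema, cur.fetchone()[0])
    finally:
        conn.close()

query_for_tenant("tenant_acme")   # illustrative tenant schema name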
What about "hybrid" security solutions?
Organizations are increasingly using cloud-based apps, such as Salesforce, Box, and Office 365 when migrating to infrastructure services such as
Microsoft Azure and Amazon Web Services (AWS). Therefore, many businesses realize that it makes more sense to secure the traffic in the cloud.
In response, older vendors that relied heavily on hardware appliance sales promoted so-called "hybrid solutions", in which data center security is handled by on-premises appliances while mobile or branch security is handled by a similar security stack hosted in a cloud environment.
This hybrid strategy complicates, rather than simplifies, enterprise security: cloud users and administrators get none of the benefits of a real cloud service (speed, scale, global visibility, and threat intelligence) that only a multitenant global architecture can provide.
Grid computing is the use of widely distributed computing resources to reach a common goal. A computational grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing differs from conventional high-performance computing systems such as cluster computing in that each node is set to perform a different task or application. Grid computers are also more heterogeneous and geographically dispersed than cluster computers, and they are not physically coupled. However, a single grid can be dedicated to a particular application, and grids are commonly used for a variety of purposes. Grids are often built with general-purpose grid middleware software libraries, and grid sizes can be very large.
Grids are a form of decentralized network computing in which a "super virtual computer" is composed of many loosely coupled machines working together to perform very large tasks. Distributed or grid computing is a type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network connectivity, and so on) attached to a network (private or public) by a conventional network connection, such as Ethernet. This contrasts with the traditional notion of a supercomputer, which has many processors connected by a local high-speed bus. The technique has been used in commercial enterprises for applications ranging from drug discovery, market forecasting, and seismic analysis to back-office data processing in support of e-commerce and web services. It has also been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing.
Grid computing combines machines from multiple administrative domains to reach a common goal, such as completing a single task, and may then disappear just as quickly. Grids can be limited to a network of computer workstations within a firm, or they can be open collaborations involving many organizations and networks. "A limited grid can also be referred to as intra-nodes cooperation, while a bigger, broader grid can be referred to as inter-nodes cooperation."
Managing grid applications can be difficult, particularly when dealing with the data flows among distributed computing resources. A grid workflow system is a specialized form of workflow management software built expressly for composing and executing a series of computational or data-manipulation steps, or a workflow, in a grid setting.
History of Grid Computing
In the early 1990s, the phrase "grid computing" was used as a metaphor for making computing power as accessible as an electric power grid.
 When Ian Foster and Carl Kesselman published their landmark work, "The Grid: Blueprint for a New Computing Infrastructure" (1999), the power grid metaphor for accessible computing quickly became canonical. The idea of computing as a utility (1961), similar to the telephone network, predated this by decades.
 Ian Foster and Steve Tuecke of the University of Chicago and Carl Kesselman of the University of Southern California's Information Sciences Institute brought together the concepts of the grid (which included ideas from distributed computing, object-oriented programming, and web services). The three are popularly regarded as the "fathers of the grid" because they led the initiative to create the Globus Toolkit. The toolkit includes resource and storage management, security provisioning, data movement, monitoring, and a toolset for building additional services on the same infrastructure, such as agreement negotiation, notification mechanisms, trigger services, and information aggregation.
 Although the Globus Toolkit remains the de facto standard for building grid systems, a number of other tools have been developed that address some subset of the capabilities required to establish a worldwide or enterprise grid.
 The phrase "cloud computing" became popular in 2007. It is conceptually similar to the canonical Foster definition of grid computing (in which computing resources are consumed as electricity is from the power grid) and to earlier utility computing. Grid computing is frequently (but not always) linked to the delivery of cloud computing systems, as demonstrated by 3tera's AppLogic.
In summary, "distributed" or "grid" computing relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, and so on) attached to a network (private, community, or the public internet) through a conventional network interface, producing commodity hardware, in contrast to the lower efficiency of designing and building a small number of custom supercomputers. The principal performance disadvantage is the lack of high-speed connections between the many CPUs and their local storage.
Background of Grid Computing
In the early 1990s, the phrase "grid computing" was used as a metaphor for making computing power as accessible as an electric power grid. When Ian Foster and Carl Kesselman released their landmark study, "The Grid: Blueprint for a New Computing Infrastructure" (1999), the power grid metaphor for ubiquitous computing quickly became classic. The idea of computing as a utility (1961), similar to the telephone system, predated this by decades.
Distributed.net and SETI@home popularised CPU scavenging and volunteer computing in 1997 and 1999, respectively, harnessing the power of networked PCs worldwide to tackle CPU-intensive research problems.
Ian Foster and Steve Tuecke of the University of Chicago and Carl Kesselman of the University of Southern California's Information Sciences Institute gathered together the concepts of the grid (which included those from distributed computing, object-oriented programming, and web services). The three are popularly considered the "fathers of the grid" because they led the initiative to establish the Globus Toolkit. While the Globus Toolkit remains the de facto standard for developing grid systems, several alternative tools have been developed to address some of the capabilities required to establish a worldwide or enterprise grid. Storage management, security provisioning, data movement, monitoring, and a toolset for building additional services on the same infrastructure, such as agreement negotiation, notification mechanisms, trigger services, and information aggregation, are all included in the toolkit.
The phrase "cloud computing" became prominent in 2007. It is conceptually related to the classic Foster definition of grid computing (in which computing resources are consumed as electricity is from the power grid) and to earlier utility computing. Grid computing is frequently (but not always) linked to the delivery of cloud computing systems, as demonstrated by 3tera's AppLogic technology.
Comparison between Grid and Supercomputers:
In summary, "dispersed" or "grid" computer processing depends on comprehensive desktops (with inbuilt processors, backup, power supply units,
networking devices, and so on) connected to the network (private, public, or the internet) via a traditional access point, resulting in embedded
systems, as opposed to the reduced energy of designing and building a limited handful of modified powerful computers. The relevant performance
drawback is the lack of high-speed links between the multiple CPUs and regional storage facilities.
This configuration is well-suited to situations in which various concurrent calculations can be performed separately with no need for error values to
be communicated among processors. Due to the low demand for connections among units compared to the power of the open network, the high-
end scalability of geographically diverse grids is often beneficial. There are various coding and MC variations as well.
Writing programs that can function in the context of a supercomputer, which may have a specialized version of windows or need the application to
solve parallelism concerns, can be expensive and challenging. If a task can be suitably distributed, a "thin" shell of "grid" architecture can enable
traditional, independent code to be executed on numerous machines, each solving a different component of the same issue. This reduces issues
caused by several versions of the same code operating in the similar shared processing and disk area at the same time, allowing for writing and
debugging on a single traditional system.
Differences and Architectural Constraints:
Grids can be formed from computing resources belonging to one or more individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.
One drawback of this feature is that the computers actually performing the calculations may not be entirely trustworthy. As a consequence, system designers must introduce measures to prevent malfunctioning or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning or malicious nodes.
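A conceptual sketch of the safeguard just described: the same work unit is dispatched to two workers and the answer is accepted only when both agree. The worker functions are stand-ins for remote grid nodes, not a real scheduler.

```python
# Conceptual sketch: redundant assignment of work units with result verification.
import random

def honest_worker(x):
    return x * x

def flaky_worker(x):
    # Occasionally returns a wrong answer, like a faulty or malicious node.
    return x * x if random.random() > 0.1 else x * x + 1

def run_with_verification(work_unit):
    a, b = honest_worker(work_unit), flaky_worker(work_unit)
    if a == b:
        return a                      # results agree: accept the answer
    raise RuntimeError(f"Disagreement on work unit {work_unit}; re-dispatching")

for unit in range(5):
    try:
        print(unit, run_with_verification(unit))
    except RuntimeError as err:
        print(err)
```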
Because there is no centralized control over the hardware, there is no way to guarantee that nodes will not drop out of the network at arbitrary times. Some nodes (such as laptops or dial-up internet subscribers) may be available for computation but not for network communication for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and by reassigning work units when a node fails to report its results within the expected time frame.
Another set of social-compatibility issues in the early days of grid computing stemmed from grid researchers' desire to carry their technology beyond the original field of high-performance computing and across disciplinary boundaries into new domains such as high-energy physics.
The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated cluster, onto idle workstations within the developing organization, or onto an open external network of volunteers or contractors. In many cases, the participating nodes must trust the central system not to abuse the access being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures such as virtual machines to reduce the amount of trust that "client" nodes must place in the central system.
Public systems, or those crossing administrative domains (including different departments within the same organization), often result in the need to run on heterogeneous systems using different operating systems and hardware architectures. There is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this trade-off, though potentially at the expense of high performance on any given node (due to run-time interpretation or a lack of optimization for the particular platform). Various middleware projects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid, or to set up new grids. BOINC is a common platform for research projects seeking public volunteers; a selection of others is listed later in the article.
In fact, middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas must be considered, and these may or may not be middleware-independent. Example areas include SLA management, trust and security, virtual organization management, license management, portals, and data management. These technical areas may be covered by a commercial solution, though the cutting edge of each area is often found in specific research projects examining the field.
The CPU as a Scavenger:
CPU scavenging, cycle scavenging, or shared computing creates a "grid" from the idle resources in a network of participants (whether worldwide or internal to an organization). Typically, this technique exploits the "spare" instruction cycles resulting from intermittent inactivity, such as at night, during lunch breaks, or during the (very brief but frequent) moments of idle waiting that desktop CPUs experience throughout the day. In practice, in addition to raw CPU power, participating machines also donate some disk storage capacity, RAM, and network bandwidth.
The CPU scavenging model is used by many volunteer computing projects, such as BOINC. Because nodes are likely to go "offline" from time to time as their owners use their resources for their primary purpose, this model must be designed to handle such scenarios.
Creating an opportunistic environment, also known as an Enterprise Desktop Grid, is another implementation of CPU scavenging, in which a specialized workload management system harvests idle desktop and laptop computers for compute-intensive jobs. HTCondor, an open-source high-throughput computing framework for coarse-grained distributed processing of computationally intensive tasks, can, for example, be configured to use only desktop machines where the keyboard and mouse are idle, effectively harnessing wasted CPU power from otherwise idle desktop workstations.
Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. It can manage workload on a dedicated cluster of computers, or it can seamlessly blend dedicated resources (rack-mounted clusters) and non-dedicated desktop workstations (cycle scavenging) into a single computing environment.
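A rough illustration of cycle scavenging in Python: background work runs only while the machine's load average suggests it is idle. The threshold, the workload, and the polling interval are all assumptions, and os.getloadavg() is Unix-only.

```python
# Illustrative sketch of CPU scavenging: do background work only when the host
# looks idle (low 1-minute load average).
import os
import time

IDLE_THRESHOLD = 0.5      # assumed "idle" if the 1-minute load average is below this

def background_chunk(n=10**6):
    return sum(i * i for i in range(n))   # stand-in for a real grid work unit

def scavenge(cycles=5):
    for _ in range(cycles):
        load_1min, _, _ = os.getloadavg()
        if load_1min < IDLE_THRESHOLD:
            background_chunk()            # owner isn't using the machine: do grid work
        time.sleep(1)                     # then back off and check again

if __name__ == "__main__":
    scavenge()
```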
Fastest Virtual Supercomputers
 BOINC - 29.8 PFLOPS as of April 7, 2020.
 Folding@home - 1.1 exaFLOPS as of March 2020.
 Einstein@Home - 3.489 PFLOPS as of February 2018.
 SETI@Home - 1.11 PFLOPS as of April 7, 2020.
 MilkyWay@Home - 1.465 PFLOPS as of April 7, 2020.
 GIMPS - 0.558 PFLOPS as of March 2019.
In addition, as of March 2019 the Bitcoin network had a measured computing power equivalent to about 80,000 exaFLOPS (floating-point operations per second).
Because the elements of the Bitcoin network (Bitcoin mining ASICs) perform only the specific cryptographic hash computation required by the
Bitcoin protocol, this measurement reflects the number of FLOPS required to equal the hash output of the Bitcoin network rather than its capacity
for general floating-point arithmetic operations.
Today's Applications and Projects:
Grids offer a way to make the most of an organization's information technology resources. Grid computing enables the Large Hadron Collider at CERN and helps solve challenges like protein folding, financial modelling, earthquake simulation, and climate modelling. Grids also allow information technology to be provided as a utility to both commercial and non-commercial clients, with those clients paying only for what they use, as with electricity or water.
As of October 2016, over 4 million machines running the open-source Berkeley Open Infrastructure for Network Computing (BOINC) platform were members of the World Community Grid. One of the programs using BOINC is SETI@home, which as of October 2016 was employing over 400,000 machines to achieve 0.828 TFLOPS. As of the same date, Folding@home, which is not affiliated with BOINC, had reached more than 101 x86-equivalent petaFLOPS on over 110,000 machines.
Within Europe, activities were funded through the framework programmes of the European Commission. BEinGRID (Business Experiments in Grid) was a research project funded by the European Commission as an Integrated Project under the Sixth Framework Programme (FP6). Started on June 1, 2006, the project ran for 42 months, until November 2009, and was coordinated by Atos Origin. According to the project fact sheet, its mission was "to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using Grid technologies". To extract best practice and common themes from the experimental implementations, two groups of consultants analysed a series of pilots, one technical and one business. The project is significant not only for its long duration but also for its budget, which at 24.8 million euros was the largest of any FP6 Integrated Project. Of this, 15.7 million was provided by the European Commission, with the remainder coming from its 98 contributing partner companies. Since the end of the project, the results of BEinGRID have been taken up and carried forward by IT-Tude.com.
The Enabling Grids for E-sciencE project, based in the European Union with sites in Asia and the United States, was a follow-up to the European DataGrid (EDG) and evolved into the European Grid Infrastructure. This, along with the LHC Computing Grid (LCG), was developed to support experiments at CERN's Large Hadron Collider. Lists of active LCG sites and real-time monitoring of the EGEE infrastructure are available online, and the relevant software and documentation are also publicly accessible. Dedicated fiber-optic links, such as those installed by CERN to meet the LCG's data needs, may one day be available to home users, providing internet access at speeds up to 10,000 times faster than a traditional broadband connection.
The distributed.net project was started in 1997. The NASA Advanced Supercomputing facility (NAS) ran genetic algorithms using the Condor cycle scavenger on around 350 Sun Microsystems and SGI workstations.
In 2001, United Devices operated the United Devices Cancer Research Project based on its Grid MP technology, which cycle-scavenged on volunteer PCs connected to the internet. The project ran on about 3.1 million machines before it was closed in 2007.
Aneka includes an extensible set of APIs associated with programming models like MapReduce.
These APIs support different cloud models like a private, public, hybrid Cloud.
Manjrasoft focuses on creating innovative software technologies to simplify the development and deployment of private or public cloud applications.
Its product, Aneka, plays the role of an application platform as a service for cloud computing.
 Aneka Structure:
 Aneka is a software platform for developing cloud computing applications.
 Cloud applications are executed within Aneka's runtime environment.
 Aneka is a pure PaaS solution for cloud computing.
 Aneka is a cloud middleware product.
 Aneka can be deployed over a network of computers, a multicore server, a data center, a virtual cloud infrastructure, or a
combination thereof.
The services hosted in the Aneka container can be classified into three major categories:
1. Fabric Services: Fabric Services define the lowest level of the software stack that represents the Aneka container. They provide access
to the resource-provisioning subsystem and to the monitoring features implemented in Aneka.
2. Foundation Services: Foundation Services are the core services of the Aneka Cloud and define the infrastructure management features of the
system. Foundation services are concerned with the logical management of a distributed system built on top of the infrastructure and
provide ancillary services for delivering applications.
3. Application Services: Application services manage the execution of applications and constitute a layer that varies according to the
specific programming model used to develop distributed applications on top of Aneka.
There are two major components in the Aneka technology:
1. The SDK (Software Development Kit), which includes the Application Programming Interface (API) and tools needed for the rapid development of applications. The Aneka API supports three popular cloud programming models: Tasks, Threads and MapReduce.
2. A runtime engine and platform for managing the deployment and execution of applications on a private or public cloud.
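Aneka's SDK is .NET-based, so the following is not the Aneka API; it is only a language-neutral sketch, written in Python, of the MapReduce programming model named above: a map step over independent records followed by a per-key reduce step.

```python
# Conceptual sketch of the MapReduce model (word count), not the Aneka API.
from collections import defaultdict

def map_phase(records, map_fn):
    for record in records:
        yield from map_fn(record)               # emit (key, value) pairs

def reduce_phase(pairs, reduce_fn):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)              # group values by key
    return {key: reduce_fn(values) for key, values in grouped.items()}

docs = ["the cloud scales", "the grid computes"]
word_counts = reduce_phase(map_phase(docs, lambda d: ((w, 1) for w in d.split())), sum)
print(word_counts)   # {'the': 2, 'cloud': 1, 'scales': 1, 'grid': 1, 'computes': 1}
```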
One of the notable features of the Aneka PaaS is its support for provisioning private cloud resources, ranging from desktops and clusters to virtual data centers using VMware and Citrix XenServer, as well as public cloud resources such as Windows Azure, Amazon EC2, and the GoGrid cloud service.
Aneka's potential as a Platform as a Service has been successfully harnessed by its users and customers in various areas, including engineering, life sciences, education, and business intelligence.
Architecture of Aneka
Aneka is a platform and framework for developing distributed applications on the Cloud. It uses desktop PCs on-demand and CPU cycles in addition
to a heterogeneous network of servers or datacenters. Aneka provides a rich set of APIs for developers to transparently exploit such resources and
express the business logic of applications using preferred programming abstractions.
System administrators can leverage a collection of tools to monitor and control the deployed infrastructure. It can be a public cloud available to
anyone via the Internet or a private cloud formed by nodes with restricted access.
An Aneka-based computing cloud is a collection of physical and virtualized resources connected via a network, either the Internet or a private intranet. Each of these resources hosts an instance of the Aneka container, which represents the runtime environment where distributed applications are executed. The container provides the basic management features of a single node and builds all other operations on the services that it hosts.
Services are divided into fabric, foundation, and execution services. Foundation services identify the core system of the Aneka middleware and provide a set of basic features on top of which each Aneka container can be specialized to perform a specific set of tasks. Fabric services interact directly with the node through the Platform Abstraction Layer (PAL) and perform hardware profiling and dynamic resource provisioning. Execution services deal directly with the scheduling and execution of applications in the Cloud.
One of the key features of Aneka is its ability to provide a variety of ways to express distributed applications by offering different programming models; execution services are mostly concerned with providing the middleware with an implementation of these models. Additional services such as persistence and security are transversal to the entire stack of services hosted by the container.
At the application level, a set of different components and tools are provided to
 Simplify the development of applications (SDKs),
 Port existing applications to the Cloud, and
 Monitor and manage multiple clouds.
An Aneka-based cloud is formed by interconnected resources that are dynamically modified according to user needs, using resource virtualization or the additional CPU cycles of desktop machines. In a common deployment, if the deployment identifies a private cloud, all resources are in-house, for example, within the enterprise.
This deployment is enhanced by connecting publicly available on-demand resources or by interacting with several other public clouds that provide
computing resources connected over the Internet.
Cloud scalability in cloud computing refers to increasing or decreasing IT resources as needed to meet changing demand. Scalability is one of the
hallmarks of the cloud and the primary driver of its explosive popularity with businesses.
Data storage capacity, processing power, and networking can all be increased by using existing cloud computing infrastructure. Scaling can be done
quickly and easily, usually without any disruption or downtime.
Third-party cloud providers already have the entire infrastructure in place; in the past, when scaling up with on-premises physical infrastructure, the process could take weeks or months and require exorbitant expense.
This is one of the most popular and beneficial features of cloud computing, as businesses can scale up or down to meet demands depending on the season, projects, growth, and more.
By implementing cloud scalability, you enable your resources to grow as your traffic or organization grows and vice versa. There are a few main
ways to scale to the cloud:
If your business needs more data storage capacity or processing power, you'll want a system that scales easily and quickly.
Cloud computing solutions can do just that, which is why the market has grown so much. Using existing cloud infrastructure, third-party cloud
vendors can scale with minimal disruption.
Types of scaling
1. Vertical Scaling: To understand vertical scaling, imagine a 20-story hotel. There are innumerable rooms inside this hotel from where
the guests keep coming and going. Often there are spaces available, as not all rooms are filled at once. People can move easily as there
is space for them. As long as the capacity of this hotel is not exceeded, no problem. This is vertical scaling.
With computing, you can add or subtract resources, including memory or storage, within the server, as long as the resources do not
exceed the capacity of the machine. Although it has its limitations, it is a way to improve your server and avoid latency and extra
management. Like in the hotel example, resources can come and go easily and quickly, as long as there is room for them.
2. Horizontal Scaling: Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars travel smoothly in each direction
without major traffic problems. But then the area around the highway develops - new buildings are built, and traffic increases. Very soon,
this two-lane highway is filled with cars, and accidents become common. Two lanes are no longer enough. To avoid these issues, more
lanes are added, and an overpass is constructed. Although it takes a long time, it solves the problem.
Horizontal scaling refers to adding more servers to your network, rather than simply adding resources like with vertical scaling. This
method tends to take more time and is more complex, but it allows you to connect servers together, handle traffic efficiently and execute
concurrent workloads (see the load-balancing sketch after this list).
3. Diagonal Scaling: Diagonal scaling is a mixture of horizontal and vertical scalability, where resources are added both vertically and
horizontally, allowing the most efficient infrastructure scaling. When you combine vertical and horizontal, you simply grow within your
existing server until you hit its capacity. Then you can clone that server as necessary and continue the process, allowing you to handle
many requests and much traffic concurrently.
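The load-balancing sketch referenced in the horizontal scaling item above: requests are spread round-robin across however many server instances currently exist, so adding a server adds capacity. The server names are placeholders.

```python
# Sketch of horizontal scaling: a round-robin balancer over a growable server pool.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = list(servers)
        self._next = cycle(self._servers)

    def route(self, request_id):
        server = next(self._next)
        return f"request {request_id} -> {server}"

    def scale_out(self, new_server):
        # Horizontal scaling: add another machine to the pool.
        self._servers.append(new_server)
        self._next = cycle(self._servers)

lb = RoundRobinBalancer(["web-1", "web-2"])
for i in range(3):
    print(lb.route(i))
lb.scale_out("web-3")       # traffic grew, so add another lane/server
print(lb.route(3))
```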
Scale in the Cloud
When you move scaling into the cloud, you experience an enormous amount of flexibility that saves both money and time for a business. When
your demand booms, it's easy to scale up to accommodate the new load. As things level out again, you can scale down accordingly.
This is so significant because cloud computing uses a pay-as-you-go model.
Traditionally, professionals guess their maximum capacity needs and purchase everything up front. If they overestimate, they pay for unused
resources.
If they underestimate, they don't have the services and resources necessary to operate effectively. With cloud scaling, though, businesses get the
capacity they need when they need it, and they simply pay based on usage. This on-demand nature is what makes the cloud so appealing. You can
start small and adjust as you go. It's quick, it's easy, and you're in control.
Difference between Cloud Elasticity and Scalability:
Cloud Elasticity | Cloud Scalability
Elasticity is used just to meet sudden ups and downs in the workload for a short period of time. | Scalability is used to meet a static increase in the workload.
Elasticity is used to meet dynamic changes, where the resource needs can increase or decrease. | Scalability is always used to address an increase in workload in an organization.
Elasticity is commonly used by small companies whose workload and demand increase only for a specific period of time. | Scalability is used by giant companies whose customer base grows persistently, in order to run operations efficiently.
Elasticity is short-term planning, adopted just to deal with an unexpected or seasonal increase in demand. | Scalability is long-term planning, adopted to deal with an expected increase in demand.
Why is cloud scalable?
Scalable cloud architecture is made possible through virtualization. Unlike physical machines, whose resources and performance are relatively fixed, virtual machines (VMs) are highly flexible and can be easily scaled up or down. They can be moved to a different server or hosted
on multiple servers at once; workloads and applications can be shifted to larger VMs as needed.
Third-party cloud providers also have all the vast hardware and software resources already in place to allow for rapid scaling that an individual
business could not achieve cost-effectively on its own.
Benefits of cloud scalability
Key cloud scalability benefits driving cloud adoption for businesses large and small:
 Convenience: Often, with just a few clicks, IT administrators can easily add more VMs that are available and customized to an
organization's exact needs, without delay. Teams can focus on other tasks instead of setting up physical hardware for hours and
days. This saves the valuable time of the IT staff.
 Flexibility and speed: As business needs change and grow, including unexpected demand spikes, cloud scalability allows IT to
respond quickly. Companies are no longer tied to obsolete equipment; they can update systems and easily increase power and
storage. Today, even small businesses have access to high-powered resources that used to be cost-prohibitive.
 Cost Savings: Thanks to cloud scalability, businesses can avoid the upfront cost of purchasing expensive equipment that can
become obsolete in a few years. Through cloud providers, they only pay for what they use and reduce waste.
 Disaster recovery: With scalable cloud computing, you can reduce disaster recovery costs by eliminating the need to build and
maintain secondary data centers.
When to Use Cloud Scalability?
Successful businesses use scalable business models to grow rapidly and meet changing demands. It's no different with their IT. Cloud scalability
benefits help businesses stay agile and competitive.
Scalability is one of the driving reasons for migrating to the cloud. Whether traffic or workload demands increase suddenly or increase gradually
over time, a scalable cloud solution enables organizations to respond appropriately and cost-effectively to increased storage and performance.
How do you determine optimal cloud scalability?
Changing business needs or increasing demand often necessitate changes to your scalable cloud solution. But how much storage, memory, and processing power do you need? Will you scale up or out?
To determine the correct size solution, continuous performance testing is essential. IT administrators must continuously measure response times,
number of requests, CPU load, and memory usage. Scalability testing also measures the performance of an application and its ability to scale up or
down based on user requests.
Automation can also help optimize cloud scalability. You can set a threshold for usage that triggers automatic scaling so as not to affect
performance. You may also consider a third-party configuration management service or tool to help you manage your scaling needs, goals, and
implementation.
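A small sketch of the threshold idea described above: observed CPU utilisation is compared against scale-up and scale-down thresholds to decide the desired instance count. The thresholds, limits, and metric source are assumptions for the example.

```python
# Illustrative sketch of threshold-based autoscaling decisions.
def desired_instance_count(current, avg_cpu, scale_up_at=0.75, scale_down_at=0.25,
                           min_instances=1, max_instances=10):
    if avg_cpu > scale_up_at and current < max_instances:
        return current + 1          # load too high: scale out
    if avg_cpu < scale_down_at and current > min_instances:
        return current - 1          # load low: scale in and stop paying for idle capacity
    return current                  # within the target band: no change

print(desired_instance_count(current=3, avg_cpu=0.82))  # 4
print(desired_instance_count(current=3, avg_cpu=0.12))  # 2
print(desired_instance_count(current=3, avg_cpu=0.50))  # 3
```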
The IT market is still buzzing because of the advent of cloud computing. Although the breakthrough technology first appeared some 10 years ago, companies continue to benefit from it in various forms. The cloud has offered more than just data storage and security benefits. It has also caused a storm of confusion within organizations, because new terms are constantly being invented to describe the various cloud types. At first, the IT industry recognized private cloud infrastructures that could support only the data and workload of a particular company. As time passed, cloud-based solutions developed into public clouds managed by third-party companies such as AWS, Google Cloud, and Microsoft Azure. The cloud today is also able to support hybrid and multi-cloud infrastructure.
What is Multi-Cloud?
Multi-cloud is the dispersion of cloud-based assets, software, and apps across a variety of cloud environments. A multi-cloud infrastructure mixes and matches diverse cloud services, each managed for a specific workload. The main benefit of multi-cloud for many companies is the ability to use two or more cloud services or private clouds in order to avoid dependence on a single cloud provider. However, a multi-cloud setup does not by itself provide orchestration or connections between these various services.
Challenges around Multi-Cloud:
 Siloed cloud providers - Sometimes, the different cloud providers make cloud monitoring and management difficult, since each
provides tools that monitor workloads only within its own cloud infrastructure.
 Insufficient knowledge - Multi-cloud is a relatively new concept, and the market does not yet have many people who are
proficient in multi-cloud.
 Selecting different cloud vendors - Many organizations have difficulty choosing cloud providers that cooperate with each other
without friction.
Why do Multi-Cloud?
Multi-cloud technology supports changes and growth in business. In every company, each department or team has its own tasks, organizational roles, and volume of data produced. They also have different requirements in terms of security, performance, and privacy. The use of multi-cloud in this type of business setting allows companies to satisfy the unique requirements of their departments in relation to data storage, structure, and security. Additionally, businesses must be able to adapt and allow their IT to evolve as the business expands. Multi-cloud is not just an IT-forward plan but also a business-enablement strategy.
Looking deeper into multi-cloud's many advantages, companies get an edge in the marketplace, both technologically and commercially. These companies also enjoy geographical benefits from using multi-cloud, since it helps address app latency issues to a great extent. Two other important issues also push enterprises to implement multi-cloud: vendor lock-in and cloud provider outages. Multi-cloud solutions can be a powerful tool for preventing vendor lock-in, a way to reduce the risk of failure or downtime concentrated in a few locations, and a way to take advantage of unique services from various cloud service providers.
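One common way to soften vendor lock-in, sketched below: the application codes against a small provider-neutral interface, and per-provider adapters plug in behind it. The adapters here are stubs; real ones would wrap each vendor's SDK (for example boto3 for AWS), which is only an assumption about how you might implement them.

```python
# Conceptual sketch of a provider-neutral abstraction for multi-cloud storage.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def upload(self, bucket: str, key: str, data: bytes) -> None: ...

class AwsStore(ObjectStore):
    def upload(self, bucket, key, data):
        print(f"[AWS] put {key} into {bucket}")      # a real adapter would call boto3 here

class GcpStore(ObjectStore):
    def upload(self, bucket, key, data):
        print(f"[GCP] put {key} into {bucket}")      # a real adapter would call the GCP SDK here

def archive_report(store: ObjectStore, data: bytes):
    store.upload("reports", "q3.pdf", data)          # business logic stays provider-agnostic

archive_report(AwsStore(), b"...")
archive_report(GcpStore(), b"...")
```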
Simply put, CIOs and IT executives in enterprise IT are opting for the multi-cloud option because it allows for greater flexibility, as well as complete control of the business's data and workloads. Often, business decision-makers prefer multi-cloud options together with a hybrid cloud strategy.
What is Hybrid-Cloud?
The term "hybrid cloud" refers to a mix of third parties' private cloud on-premises and cloud services. It is also referred to as a public and private
cloud in addition to conventional data centres. In simple terms, it is made up of multiple cloud combinations. The mix could consist of two cloud
types: two private clouds, two public clouds, or one public cloud, as well as the other cloud being private.
Challenges around Hybrid Cloud:
 Security - With the hybrid cloud model, enterprises must simultaneously handle different security platforms while transferring
data from the private cloud to the public cloud or in reverse.
 Complexities associated with cloud integrations - A high level of technical expertise is required to seamlessly integrate public
and private cloud architectures without adding additional complexities to the process.
 Complications around scaling - As the data grows, the cloud must also be able to grow. However, altering the hybrid cloud's
architecture to keep up with data growth can be extremely difficult.
Why do Hybrid Cloud?
No matter how big the business, the transition to cloud computing cannot be completed in one straightforward move. Even if a company plans to migrate to a public cloud managed by third-party providers, proper planning and adequate time are essential to ensure that the cloud implementation is as precise as possible. Before jumping into the cloud, companies should create a checklist of the data, resources, workloads and systems that will move to the cloud, while the rest remain in their own data centres. In general terms, this interoperability is a well-known and dependable illustration of the hybrid cloud.
Furthermore, unless businesses were born in the cloud in their early days of operation, they are likely to be on a path that involves preparation, strategy, and support for both cloud infrastructure and existing infrastructure.
A lot of companies have also considered the possibility of constructing and implementing a distinct cloud environment for their IT requirements,
which is integrated with their existing data centers in order to reduce the interference between internal processes and cloud-based tools. However,
the complexity of the setup is more than decreases because of the necessity to perform a range of functions in different environments. In this
scenario, it is essential that every business ensures that they have the resources to create and implement integrated platforms that provide a
practical design and architecture for business operations.
Which Cloud-based Solution to Adopt?
Both hybrid and multi-cloud platforms provide distinct advantages, which can make choosing between them confusing. What is the best way to pick one of the two to help the business succeed? Which cloud setup suits which department or workload? And how can an organization be sure that the option it implements will keep paying off in the years to come? The next section addresses these questions by explaining how the two approaches differ and which one fits which kind of organization.
How does Multi-Cloud Differ from a Hybrid Cloud?
There are distinct differences between hybrid cloud and multi-cloud in the commercial realm, even though the two terms are often used together or interchangeably. The distinction is expected to matter more as multi-cloud computing becomes the default for numerous organizations.
 As is well known, the multi-cloud approach uses several cloud services that typically come from different third-party cloud providers. This strategy allows companies to pick different cloud solutions for different departments.
 In contrast to the multi-cloud model, hybrid cloud components typically work together. Processes and data tend to mix and interconnect in a hybrid cloud environment, whereas multi-cloud environments often operate in silos.
 Multi-cloud can give organizations additional peace of mind because it reduces dependence on a single cloud provider, which in turn reduces costs and enhances flexibility.
 Practically speaking, an application running on a hybrid cloud platform might use load balancing, web services, and application front ends from a public cloud while its databases and storage sit in a private cloud. The hybrid solution combines resources that perform both private and public cloud functions.
 Practically speaking, an application running in a multi-cloud environment could perform all its computing and networking on one cloud service and use database services from another provider. In multi-cloud environments, certain applications could use resources exclusively in Azure while other applications use resources exclusively from AWS. The same split can apply to a private and a public cloud: some applications use only public cloud resources, while others use only private cloud resources.
 Despite their differences, both approaches give businesses the ability to deliver services to customers efficiently and productively.
What is Elasticity in Cloud Computing?
Elasticity is a 'rename' of scalability, a non-functional requirement that has been known in IT architecture for many years. Scalability is the ability to add or remove capacity, mostly processing, memory, or both, from an IT environment.
Elasticity is the ability to dynamically scale the services provided in line with customers' actual need for capacity and other services. It is one of the five fundamental characteristics of cloud computing.
It is usually done in two ways:
 Horizontal Scalability: Adding or removing nodes, servers, or instances to or from a pool, such as a cluster or a farm.
 Vertical Scalability: Adding or removing resources to an existing node, server, or instance to increase the capacity of a node,
server, or instance.
Most scalability is implemented using the horizontal method, as it is the easiest to implement, especially in the current web-based world we live in. Vertical scaling is less dynamic because it requires reboots of systems and sometimes adding physical components to servers.
A well-known example is adding a load balancer in front of a farm of web servers that distributes the requests.
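To make the horizontal-scaling picture concrete, here is a minimal, illustrative Python sketch of a load balancer distributing requests round-robin across a pool of web servers. The pool, server names, and add/remove operations are hypothetical and not tied to any particular cloud provider or product.

from itertools import cycle

class WebServerPool:
    """Toy model of a horizontally scaled web farm behind a load balancer."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._round_robin = cycle(self.servers)

    def add_server(self, name):
        # Scale out: add a node to the pool and rebuild the rotation.
        self.servers.append(name)
        self._round_robin = cycle(self.servers)

    def remove_server(self, name):
        # Scale in: take a node out of the pool and rebuild the rotation.
        self.servers.remove(name)
        self._round_robin = cycle(self.servers)

    def route(self, request_id):
        # Round-robin: each request goes to the next server in the pool.
        return f"request {request_id} -> {next(self._round_robin)}"

pool = WebServerPool(["web-1", "web-2"])
pool.add_server("web-3")            # horizontal scaling: one more node, no reboot
for i in range(4):
    print(pool.route(i))

Adding "web-3" requires no downtime for the existing nodes, which is exactly why horizontal scaling is the usual choice in web environments.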
Why call it Elasticity?
Traditional IT environments have scalability built into their architecture, but scaling up or down isn't done very often, because of the time, effort, and cost involved.
Servers have to be purchased, racked, installed, and configured by operations, and then the test team needs to verify that everything functions; only after that is done can the new capacity be put into production. And you don't just buy a server for a few months - typically, it's three to five years. So it is a long-term investment that you make.
Elasticity does the same, but more like a rubber band. You 'stretch' the capacity when you need it and 'release' it when you don't. This is possible because of other features of cloud computing, such as "resource pooling" and "on-demand self-service". Combining these features with advanced image management capabilities allows you to scale more efficiently.
Three forms of scalability: Below are the three forms of scalability and what makes them different from each other.
 Manual Scaling: Manual scalability begins with forecasting the expected workload on a cluster or farm of resources, then manually
adding resources to add capacity. Ordering, installing, and configuring physical resources takes a lot of time, so forecasting needs to be
done weeks, if not months, in advance. It is mostly done using physical servers, which are installed and configured manually.
Another downside of manual scalability is that removing resources does not result in cost savings because the physical server has already
been paid for.
 Semi-automated Scaling: Semi-automated scalability takes advantage of virtual servers, which are provisioned (installed) using
predefined images. A manual forecast or automated warning of system monitoring tooling will trigger operations to expand or reduce the
cluster or farm of resources.
Using predefined, tested, and approved images, every new virtual server will be the same as the others (except for some minor configuration), which gives you repeatable results. It also significantly reduces manual work on the systems, and it is well known that manual actions on systems cause around 70 to 80 percent of all errors. Using virtual servers also saves costs: once a virtual server is de-provisioned, the freed resources can be used directly for other purposes.
 Elastic Scaling (fully automatic Scaling): Elasticity, or fully automatic scalability, builds on the same concepts as semi-automatic scalability but removes any manual labor needed to increase or decrease capacity. Everything is controlled by triggers from the system monitoring tooling, which gives you the "rubber band" effect: if more capacity is needed now, it is added within minutes, and as soon as the monitoring tooling detects that demand has dropped, the capacity is reduced again (a minimal sketch of such a scaling rule follows below).
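The following is a minimal sketch, in Python, of the kind of threshold-based rule a monitoring tool might evaluate to produce that rubber-band effect. The thresholds, limits, and CPU samples are invented for illustration; real autoscaling services expose similar knobs under their own names.

def desired_instances(current, cpu_percent,
                      scale_out_at=75, scale_in_at=25,
                      min_instances=1, max_instances=10):
    # Add capacity while load is high, release it when load drops,
    # and never leave the configured minimum/maximum bounds.
    if cpu_percent > scale_out_at and current < max_instances:
        return current + 1
    if cpu_percent < scale_in_at and current > min_instances:
        return current - 1
    return current

instances = 2
for cpu in [30, 80, 85, 90, 40, 10]:      # pretend monitoring samples
    instances = desired_instances(instances, cpu)
    print(f"cpu={cpu}% -> {instances} instance(s)")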
Scalability vs. Elasticity in Cloud Computing:
Imagine a restaurant in an excellent location. It can accommodate up to 30 customers, including outdoor seating. Customers come and go throughout the day, so the restaurant rarely exceeds its seating capacity.
The restaurant increases and decreases its seating capacity within the limits of its seating area: the staff add a table or two at lunch and dinner when more people stream in with an appetite, then remove the tables and chairs to de-clutter the space.
A nearby center hosts a bi-annual event that attracts hundreds of attendees for a week-long convention.
The restaurant sees increased traffic during convention weeks, and demand is usually so high that it has to turn customers away. It loses business and customers to nearby competitors, and it has disappointed those potential customers for two years in a row.
Elasticity allows a cloud provider's customers to achieve cost savings, which are often the main reason for adopting cloud services.
Depending on the type of cloud service, discounts are sometimes offered for long-term contracts with cloud providers. If you are instead willing to pay a slightly higher price and not be locked in, you get flexibility.
Let's look at some examples where we can use it.
 Cloud Rapid Elasticity Example 1
Suppose 10 servers are needed for a three-month project. The company can provision them as cloud services within minutes, pay a small monthly OpEx fee to run them rather than a large upfront CapEx cost, and decommission them at the end of the three months at no further charge.
Compare this with the situation before cloud computing became available. If a customer came to us with the same opportunity and we had to move quickly to fulfil it, we would have to buy 10 more servers as a huge capital cost.
When the project completes at the end of three months, we would be left with servers we no longer need. That is not economical, and it could mean we have to forgo the opportunity.
Because cloud services are much more cost-efficient, we are more likely to take this opportunity, giving us an advantage over our competitors.
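The arithmetic behind that decision can be sketched in a few lines; the prices below are made up purely for illustration and will differ per provider, region, and server specification.

servers = 10
project_months = 3

capex_per_server = 4000              # hypothetical up-front cost of a physical server
cloud_rate_per_server_month = 150    # hypothetical pay-as-you-go (OpEx) rate

on_premises_cost = servers * capex_per_server
cloud_cost = servers * cloud_rate_per_server_month * project_months

print(f"On-premises (CapEx): ${on_premises_cost:,}")  # paid even after the project ends
print(f"Cloud (OpEx):        ${cloud_cost:,}")        # stops when the servers are decommissioned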
 Cloud Rapid Elasticity Example 2
Let's say we are an eCommerce store. We will probably see more seasonal demand around Christmas time. With cloud computing, we can automatically spin up new servers as demand grows.
The platform monitors the load on the servers' CPU, memory, bandwidth, and so on. When a metric reaches a certain upper threshold, we automatically add new servers to the pool to help meet demand. When demand drops again, a lower threshold triggers the automatic shutdown of servers. In this way, resources move in and out automatically to meet current demand.
 Cloud-based software service example
If we need to use cloud-based software for a short period, we can pay for it as a subscription instead of buying a one-time perpetual license. Most software-as-a-service companies offer a range of pricing options that support different features and durations, so we can choose the most cost-effective one. There will often be monthly pricing options, so if you need occasional access, you can pay for it as and when needed.
What is the Purpose of Cloud Elasticity?
Cloud elasticity helps users prevent over-provisioning or under-provisioning of system resources. Over-provisioning refers to a scenario where you buy more capacity than you need.
Under-provisioning refers to allocating fewer resources than you actually use.
Over-provisioning wastes cloud spend, while under-provisioning can lead to server outages as the available servers are overworked. Server outages result in revenue loss and customer dissatisfaction, which is bad for business.
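A small back-of-the-envelope sketch, using invented hourly demand figures, shows what the two failure modes look like in numbers: idle capacity you still pay for versus demand you cannot serve.

demand = [40, 55, 60, 120, 180, 90, 50, 30]   # hypothetical requests/sec per hour
capacity = 100                                # what was provisioned up front

idle_capacity = sum(max(capacity - d, 0) for d in demand)      # over-provisioning
unserved_demand = sum(max(d - capacity, 0) for d in demand)    # under-provisioning

print("Idle capacity paid for (over-provisioning):", idle_capacity)
print("Demand that was not served (under-provisioning):", unserved_demand)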
Scaling with Elasticity provides a middle ground.
Elasticity is ideal for short-term needs, such as handling website traffic spikes and database backups.
But cloud elasticity also helps streamline service delivery when combined with scalability. For example, by spinning up additional VMs on the same server, you create more capacity on that server to handle dynamic workload surges.
So, how does cloud elasticity work in a business environment?
Rapid Elasticity Use Cases and Examples:
At work, three excellent examples of cloud elasticity include e-commerce, insurance, and streaming services.
 Use case one: Insurance: Let's say you are in the auto insurance business. Perhaps your customers renew their auto policies at roughly the same time every year, and policyholders rush to beat the renewal deadline. You can expect a surge in traffic when that time arrives.
If you rely on scalability alone, a traffic spike can quickly overwhelm your provisioned virtual machines, causing service outages that result in lost revenue and customers.
But if you have "leased" a few more virtual machines, you can handle the traffic for the entire policy renewal period; you will have multiple scalable virtual machines to manage demand in real time.
Policyholders would not notice any change in performance even if you served more customers this year than the previous year. To reduce cloud spending, you can then release those extra virtual machines when you no longer need them, such as during off-peak months.
An elastic cloud platform lets you do just that: it charges you on a pay-per-use basis for the resources you actually consume, not simply for the number of virtual machines you keep running.
 Use case two: E-commerce: The more effectively you run your awareness campaign, the more potential buyers' interest you can expect to peak. Let's say you run a limited-time offer on notebooks to mark your anniversary, Black Friday, or a tech celebration. You can expect more traffic and server requests during that time.
New buyers will register new accounts, which puts a much heavier load on your servers for the campaign's duration than at most other times of the year.
Existing customers will also revisit abandoned carts and old wishlists or try to redeem accumulated points.
With an elastic platform you can provision more resources to absorb the high festive-season demand, then return the excess capacity to your cloud provider and keep only what is needed for everyday operations.
 Use case three: Streaming services: Netflix is probably the best example here. When the streaming service released all 13 episodes of House of Cards' second season, viewership jumped to 16% of Netflix's subscribers, compared to just 2% for the first season's premiere weekend.
Many of those subscribers streamed one of those episodes within seven to ten hours that Friday. Netflix had over 50 million subscribers at the time (February 2014), so a 16% jump in viewership means that over 8 million subscribers streamed a portion of the show within a single day.
Netflix engineers have repeatedly stated that they rely on AWS's elastic cloud services to serve such surges of requests within a short period and with zero downtime.
Bottom line: If your cloud provider offers cloud elasticity by default, and you have activated the feature in your account, the platform can allocate additional resources whenever you need them. That means you will be able to handle both sudden and expected workload spikes.
Benefits and Limitations of Cloud Elasticity
Elasticity in the cloud has many powerful benefits.
 Elasticity balances performance with cost-effectiveness: An elastic cloud provider supplies system monitoring tools that track resource usage and automatically compare resource allocation with actual usage. The goal is to keep these two metrics matched so that the system performs at its peak while remaining cost-effective.
Cloud providers also price elasticity on a pay-per-use model, so you pay for what you use and no more. The pay-as-you-grow model lets you add new infrastructure components as you prepare for growth.
 It helps in providing smooth services: Cloud elasticity combines with cloud scalability to ensure that both the customer and the cloud platform can meet changing computing needs as they arise.
For a cloud platform, elasticity helps keep customers happy: while scalability handles long-term growth, elasticity ensures flawless service availability right now. It also helps prevent system overloading and runaway cloud costs due to over-provisioning.
Limits or disadvantages of cloud elasticity?
Cloud elasticity may not be for everyone. Cloud scalability alone may be sufficient if you have a relatively stable demand for your products or
services online.
For example, if you run a business that doesn't experience seasonal or occasional spikes in server requests, you may not mind using scalability
without Elasticity. Keep in mind that Elasticity requires scalability, but not vice versa.
Still, no one can predict when your company might need to ride a sudden wave of interest. So what do you do when you need to be ready for that opportunity but don't want to blow your cloud budget on speculation? Enter cloud cost optimization.
How is cloud cost optimization related to cloud elasticity?
Elasticity uses dynamic variations to align computing resources to the demands of the workload as closely as possible to prevent wastage and
promote cost-efficiency. Another goal is usually to ensure that your systems can continue to serve customers satisfactorily, even when bombarded
by heavy, sudden workloads.
But not all cloud platform services support the scaling in and out that cloud elasticity requires.
For example, some AWS services include elasticity as part of their offering, such as Amazon Simple Storage Service (S3), Amazon Simple Queue Service (SQS), and Amazon Aurora. Amazon Aurora offers a serverless elastic option, while services like Amazon Elastic Compute Cloud (EC2) integrate with AWS Auto Scaling to provide elastic capacity.
Whether or not you use an elastic service to reduce cloud costs dynamically, you will want more cloud cost visibility than Amazon CloudWatch offers on its own.
CloudZero allows engineering teams to track and oversee the specific costs and services driving their products, features, and more. You can group costs by feature, product, service, or account to uncover insights about your cloud spend that help you answer what is changing, why, and where to look next.
You can also measure and monitor unit costs, such as cost per customer. CloudZero's cost-per-customer report, for example, surfaces cost information about individual customers that can help guide engineering and pricing decisions.
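As a rough illustration of what a unit-cost metric such as cost per customer boils down to, the sketch below shows the underlying arithmetic only; it is not CloudZero's tooling, API, or data model, and the spend figures are invented.

monthly_cloud_cost = {            # hypothetical spend grouped by feature
    "search": 1200.0,
    "checkout": 800.0,
    "recommendations": 2000.0,
}
active_customers = 5000

total = sum(monthly_cloud_cost.values())
print(f"Total monthly cost: ${total:,.2f}")
print(f"Cost per customer:  ${total / active_customers:.2f}")
for feature, cost in monthly_cloud_cost.items():
    print(f"  {feature}: {cost / total:.0%} of spend")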
Fog computing vs. Cloud computing
Cloud computing is the delivery of on-demand computing services. We can use applications, storage, and processing power over the Internet. Without owning any computing infrastructure or data centre, anyone can rent access to anything from applications to storage from a cloud service provider.
It is a pay-as-you-go service. By using cloud computing services and paying only for what we use, we avoid the complexity of owning and maintaining infrastructure.
Cloud computing service providers, in turn, benefit from significant economies of scale by providing similar services to many customers.
Fog computing is a decentralized computing infrastructure or process in which computing resources are located between a data source and a cloud
or another data center. Fog computing is a paradigm that provides services to user requests on edge networks.
Devices at the fog layer typically perform networking-related functions: routers, gateways, bridges, and hubs. Researchers envision these devices performing both computational and networking tasks simultaneously.
Although these devices are resource-constrained compared to cloud servers, their geographical spread and decentralized nature help provide reliable services with coverage over a wide area. Fog is the physical location of computing devices much closer to users than cloud servers.
Benefits of Fog Computing:
 Fog computing is less expensive to work with because the data is hosted and analyzed on local devices rather than transferred to
any cloud device.
 It helps in facilitating and controlling business operations by deploying fog applications as per the user's requirement.
 Fogging provides users with various options to process their data on any physical device.
Benefits of Cloud Computing:
 It works on a pay-per-use model, where users have to pay only for the services they are receiving for a specified period.
 Cloud users can quickly increase their efficiency by accessing data from anywhere, as long as they have net connectivity.
 It increases cost savings as workloads can be transferred from one Cloud to another cloud platform.
The table of differences between cloud computing and fog computing is given below:

Specialty | Cloud Computing | Fog Computing
Delay | Higher latency than fog computing | Low latency
Capacity | Does not reduce the amount of data while sending or converting it | Reduces the amount of data sent to the cloud
Responsiveness | System response time is slower | System response time is faster
Security | Less secure than fog computing | High security
Speed | Access speed is high, depending on the VM connectivity | Even higher than cloud computing
Data Integration | Multiple data sources can be integrated | Multiple data sources and devices can be integrated
Mobility | Limited mobility | Mobility is supported
Location Awareness | Partially supported | Supported
Number of Server Nodes | Few server nodes | Large number of server nodes
Geographical Distribution | Centralized | Decentralized and distributed
Location of Service | Services provided within the Internet | Services provided at the edge of the local network
Working Environment | Dedicated data centre buildings with air conditioning systems | Outdoor (streets, base stations, etc.) or indoor (houses, cafes, etc.)
Communication Mode | IP network | Wireless (WLAN, WiFi, 3G, 4G, ZigBee, etc.) or wired communication (part of the IP network)
Dependence on Core Network Quality | Requires a strong network core | Can also work with a weak network core
Difference between Fog Computing and Cloud Computing:
Information:
 In fog computing, data is received from IoT devices using any protocol.
 Cloud computing receives and summarizes data from different fog nodes.
Structure:
 Fog has a decentralized architecture where information is located on different nodes at the source closest to the user.
 There are many centralized data centers in the Cloud, which makes it harder for users to access information from a source near them.
Protection:
 Fog is a more secure system, with different protocols and standards that minimize the chance of it collapsing during network failures.
 Because the Cloud operates over the Internet, it is more likely to fail when network connections are unreliable or unknown.
Component:
 Fog has some additional features in addition to the features provided by the components of the Cloud that enhance its storage and
performance at the end gateway.
 Cloud has different parts such as frontend platform (e.g., mobile device), backend platform (storage and servers), cloud delivery, and
network (Internet, intranet, intercloud).
Responsiveness:
 In fog computing, the system is more responsive, as fogging separates the data and processes the time-sensitive parts close to the source before sending the rest to the Cloud.
 Cloud services do not separate the data in this way at the gateway; everything is transmitted, which increases the load and makes the system less responsive.
Application:
 Edge computing can be used for smart city traffic management, automating smart buildings, visual Security, self-maintenance trains,
wireless sensor networks, etc.
 Cloud computing can be applied to e-commerce software, word processing, online file storage, web applications, creating image albums,
various applications, etc.
Reduces latency:
 Fog computing helps avoid cascading system failures by reducing latency in operation. It analyzes data close to the device, which helps avert disasters.
Flexibility in Network Bandwidth:
 Enormous amounts of data would otherwise have to be transferred from hundreds or thousands of edge devices to the Cloud, which requires a correspondingly large amount of bandwidth, processing, and storage.
 For example, commercial jets generate 10 TB for every 30 minutes of flight. Fog computing processes such data locally and sends only selected data to the cloud for historical analysis and long-term storage (a small sketch of this filtering idea follows below).
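Here is a toy Python sketch of that filtering idea: a fog node keeps the raw readings local and forwards only a compact summary and any anomalies to the cloud. The readings and the alert threshold are invented for illustration.

raw_readings = [21.4, 21.5, 21.6, 35.2, 21.5, 21.4, 21.6]   # hypothetical sensor data
alert_threshold = 30.0

summary = {
    "count": len(raw_readings),
    "avg": round(sum(raw_readings) / len(raw_readings), 2),
    "max": max(raw_readings),
    "alerts": [r for r in raw_readings if r > alert_threshold],  # only anomalies travel upstream
}
print("Sent to cloud:", summary)   # a few hundred bytes instead of the full stream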
Wide geographic reach:
 Fog computing provides better quality of service by processing data from devices that are deployed in areas with high network density.
 Cloud servers, on the other hand, communicate only over IP and not with the countless other protocols used by IoT devices.
Real-time analysis:
 Fog computing analyzes the most time-sensitive data and acts on it in less than a second, whereas cloud computing is not designed for this kind of round-the-clock, sub-second responsiveness.
Operating Expenses:
 The license fee and on-premises maintenance costs for cloud computing are lower than for fog computing, because with fog, companies also have to buy edge devices such as routers.
Fog Computing vs. Cloud Computing for IoT Projects:
According to Statista, by 2020 there will be 30 billion IoT devices worldwide, and by 2025 this number will exceed 75 billion connected things. These devices will produce huge amounts of data that will have to be processed quickly and reliably. Fog computing works alongside cloud computing to meet the growing demand for IoT solutions, and for some workloads Fog is even the better fit. This section compares Fog and Cloud and tells you more about their possibilities, pros, and cons.
 Cloud Computing
We are already used to the technical term cloud: a network of multiple devices, computers, and servers connected to the Internet.
Such a computing system can be figuratively divided into two parts:
 Frontend - the client device (computer, tablet, mobile phone).
 Backend - the data storage and processing systems (servers), which can be located far from the client device and make up the Cloud itself.
These two layers communicate with each other over a network connection, typically the Internet.
Cloud computing technology provides a variety of services that are classified into three groups:
 IaaS (Infrastructure as a Service) - A remote data center with data storage capacity, processing power, and networking
resources.
 PaaS (Platform as a Service) - A development platform with tools and components to build, test, and launch applications.
 SaaS (Software as a Service) - Software tailored to suit various business needs.
By connecting your company to the Cloud, you can access the services mentioned above from any location and through various devices.
Therefore, availability is the biggest advantage. Plus, there's no need to maintain local servers and worry about downtimes - the vendor supports
everything for you, saving you money.
Integrating the Internet of Things with the Cloud is an affordable way to do business. Off-premises services provide the scalability and flexibility
needed to manage and analyze data collected by connected devices. At the same time, specialized platforms (e.g., Azure IoT Suite, IBM
Watson, AWS, and Google Cloud IoT) give developers the power to build IoT apps without major investments in hardware and software.
 Advantages of Cloud for IoT
Since connected devices have limited storage capacity and processing power, integration with cloud computing comes to the aid of:
o Improved performance - faster communication between IoT sensors and data processing systems.
o Storage Capacity - Highly scalable and unlimited storage space can integrate, aggregate, and share huge data.
o Processing Capabilities - Remote data centers provide unlimited virtual processing capabilities on demand.
o Low Cost - The license fee is less than the cost of on-premises equipment and its ongoing maintenance.
 Disadvantages of Cloud for IoT
Unfortunately, nothing is spotless, and cloud technology has some drawbacks, especially for Internet of Things services.
 High latency - more and more IoT apps require very low latency, but the Cloud cannot guarantee this due to the distance between client devices and data processing centres.
 Downtimes - technical issues and network interruptions can occur in any Internet-based system and cause customers to suffer outages; many companies use multiple connection channels with automatic failover to avoid problems.
 Security and Privacy - your data is transferred via globally connected channels alongside thousands of gigabytes of other users' information, so the system is vulnerable to cyber-attacks and data loss; the problem can be partially mitigated with hybrid or private clouds.
Cisco coined the term fog computing (or fogging) in 2014, so it is still relatively new to the general public. Fog and cloud computing are intertwined: in nature, fog sits closer to the earth than clouds, and in the tech world it is the same - Fog is closer to end users, bringing cloud capabilities down to the ground.
The definition may sound like this: Fog is an extension of cloud computing that consists of multiple edge nodes directly connected to physical
devices.
Such nodes tend to be much closer to devices than centralized data centers so that they can provide instant connections.
The considerable processing power of edge nodes allows them to compute large amounts of data without sending them to distant servers.
Fog can also include cloudlets - small-scale and rather powerful data centers located at the network's edge. They are intended to support
resource-intensive IoT apps that require low latency.
The main difference between fog computing and cloud computing is that Cloud is a centralized system, whereas Fog is a distributed decentralized
infrastructure.
Fog acts as an intermediary between computing hardware and a remote server. It controls what information should be sent to the server and what can be processed locally. In this way, Fog is an intelligent gateway that offloads the cloud, enabling more efficient data storage, processing, and analysis.
It should be noted that fog networking is not a separate architecture. It does not replace cloud computing but complements it by getting as close
as possible to the source of information.
There is another method for data processing similar to fog computing - edge computing. The essence is that the data is processed directly on the
devices without sending it to other nodes or data centers. Edge computing is particularly beneficial for IoT projects as it provides bandwidth
savings and better data security.
The new technology is likely to have the biggest impact on the development of IoT, embedded AI, and 5G solutions, as they, like never before,
demand agility and seamless connections.
Advantages of fog computing in IoT:
The fogging approach has many benefits for the Internet of Things, Big Data, and real-time analytics. The main advantages of fog computing over
cloud computing are as follows:
 Low latency - Fog tends to be closer to users and can provide quicker responses.
 No bandwidth bottleneck - pieces of information are aggregated at separate points rather than sent through a single channel to one hub.
 Resilient connectivity - because there are many interconnected channels, a complete loss of connection is far less likely.
 High security - the data is processed by multiple nodes in a complex distributed system.
 Improved user experience - quick responses and no downtime keep users satisfied.
 Power efficiency - edge nodes run power-efficient protocols such as Bluetooth, Zigbee, or Z-Wave.
Disadvantages of fog computing in IoT:
The technology has no glaring disadvantages, but some shortcomings can be named:
 Fog adds another layer to an already complex data processing and storage system.
 Additional expenses - companies must buy edge devices: routers, hubs, gateways.
 Limited scalability - Fog is not as scalable as the cloud.
Cloud computing is the delivery of computing services - servers, storage, networks, databases, and applications for software, big data processing, or analytics - via the Internet.
The most significant difference between cloud services and traditional web-hosted services is that cloud-hosted services are available on demand. We can consume as much or as little of a cloud service as we like. Cloud providers have changed the game with the pay-as-you-go model: the only cost we pay is for the services we use, in proportion to how much we and our customers actually use them.
We save money on buying and maintaining in-house servers, data warehouses, and the infrastructure that supports them; the cloud service provider handles everything else.
There are generally three kinds of clouds:
 Public Cloud
 Private Cloud
 Hybrid Cloud
A public cloud is cloud computing provided by third-party vendors, such as Amazon Web Services, over the Internet and made accessible to users on a subscription model.
One of the major advantages of the public cloud is that it lets customers pay only for what they have used in terms of bandwidth, storage, processing, or analytics capacity.
Customers also avoid the infrastructure cost of buying and maintaining their own servers, software, and much more.
A private cloud provides computing services via the Internet or a private internal network to a select group of users; the services are not open to everyone. A private cloud is also known as an internal cloud or a corporate cloud.
A private cloud enjoys certain benefits of a public cloud, such as:
 Self-service
 Scalability
 Elasticity
Benefits of a private cloud:
 Low latency because of the proximity of the cloud setup (hosted near offices)
 Greater security and privacy thanks to company firewalls
 Sensitive information is kept away from third-party suppliers and outside users
One of the major disadvantages of a private cloud is that we cannot avoid the cost of the equipment, staffing, and other infrastructure needed to establish and manage our own cloud.
The most effective way to use a private cloud is within a well-designed multi-cloud or hybrid cloud setup.
In general, Cloud Computing offers a few business-facing benefits:
 Cost
 Speed
 Security
 Productivity
 Performance
 Scalability
Let's discuss multi-Cloud and how it compares to Hybrid Cloud.
Hybrid Cloud vs. Multi-Cloud
A hybrid cloud is a combination of private and public cloud computing services in which the public and private clouds communicate with each other.
In a multi-cloud setup, by contrast, the different clouds do not need to talk to one another; the public and private cloud configurations are typically used for completely different purposes and are kept separate within the business.
Hybrid cloud solutions have advantages that could entice users to choose the hybrid approach. With a private and a public cloud that communicate with each other, we can reap the advantages of both by hosting less crucial elements in the public cloud and reserving the private cloud for important and sensitive information.
Looked at holistically, hybrid cloud is more of an execution decision: it exists to exploit the benefits of both private and public cloud services and their interconnection. Multi-cloud, by contrast, is more of a strategic choice than an execution decision.
Multi-cloud is not necessarily a multi-vendor configuration, but it often mixes services from several vendors such as AWS, Azure, and GCP.
The primary distinguishing factors that differentiate Hybrid and Multi-Cloud could be:
 A Multi-Cloud is utilized to perform a range of tasks. It typically consists of multiple cloud providers.
 A hybrid cloud is typically the result of combining cloud services that are private and public, which mix with one another.
Multi-Cloud Strategy
Multi-Cloud Strategy involves the implementation of several cloud computing solutions simultaneously.
Multi-cloud means distributing our web assets, software, mobile apps, and other client-facing or internal assets across several cloud services or environments. There are numerous reasons to opt for a multi-cloud environment, including reducing dependence on a single cloud service provider and improving fault tolerance. Furthermore, many cloud providers take a service-based approach, which strongly influences why companies opt for multi-cloud; we discuss this below.
A multi-cloud may be constructed in several ways (a small illustrative mapping follows this list):
 It can be a mix of private clouds.
o Setting up our own servers in various regions of the globe and creating a cloud network to manage and distribute services is an example of an all-private multi-cloud configuration.
 It can be a mix of public cloud service providers.
o A combination of several providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, is an example of an all-public multi-cloud setup.
 It can combine private and public cloud providers in one multi-cloud architecture.
o A private cloud used in conjunction with AWS or Azure falls into this category; if the setup is optimized for the business, we can enjoy the benefits of both.
 A typical multi-cloud setup mixes two or more public cloud providers with one private cloud to remove the dependence on any single cloud service provider.
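A multi-cloud strategy is, at its core, a mapping of workloads to providers. The sketch below records one such mapping in Python; the workload names and provider choices are hypothetical examples, not recommendations.

multi_cloud_plan = {
    "web-frontend": "AWS",
    "analytics": "GCP",
    "email-and-collaboration": "Azure",
    "sensitive-records": "private-cloud",   # kept on-premises
}

def provider_for(workload):
    # Look up which cloud a given workload is assigned to.
    return multi_cloud_plan.get(workload, "undecided")

for workload in multi_cloud_plan:
    print(f"{workload:>24} -> {provider_for(workload)}")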
Why has Multi-cloud strategy become the norm?
When cloud computing took off in a big way, businesses began to recognize a few issues.
 Security: Relying on the security services of a single cloud provider makes us more susceptible to DDoS and other cyber-attacks. If that cloud is attacked, the whole environment is compromised and the company could be crippled.
 Reliability: If we rely on just one cloud service, reliability is at risk: a cyber-attack, natural catastrophe, or security breach could compromise our private information or result in data loss.
 Loss of Business: Software-driven businesses roll out regular UI improvements, bug fixes, and patches to their cloud infrastructure monthly or weekly. With a single-cloud strategy, the business suffers downtime during these rollouts because its cloud services are not accessible to customers, which can mean lost business and lost money.
 Vendor lock-in: Vendor lock-in is the situation in which the customer of a particular service or product cannot easily switch to a competitor's offering. This usually happens when proprietary software used in the service is incompatible with the new vendor's product, or when the contract or the law stands in the way. It is why businesses stay committed to a certain cloud provider even when they are dissatisfied with the service. The reasons for wanting to switch can be numerous, from better capabilities and features offered by competitors to lower pricing.
Additionally, moving data from one cloud provider to the next is a hassle, since it often has to be transferred to local data centres before being moved to the new provider.
Benefits of a Multi-Cloud Strategy
Let's discuss the benefits of a multi-cloud strategy, which inherently answer the challenges posed by a single cloud service. Many of the problems of a single-cloud environment are solved when we take a multi-cloud perspective.
 Flexibility: One of the most important benefits of multi-cloud systems is flexibility. With no vendor lock-in, customers are free to test different cloud providers and experiment with their capabilities and features. Many companies tied to a single provider cannot adopt new technologies or innovate because they are bound to that provider's compatibility constraints. This is not a problem with a multi-cloud system: we can create a cloud setup that is in sync with our company's goals.
Multi-cloud lets us select our cloud services. Each cloud service has its own distinct features, so we can choose the ones that best meet our business requirements and combine services from a variety of providers into the best overall solution.
 Security: A key aspect of multi-cloud is risk reduction. If we are hosted by multiple cloud providers, we reduce the chance of being hacked and losing data when one provider has a vulnerability, and we reduce the damage caused by natural disasters or human error. In short, we should not put all our eggs in one basket.
 Fault Tolerance: One of the biggest issues with a single cloud provider is that it leaves no fallback when that provider fails. With a multi-cloud system, we can put backups and data redundancy in the right places, and we can strategically schedule downtime for deployment or maintenance of our software without making our clients suffer.
 Performance: Each cloud service provider, such as AWS (64+ countries), Azure (140+ countries), or GCP (200+ countries), has a presence throughout the world. Based on our location and workload, we can choose the provider that gives us the lowest latency and the fastest operations.
 IoT and ML/AI as emerging opportunities: With Machine Learning and Artificial Intelligence growing exponentially, there is a lot of potential in analysing our data in the cloud and using these capabilities for better decision-making and customer service. The top cloud providers each offer distinct strengths: Google Cloud Platform (GCP) for AI, AWS for serverless computing, and IBM for AI/ML are just a few options worth considering.
 Cost: Cost will always be an important factor in a purchase decision, and cloud pricing keeps evolving. Competition is so fierce that cloud providers keep coming up with pricing options from which we can benefit. In a multi-cloud setting, we can select the most appropriate provider for each service or feature we use. AWS, Azure, and Google all offer pricing calculators that help manage costs and aid us in making the right choice.
 Governance and Compliance Regulations: Big clients typically require compliance with specific local and cybersecurity regulations, for example GDPR or an ISO cybersecurity certification. Our business could be affected if a certain cloud service violates our security certifications or the provider is not certified. With multi-cloud, we can switch to an alternative provider without losing significant clientele if this happens.
A Few Disadvantages of Multi-Cloud
 Discounts on High-Volume Purchases: Public cloud providers offer large discounts when we buy their services in bulk. With multi-cloud, we are unlikely to get these discounts because our purchase volume is split across several providers.
 Training Existing Employees or New Hiring: We must prepare our existing staff, or recruit new employees, to use multiple clouds in our company. This costs both money and time spent in training.
 Effective Multi-Cloud Management: Multi-cloud requires efficient cloud management: knowing the workload and business requirements and then distributing the work among the cloud providers best suited to each task. For instance, a company might use AWS for compute, Google or Azure for communication and email tools, and Salesforce for customer relationship management. Understanding these subtleties requires expertise in both the cloud and the business domain.
A Service Level Agreement (SLA) is the contract that defines the level of service negotiated between a cloud service provider and a client. In the early days of cloud computing, all service level agreements were negotiated individually between a customer and a service provider. With the rise of large utility-like cloud computing providers, most service level agreements are standardized unless a customer becomes a very large consumer of cloud services.
Service level agreements are also defined at different levels, which are mentioned below:
 Customer-based SLA
 Service-based SLA
 Multilevel SLA
Some service level agreements are enforceable as contracts, but many are closer to an operating level agreement (OLA) and may not carry legal weight. It is wise to have a lawyer review the documents before making any major agreement with a cloud service provider. Service level agreements usually specify certain parameters, which are mentioned below:
 Availability of the service (uptime)
 Latency or response time
 Reliability of service components
 Accountability of each party
 Warranties
If a cloud service provider fails to meet the specified minimum targets, the provider has to pay a penalty to the cloud service consumer as per the agreement. In that sense, service level agreements are like insurance policies in which the provider has to pay out if an incident occurs.
Microsoft publishes service level agreements for the Windows Azure platform components, which illustrates industry practice for cloud service vendors. Each component has its own service level agreement. The two major Service Level Agreements (SLAs) are described below:
 Windows Azure SLA -
Windows Azure has separate SLAs for compute and storage. For compute, it is guaranteed that when a client deploys two or more role instances in different fault and upgrade domains, the client's Internet-facing roles will have external connectivity at least 99.95% of the time. In addition, all of the client's role instances are monitored, and it is guaranteed that 99.9% of the time it will be detected when a role instance's process is not running, so that corrective action can be started.
 SQL Azure SLA -
The SQL Azure client will have connectivity between the SQL Azure database and the Internet gateway. SQL Azure commits to a "monthly availability" of 99.9%. The monthly availability ratio for a particular tenant database is the ratio of the time the database was available to customers to the total time in the month.
Time is measured in intervals of a few minutes over a 30-day monthly cycle. If the SQL Azure gateway rejects attempts to connect to the customer's database, that portion of time counts as unavailable. Availability is always calculated over a full month.
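The monthly availability ratio described above is straightforward to compute; the downtime figure in the sketch below is hypothetical.

minutes_in_month = 30 * 24 * 60          # the 30-day monthly cycle
downtime_minutes = 50                    # hypothetical measured unavailability

availability = (minutes_in_month - downtime_minutes) / minutes_in_month
print(f"Monthly availability: {availability:.4%}")
print("Meets a 99.9% SLA:", availability >= 0.999)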
Service level agreements are based on the usage model. Cloud providers often charge pay-per-use resources at a premium and apply standard service level agreements to them. Customers can also subscribe to tiers that guarantee access to a specific amount of purchased resources.
The SLAs attached to subscriptions often offer different terms and conditions. If the client requires access to a particular level of resources, the client needs to subscribe to a suitable service; a pure usage model may not provide that level of access under peak load conditions.
Cloud infrastructure can span geographies, networks, and systems that are both physical and virtual. While the exact metrics of cloud SLAs can
vary by service provider, the areas covered are the same:
 Volume and quality of work (including precision and accuracy);
 Speed;
 Responsiveness; and
 Efficiency.
The purpose of the SLA document is to establish a mutual understanding of the services, priority areas, responsibilities, guarantees and warranties.
It clearly outlines metrics and responsibilities between the parties involved in cloud configuration, such as the specific amount of response time to
report or address system failures.
 The importance of a cloud SLA
Service-level agreements are fundamental as more organizations rely on external providers for critical systems, applications and data. Cloud SLAs
ensure that cloud providers meet certain enterprise-level requirements and provide customers with a clearly defined set of deliverables. It also
describes financial penalties, such as credit for service time, if the provider fails to meet guaranteed conditions.
The role of a cloud SLA is essentially the same as that of any contract -- it's a blueprint that governs the relationship between a customer and a
provider. These agreed terms form a reliable foundation upon which the Customer commits to use the cloud providers' services. They also reflect
the provider's commitments to quality of service (QoS) and the underlying infrastructure.
 What to look for in a cloud SLA
The cloud SLA should outline each party's responsibilities, acceptable performance parameters, a description of the applications and services covered under the agreement, procedures for monitoring service levels, and a program for remediation of outages. SLAs typically use technical definitions to measure service levels, such as mean time between failures (MTBF) or mean time to repair (MTTR), and specify targets or minimum values for service-level performance.
The defined level of services must be specific and measurable so that they can be benchmarked and, if stipulated by contract, trigger rewards or
penalties accordingly.
Depending on the cloud model you choose, you can control much of the management of IT assets and services or let cloud providers manage it for
you.
A typical compute and cloud SLA expresses the exact levels of service and recourse or compensation that the User is entitled to in case the Provider
fails to provide the Service. Another important area is service availability, which specifies the maximum time a read request can take, how many
retries are allowed, and other factors.
The cloud SLA should also define compensation for users if the specifications are not met. A cloud service provider typically offers a tiered service
credit plan that gives credit to users based on the discrepancy between the SLA specifications and the actual service tiers.
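Such a tiered credit plan can be expressed as a simple lookup. The tier boundaries and credit percentages below are invented for illustration only; every provider defines its own.

def service_credit(measured_uptime_percent):
    # Map measured uptime to a credit percentage (hypothetical tiers).
    if measured_uptime_percent >= 99.95:
        return 0       # SLA met, no credit
    if measured_uptime_percent >= 99.0:
        return 10      # minor breach
    if measured_uptime_percent >= 95.0:
        return 25      # significant breach
    return 100         # severe breach

for uptime in (99.99, 99.5, 94.0):
    print(f"{uptime}% uptime -> {service_credit(uptime)}% service credit")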
 Selecting and monitoring cloud SLA metrics
Most cloud providers publicly provide details of the service levels that users can expect, and these are likely to be the same for all users. However,
an enterprise choosing a cloud service may be able to negotiate a more customized deal. For example, a cloud SLA for a cloud storage service may
include unique specifications for retention policies, the number of copies to maintain, and storage space.
Cloud service-level agreements can be more detailed to cover governance, security specifications, compliance, and performance and uptime
statistics. They should address security and encryption practices for data security and data privacy, disaster recovery expectations, data location,
and data access and portability.
 Verifying cloud service levels
Customers can monitor service metrics such as uptime, performance, security, etc. through the cloud provider's native tooling or portal. Another
option is to use third-party tools to track the performance baseline of cloud services, including how resources are allocated (for example, memory
in a virtual machine or VM) and security.
A cloud SLA must use clear language to define its terms. Such language governs, for example, what counts as inaccessibility of the service and who is responsible - slow or intermittent loading can be attributed to latency in the public Internet, which is beyond the control of the cloud provider. Providers usually exclude downtime due to scheduled maintenance windows, which are usually, but not always, regularly scheduled and recurring.
 Negotiating a cloud SLA
Most common cloud services are simple and universal, with some variations, such as infrastructure (IaaS) options. Be prepared to negotiate for any
customized services or applications delivered through the cloud. There may be more room to negotiate terms in specific custom areas such as data
retention criteria or pricing and compensation/fines. Negotiation power generally varies with the size of the client, but there may be room for more
favorable terms.
When entering into any cloud SLA negotiation, it is important to protect the business by making uptime expectations clear. A good SLA protects both the customer and the supplier from missed expectations. For example, 99.9% uptime ("three nines") is a common guarantee that translates to roughly nine hours of outage per year, while 99.999% ("five nines") means annual downtime of approximately five minutes. Some mission-critical data may require even higher levels of availability, down to fractions of a second of annual downtime. Consider spreading workloads across several regions or zones to help reduce the impact of major outages.
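The "nines" translate directly into a downtime budget, as this short calculation shows.

def downtime_per_year(availability_percent):
    # Allowed downtime, in hours, for a given availability target.
    hours_in_year = 365 * 24
    return hours_in_year * (1 - availability_percent / 100)

for target in (99.9, 99.99, 99.999):
    hours = downtime_per_year(target)
    print(f"{target}% uptime -> {hours:.2f} h/year ({hours * 60:.1f} minutes)")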
Keep in mind that some areas of cloud SLA negotiation amount to buying unnecessary insurance. Use cases that demand the very highest uptime guarantees require additional engineering work and cost, and may be better served by private on-premises infrastructure.
Pay attention to where the data resides with a given cloud provider. Many compliance regulations such as HIPAA (Health Insurance Portability and
Accountability Act) require data to be held in specific areas, along with certain privacy guidelines. The cloud customer owns and is responsible for
this data, so make sure these requirements are built into the SLA and validated by auditing and reporting.
Finally, a cloud SLA should include an exit strategy that outlines the provider's expectations to ensure a smooth transition.
 Scaling a cloud SLA
Most SLAs are negotiated to meet the customer's current needs, but many businesses change dramatically in size over time. A solid cloud service-level agreement defines the intervals at which the contract is reviewed and potentially adjusted to meet the organization's changing needs. Some vendors build in notification workflows that are triggered when a cloud service-level agreement is close to being breached, so that new negotiations can begin based on the change in scale. Such an uptick can mean usage exceeding the agreed availability level or norm and can warrant an upgrade to a new service tier.
"Anything as a service" (XaaS) describes a general category of cloud computing and remote access services. It recognizes the vast number of
products, tools, and technologies now delivered to users as a service over the Internet.
Essentially, any IT function can be a service for enterprise consumption. The service is paid for in a flexible consumption model rather than an
advance purchase or license.
What are the benefits of XaaS?
XaaS has many benefits: improving spending models, speeding up new apps and business processes, and shifting IT resources to high-value
projects.
 Expenditure model improvements. With XaaS, businesses can cut costs by purchasing services from providers on a subscription
basis. Before XaaS and cloud services, businesses had to buy separate products-software, hardware, servers, security,
infrastructure-install them on-site, and then link everything together to form a network. With XaaS, businesses buy what they need
and pay on the go. The previous capital expenditure now becomes an operating expense.
 Speed up new apps and business processes. This model allows businesses to adapt to changing market conditions with new apps or solutions. Using multi-tenant approaches, cloud services provide much-needed flexibility. Resource pooling and rapid elasticity mean that business leaders can simply add or subtract services. When users need innovative resources, a company can adopt new technologies and scale up the infrastructure automatically.
 Transferring IT resources to high-value projects. Increasingly, IT organizations are turning to an XaaS delivery model to streamline operations and free up resources for innovation. They are also harnessing the benefits of XaaS to transform digitally and become more agile. XaaS gives more users access to cutting-edge technology, democratizing innovation. In a recent survey by Deloitte, 71% of companies reported that XaaS now constitutes more than half of their company's enterprise IT.
What are the disadvantages of XaaS?
There are potential drawbacks to XaaS: possible downtime, performance issues, and complexity.
 Possible downtime. The Internet sometimes breaks down, and when this happens, your XaaS offering is unreachable too. With XaaS, there can be issues of Internet reliability, flexibility, provisioning, and management of infrastructure resources. If the XaaS provider's servers go down, users will not be able to use the service, which is why providers typically guarantee service levels through SLAs.
 Performance issues. As XaaS becomes more popular, bandwidth, latency, data storage, and recovery times can be affected. If
too many clients use the same resources, the system may slow down. Apps running in virtualized environments can also be
affected. Integration issues can occur in these complex environments, including the ongoing management and security of multiple
cloud services.
 Complexity effect. Advancing XaaS technology can relieve IT workers of day-to-day operational headaches; however, it can be difficult to troubleshoot if something goes wrong.
Internal IT staff still need to stay updated on new technology. The cost of maintaining a high-performance, robust network can add up, although the overall cost savings of the XaaS model are usually enormous. Nonetheless, some companies want to maintain visibility into their XaaS service provider's environment and infrastructure. Furthermore, an XaaS provider that gets acquired, shuts down a service, or changes its roadmap can profoundly impact XaaS users.
What are some examples of XaaS?
Because XaaS stands for "anything as a service," the list of examples is endless. Many kinds of IT resources or services are now delivered this way.
Broadly speaking, there are three categories of cloud computing models: software as a service (SaaS), platform as a service (PaaS), and
infrastructure as a service (IaaS). Outside these categories, there are other examples such as disaster recovery as a service (DRaaS),
communications as a service (CaaS), network as a service (NaaS), database as a service (DBaaS), storage as a service (STaaS), desktop as a
service (DaaS), and monitoring as a service (MaaS). Other emerging industry examples include marketing as a service and healthcare as a service.
NetApp and XaaS
NetApp provides several XaaS options, including IaaS, IT as a service (ITaaS), STaaS, and PaaS.
 When you differentiate your hosted and managed infrastructure services, you can increase service and platform revenue, improve
customer satisfaction, and turn IaaS into a profit center. You can also take advantage of new opportunities to differentiate and
expand services and platform revenue, including delivering more performance and predictability from your IaaS services. Plus,
NetApp® technology can enable you to offer a competitive advantage to your customers and reduce time to market for deploying
IaaS solutions.
 When your data center is in a private cloud, it takes advantage of cloud features to deliver ITaaS to internal business users. A
private cloud offers characteristics similar to the public cloud but is designed for use by a single organization.
These characteristics include:
 Catalog-based, on-demand service delivery
 Automated scalability and service elasticity
 Multitenancy with shared resource pools
 Metering with utility-style operating expense models
 Software-defined, centrally managed infrastructure
 Self-service lifecycle management of services
STaaS. NetApp facilitates private storage as a service in a pay-as-you-go model by partnering with various vendors, including Arrow Electronics,
HPE ASE, BriteSky, DARZ, DataLink, Faction, Forsythe, Node4, Proact, Solvinity, Synoptek, and 1901 Group. NetApp also seamlessly integrates with
all major cloud service providers including AWS, Google Cloud, IBM Cloud, and Microsoft Azure.
PaaS. NetApp PaaS solutions help simplify a customer's application development cycle. Our storage technologies support PaaS platforms to:
 Reduce application development complexity.
 Provide high-availability infrastructure.
 Support native multitenancy.
 Deliver webscale storage.
PaaS services built on NetApp technology enable your enterprise to adopt hybrid hosting services and accelerate your application-deployment time.
The future market for XaaS
The combination of cloud computing and ubiquitous, high-bandwidth, global internet access provides a fertile environment for XaaS growth.
Some organizations have been hesitant to adopt XaaS because of security, compliance, and business governance concerns. However, service providers increasingly address these concerns, allowing organizations to bring additional workloads into the cloud.
 Resource Pooling: The next resource we will look at that we can pool is storage. In the diagram below, the big blue box represents a storage system with many hard drives, and each of the smaller white squares represents a hard drive.
With centralized storage, I can slice up my storage however I want and give the virtual machines their own small part of that storage for however much space they require. In the example below, I take a slice of the first disk and allocate it as the boot disk for 'Tenant 1, Server 1'. I take another slice of my storage and provision it as the boot disk for 'Tenant 2, Server 1'.
Shared centralized storage makes storage allocation efficient: rather than giving whole disks to different servers, I can give them exactly as much storage as they require (a small allocation sketch follows this list). Further savings can be made through storage efficiency techniques such as thin provisioning, deduplication, and compression.
Check out my Introduction to SAN and NAS Storage course to learn more about centralized storage.
 Network Infrastructure Pooling: The next resource that can be pooled is network infrastructure.
At the top of the diagram below is a physical firewall.
All different tenants will have firewall rules that control what traffic is allowed into their virtual machines, such as RDP for management and HTTP
traffic on port 80 if it is a web server.
We don't need to give each customer their own physical firewall; we can share the same physical firewall between different clients. Load balancers for incoming connections can also be virtualized and shared among multiple clients.
In the main section on the left side of the diagram, you can see several switches and routers. Those switches and routers are shared, with traffic
going through the same device to different clients.
 Service pooling: The cloud provider also provides various services to the customers, as shown on the right side of the diagram, such as Windows Update and Red Hat update servers for operating system patching, DNS, and so on. Keeping DNS as a centralized service saves customers from having to provide their own DNS solution.
 Location Independence: As stated by NIST, the customer generally has no knowledge or control over the exact location of the resources
provided. Nevertheless, they may be able to specify the location at a higher level of abstraction, such as the country, state, or data center
level.
For example, let's use AWS again. When I created a virtual machine, I did it in a Singapore data center because I am located in the Southeast Asia region; I get the lowest network latency and best performance by having my virtual machine there.
With AWS, I know the data center where my virtual machine is located, but not the actual physical server it is running on. It could be anywhere in that particular data center, using any particular storage system and any particular firewall in that data center. Those specifics don't matter to the customer.
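To make the storage-slicing idea above concrete, here is a minimal, purely illustrative Python sketch of a shared pool handing each tenant's server exactly the capacity it asks for; the class, volume names, and sizes are hypothetical, not any provider's API:

    class StoragePool:
        """A shared pool of capacity carved into per-tenant slices."""

        def __init__(self, capacity_gb: int):
            self.capacity_gb = capacity_gb
            self.allocations = {}   # volume name -> size in GB

        def free_gb(self) -> int:
            return self.capacity_gb - sum(self.allocations.values())

        def allocate(self, volume_name: str, size_gb: int) -> None:
            if size_gb > self.free_gb():
                raise ValueError("pool exhausted")
            self.allocations[volume_name] = size_gb

    pool = StoragePool(capacity_gb=1000)
    pool.allocate("tenant1-server1-boot", 50)   # slice for Tenant 1, Server 1
    pool.allocate("tenant2-server1-boot", 80)   # slice for Tenant 2, Server 1
    print(pool.free_gb())                       # 870 GB left for other tenants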
How does resource pooling work?
In this private cloud as a service model, the user can choose the ideal resource segmentation based on their needs. The main things to consider in resource pooling are cost-effectiveness and ensuring that the provider can deliver new services quickly.
Resource pooling is also commonly used in wireless technology such as radio communication, where single channels are joined together to form a strong connection, so that the connection can transmit without interference.
In the cloud, resource pooling is a multi-tenant practice that scales with user demand, which is why services such as SaaS (Software as a Service) are controlled in a centralized manner. As more and more people use a given SaaS service, the charges for the service tend to stay quite low, and at a certain point consuming such technology becomes more accessible than owning it.
In a private cloud, the pool is created and cloud computing resources are mapped to the user's IP address; by accessing that IP address, the user reaches the data and resources delivered by the cloud platform.
Benefits of resource pooling
 High Availability Rate: Resource pooling is a great way to make SaaS products more accessible. Nowadays, the use of such services
has become common. And most of them are far more accessible and reliable than owning one. So, startups and entry-level businesses
can get such technology.
 Balanced load on the server: Load balancing is another benefit that a tenant of resource pooling-based services enjoys. In this, users
do not have to face many challenges regarding server speed.
 Provides High Computing Experience: Multi-tenant technologies are offering excellent performance to the users. Users can easily
and securely hold data or avail such services with high-security benefits. Plus, many pre-built tools and technologies make cloud
computing advanced and easy to use.
 Stored Data Virtually and Physically: A key advantage of resource pool-based services is that users can use the virtual space offered by the host, while the data ultimately resides on the physical hosts provided by the service provider.
 Flexibility for Businesses: Pool-based cloud services are flexible because they can be adapted according to the needs of the business. Plus, users don't have to worry about capitalization or huge investments.
 Physical Host Works When a Virtual Host Goes Down: It is a common technical issue for a virtual host to become slow or unresponsive. In that case, the physical host of the SaaS service provider takes over, so the user or tenant still gets a suitable computing environment without technical challenges.
Disadvantages of resource pooling
 Security: Most service providers offering resource pooling-based services provide high-level security features. Even so, the company's confidential data passes to a third party, the service provider, and due to any flaw that data could be misused. It is therefore not a good idea to rely solely on a third-party service provider for security.
 Non-scalability: This can be another disadvantage of resource pooling for organizations. If they choose the cheapest solution, they may face challenges when scaling up their business in the future, and other elements can hinder the whole process and limit the scale of the business.
 Restricted Access: In private resource pooling, users have restricted access to the database. Only a user with valid credentials can access the company's stored or cloud computing data, since it may contain confidential user details and other important documents. Such a service provider can therefore offer tenant port designation, domain membership, and protocol transition, and can issue separate credentials for the users of each allotted area in cloud computing.
 Conclusion: Resource pooling in cloud computing is the technical phrase used to describe a service provider serving multiple customers with the same IT resources at the same time. These services are scalable and accessible to businesses of all sizes, and when brands use this kind of technology, they can avoid a large capital investment.
Load balancing is the method that allows you to have a proper balance of the amount of work being done on different pieces of device or hardware
equipment. Typically, what happens is that the load of the devices is balanced between different servers or between the CPU and hard drives in a
single cloud server.
Load balancing was introduced for various reasons. One of them is to improve the speed and performance of each single device, and the other is to protect individual devices from hitting their limits and degrading in performance.
Cloud load balancing is defined as dividing workloads and computing resources in cloud computing. It enables enterprises to manage workload demands or application demands by distributing resources among multiple computers, networks, or servers. Cloud load balancing involves managing the movement of workload traffic and demands over the Internet.
Traffic on the Internet is growing rapidly, nearly doubling every year. The workload on servers is therefore increasing just as quickly, leading to overloading of servers, mainly of popular web servers. There are two primary solutions to overcome the problem of server overloading:
 First is a single-server solution in which the server is upgraded to a higher-performance server. However, the new server may also
be overloaded soon, demanding another upgrade. Moreover, the upgrading process is arduous and expensive.
 The second is a multiple-server solution in which a scalable service system on a cluster of servers is built. That's why it is more cost-
effective and more scalable to build a server cluster system for network services.
Cloud-based servers can achieve more precise scalability and availability by using server farm load balancing. Load balancing is beneficial with almost any type of service, such as HTTP, SMTP, DNS, FTP, and POP/IMAP.
It also increases reliability through redundancy. A dedicated hardware device or program provides the balancing service.
Different Types of Load Balancing Algorithms in Cloud Computing:
 Static Algorithm: Static algorithms are built for systems with very little variation in load. The entire traffic is divided equally between
the servers in the static algorithm. This algorithm requires in-depth knowledge of server resources for better performance of the
processor, which is determined at the beginning of the implementation.
However, the decision to shift load does not depend on the current state of the system. One of the major drawbacks of the static load balancing algorithm is that its load-balancing assignments are fixed once the tasks have been created and cannot be moved to other devices at runtime.
 Dynamic Algorithm: The dynamic algorithm first finds the lightest server in the entire network and gives it priority for load balancing.
This requires real-time communication with the network which can help increase the system's traffic. Here, the current state of the
system is used to control the load.
The characteristic of dynamic algorithms is to make load transfer decisions in the current system state. In this system, processes can
move from a highly used machine to an underutilized machine in real time.
 Round Robin Algorithm: As the name suggests, round robin load balancing algorithm uses round-robin method to assign jobs. First, it
randomly selects the first node and assigns tasks to other nodes in a round-robin manner. This is one of the easiest methods of load
balancing.
Processors are assigned to each process circularly, without defining any priority. Round robin gives a fast response when the workload is distributed uniformly among the processes, but in practice processes have different loading times, so some nodes may be heavily loaded while others remain under-utilised.
 Weighted Round Robin Load Balancing Algorithm: The weighted round robin load balancing algorithm was developed to address the most challenging issue of the round robin algorithm. In this algorithm, each server is assigned a weight, and tasks are distributed according to the weight values.
Servers with a higher capacity are given a higher weight and therefore receive more tasks; once the full load level is reached, the servers receive steady traffic (a short sketch of this algorithm follows this list).
 Opportunistic Load Balancing Algorithm: The opportunistic load balancing algorithm allows each node to be busy. It never considers
the current workload of each system. Regardless of the current workload on each node, OLB distributes all unfinished tasks to these
nodes.
Because OLB does not consider the execution time on each node, tasks are processed slowly, and bottlenecks can appear even when some nodes are free.
 Minimum to Minimum (Min-Min) Load Balancing Algorithm: Under the min-min load balancing algorithm, the expected completion time of each pending task is calculated first. The task with the overall minimum completion time is selected and scheduled on the corresponding machine. The completion times of the remaining tasks on that machine are then updated, and the scheduled task is removed from the list. This process continues until the final task has been assigned. This algorithm works best where many small tasks outweigh large tasks.
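As a minimal illustration of the round-robin family described above (a sketch only; the server names and weights are made up), the following Python snippet cycles through servers, repeating higher-weight servers proportionally more often:

    import itertools

    # Hypothetical back-end servers with capacity-based weights.
    servers = {"server-a": 3, "server-b": 2, "server-c": 1}

    def weighted_round_robin(weights):
        """Yield server names in proportion to their weights, forever."""
        expanded = [name for name, weight in weights.items() for _ in range(weight)]
        return itertools.cycle(expanded)

    scheduler = weighted_round_robin(servers)
    for request_id in range(6):
        print(f"request {request_id} -> {next(scheduler)}")
    # server-a handles 3 of every 6 requests, server-b 2, and server-c 1.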
Load balancing solutions can be categorized into two types -
 Software-based load balancers: Software-based load balancers run on standard hardware (desktop, PC) and standard operating
systems.
 Hardware-based load balancers: Hardware-based load balancers are dedicated boxes that contain application-specific integrated circuits (ASICs) optimized for a particular use. ASICs allow network traffic to be forwarded at high speeds and are often used for transport-level load balancing because hardware-based load balancing is faster than a software solution.
Major Examples of Load Balancers -
 Direct Routing Request Dispatch Technique: This method of request dispatch is similar to the one implemented in IBM's NetDispatcher. A real server and the load balancer share a virtual IP address. The load balancer takes an interface built with the virtual IP address that accepts request packets and routes the packets directly to the selected server.
 Dispatcher-Based Load Balancing Cluster: A dispatcher performs smart load balancing using server availability, workload, capacity, and other user-defined parameters to regulate where TCP/IP requests are sent. The dispatcher module of a load balancer can split HTTP requests among different nodes in a cluster. The dispatcher divides the load among multiple servers in the cluster, so services from different nodes act like a single virtual service on one IP address; consumers interact with the cluster as if it were a single server, without knowledge of the back-end infrastructure.
 Linux Virtual Load Balancer: This is an open-source, enhanced load balancing solution used to build highly scalable and highly available network services such as HTTP, POP3, FTP, SMTP, media streaming and caching, and Voice over Internet Protocol (VoIP). It is a simple and powerful product designed for load balancing and fail-over. The load balancer itself is the primary entry point to the server cluster system. It can execute IP Virtual Server (IPVS), which implements transport-layer load balancing in the Linux kernel, also known as Layer-4 switching.
Types of Load Balancing
You will need to understand the different types of load balancing for your network. Server load balancing distributes load for individual applications such as relational databases, global server load balancing spreads traffic across different geographic locations, and DNS load balancing ensures domain name functionality. Load balancing can also be provided by cloud-based balancers.
 Network Load Balancing: Cloud load balancing uses network-layer information to decide where network traffic should be sent. This is accomplished through Layer 4 load balancing, which handles TCP/UDP traffic. It is the fastest load balancing solution, but it cannot take application-level content into account when distributing traffic across servers.
 HTTP(S) load balancing: HTTP(S) load balancing is the oldest type of load balancing, and it relies on Layer 7, which means it operates at the application layer. It is the most flexible type of load balancing because it lets you make delivery decisions based on information retrieved from HTTP addresses (see the sketch after this list).
 Internal Load Balancing: It is very similar to network load balancing, but is leveraged to balance infrastructure internally.
Load balancers can be further divided into hardware, software, and virtual load balancers.
 Hardware Load Balancer: It relies on dedicated physical hardware to distribute network and application traffic. Such devices can handle a large traffic volume, but they come with a hefty price tag and have limited flexibility.
 Software Load Balancer: It can be an open source or commercial form and must be installed before it can be used. These are more
economical than hardware solutions.
 Virtual Load Balancer: It differs from a software load balancer in that it deploys the software of a hardware load-balancing device on a virtual machine.
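To illustrate the Layer-7 idea mentioned above (a toy sketch; the URL prefixes and back-end pools are hypothetical), a content-aware balancer can pick a back-end pool from the HTTP request path and then apply round robin within that pool:

    import itertools

    # Hypothetical back-end pools keyed by URL path prefix (Layer-7 information).
    pools = {
        "/api":    itertools.cycle(["api-1", "api-2"]),
        "/static": itertools.cycle(["cdn-1"]),
        "/":       itertools.cycle(["web-1", "web-2", "web-3"]),
    }

    def route(path: str) -> str:
        """Choose a pool based on the request path, then round-robin inside it."""
        for prefix in ("/api", "/static"):
            if path.startswith(prefix):
                return next(pools[prefix])
        return next(pools["/"])

    print(route("/api/orders"))   # api-1
    print(route("/index.html"))   # web-1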
Why Load Balancer is Important in Cloud Computing?
Here are some of the reasons load balancing is important in cloud computing.
 Offers better performance: The technology of load balancing is less expensive and also easy to implement. This allows companies to
work on client applications much faster and deliver better results at a lower cost.
 Helps Maintain Website Traffic: Cloud load balancing can provide scalability to control website traffic. By using effective load
balancers, it is possible to manage high-end traffic, which is achieved using network equipment and servers. E-commerce companies that
need to deal with multiple visitors every second use cloud load balancing to manage and distribute workloads.
 Can Handle Sudden Bursts in Traffic: Load balancers can handle sudden bursts of traffic received all at once. For example, when university results are published, a website may go down because of too many requests. With a load balancer, there is no need to worry about the traffic surge: whatever the size of the traffic, load balancers divide the entire load of the website equally across different servers and deliver maximum results in minimum response time.
 Greater Flexibility: The main reason for using a load balancer is to protect the website from sudden crashes. When the workload is
distributed among different network servers or units, if a single node fails, the load is transferred to another node. It offers flexibility,
scalability and the ability to handle traffic better.
Because of these characteristics, load balancers are beneficial in cloud environments. This is to avoid heavy workload on a single server.
Desktop as a Service (DaaS) is a cloud computing offering where a service provider distributes virtual desktops to end-users over the Internet,
licensed with a per-user subscription.
The provider takes care of backend management for small businesses that find their virtual desktop infrastructure to be too expensive or resource-
consuming. This management usually includes maintenance, backup, updates, and data storage. Cloud service providers can also handle security
and applications for the desktop, or users can manage these service aspects individually.
There are two types of desktops available in DaaS - persistent and non-persistent.
 Persistent Desktop: Users can customize and save a desktop so that it looks the same each time a particular user logs on. Persistent desktops require more storage than non-persistent desktops, which makes them more expensive.
 Non-persistent desktop: The desktop is wiped whenever the user logs out; it is simply a way to access shared cloud services. Cloud providers can let customers choose both options, giving workers with specific needs a persistent desktop while providing access to temporary or occasional workers through non-persistent desktops.
Benefits of Desktop as a Service (DaaS)
Desktop as a Service (DaaS) offers some clear advantages over the traditional desktop model. With DaaS, it is faster and less expensive to deploy
or deactivate active end users.
 Rapid deployment and decommissioning of active end-users: The desktop is already configured; it only needs to be connected to a new device. DaaS can save a lot of time and money for seasonal businesses that experience frequent spikes and dips in demand or headcount.
 Reduced Downtime for IT Support: Desktop as a Service allows companies to provide remote IT support to their employees,
reducing downtime.
 Cost savings: Because DaaS devices require much less computing power than a traditional desktop machine or laptop, they are less expensive and use less power.
 Increased device flexibility: DaaS runs on various operating systems and device types, supporting the tendency of users to bring their own devices into the office and shifting the burden of supporting desktops on those devices to the cloud service provider.
 Enhanced Security: The security risks are significantly lower as the data is stored in the data center with DaaS. If a laptop or mobile
device is stolen, it can be disconnected from service. Since no data remains on that stolen device, the risk of a thief accessing sensitive
data is minimal. Security patches and updates are also easier to install in a DaaS environment as all desktops can be updated
simultaneously from a remote location.
How does Desktop as a Service (DaaS) work?
With Desktop as a Service (DaaS), the cloud service provider hosts the infrastructure, network resources, and storage in the cloud and streams the
virtual desktop to the user's device. The user can access the desktop's data and applications through a web browser or other software.
Organizations can purchase as many virtual desktops as they want through the subscription model.
Because desktop applications stream from a centralized server over the Internet, graphics-intensive applications have historically been difficult to
use with DaaS.
New technology has changed this, and applications such as Computer-Aided Design (CAD) that require a lot of computer power to display quickly
can now easily run on DaaS.
When the workload on a server becomes too high, IT administrators can move a running virtual machine from one physical server to another in seconds, allowing graphics-accelerated or GPU-accelerated applications to keep running seamlessly.
GPU-accelerated Desktop as a Service (GPU-DaaS) has implications for any industry that requires 3D modeling, high-end graphics, simulation, or
video production. The engineering and design, broadcast, and architecture industries can benefit from this technology.
What are the use cases for DaaS?
Organizations can leverage DaaS to address various use cases and scenarios such as:
 Users with multiple endpoints: A user can access multiple virtual desktops on a single PC instead of switching between multiple
devices or multiple OSes. Some roles, such as software development, may require the user to work from multiple devices.
 Contract or seasonal workers: DaaS can help you provision virtual desktops within minutes for seasonal or contract workers.
You can also quickly close such desktops when the employee leaves the organization.
 Mobile and remote workers: DaaS provides secure access to corporate resources anywhere, anytime, on any device. Mobile and remote employees can take advantage of these features to increase productivity in the organization.
 Mergers and acquisitions: DaaS simplifies the provisioning and deployment of new desktops to new employees, allowing IT administrators to quickly integrate the entire organization's network following a merger or acquisition.
 Educational institutions: IT administrators can provide each teacher or student with an individual virtual desktop with the
necessary privileges. When such users leave the organization, their desktops become inactive with just a few clicks.
 Healthcare professionals: Privacy is a major concern in many healthcare settings. DaaS gives each healthcare professional an individual virtual desktop, allowing access only to relevant patient information. With DaaS, IT administrators can easily customize desktop permissions and rules based on the user.
How is DaaS different from VDI?
Both DaaS and VDI offer a similar result: bringing virtual applications and desktops from a centralized data center to users' endpoints. However,
these offerings differ in setup, architecture, controls, cost impact, and agility, as summarized below:
Setup
 DaaS: The cloud provider hosts all of the organization's IT infrastructure, including compute, networking, and storage. The provider handles all hardware monitoring, availability, troubleshooting, and upgrade issues. It also manages the VMs that run the OS, and some providers also offer technical support.
 VDI: With VDI, you manage all IT resources yourself, either on-premises or in a colocation facility. This includes the servers, networking, storage, licenses, endpoints, and so on.
Architecture
 DaaS: Most DaaS offerings take advantage of a multi-tenancy architecture. Under this model, a single instance of an application, hosted by a server or data center, serves multiple "tenants" or customers. The DaaS provider separates each customer's services and provisions them dynamically. With a multi-tenant architecture, the resource consumption or security of other clients may affect you if services are compromised.
 VDI: Most VDI offerings are single-tenant solutions where customers operate in a completely dedicated environment. Leveraging the single-tenant architecture in VDI allows IT administrators to gain complete control over IT resource distribution and configuration. You also don't have to worry about overuse of resources by any other organization causing service disruption.
Control
 DaaS: The cloud vendor controls all of its IT infrastructure, including monitoring, configuration, and storage, and you may not have complete visibility into these aspects. Internet connectivity is required to access the DaaS control plane, making it more vulnerable to breaches and cyber attacks.
 VDI: With a VDI deployment, the organization has complete control over its IT resources. Since most VDI solutions leverage a single-tenant architecture, IT administrators can ensure that only permitted users access virtual desktops and applications.
Cost
 DaaS: There is almost no upfront cost with DaaS offerings as they are subscription-based. The pay-as-you-go pricing structure allows companies to scale their operations dynamically and pay only for the resources consumed. DaaS offerings can be cheaper for small to medium-sized businesses (SMBs) with fluctuating needs.
 VDI: VDI requires real capital expenditure (CapEx) to purchase or upgrade servers. It is suitable for enterprise-level organizations that have projected growth and resource requirements.
Agility
 DaaS: DaaS deployments provide excellent flexibility. For example, you can provision virtual desktops and applications immediately, accommodate temporary or seasonal employees, and scale resources back down easily. With DaaS solutions, you can also adopt new technological trends such as the latest GPUs, CPUs, or software innovations.
 VDI: VDI requires considerable effort to set up, build, and maintain complex infrastructure. For example, adding new features can take days or even weeks. Budget can also limit the organization if it wants to buy new hardware to handle scalability.
How to Choose a DaaS Provider:
There are multiple DaaS providers to choose from, including major vendors such as Azure and managed service providers (MSPs). Because of the
many options, selecting the appropriate provider can be a challenge.
 An appropriate DaaS solution meets all the organization's users' requirements, including GPU-intensive applications. Here are some tips to help you choose the right vendor:
 If you implement a DaaS solution for an organization with hundreds or thousands of users, make sure it is scalable. A scalable DaaS offering allows you to onboard and offboard new users easily.
 A great DaaS provider allows you to provision resources based on current workload demands. You don't want to overpay when
workload demands vary depending on the day or time of day.
 Data center location. Choosing a DaaS provider whose data center is close to your employees results in an optimized network with low latency. On the other hand, a poor location can lead to unstable connections and efficiency challenges.
 Security and compliance. If you are in an industry that must comply with prevailing laws and regulations, choose a DaaS
provider that meets all security and compliance requirements.
 An intuitive and easy-to-use DaaS solution allows employees to get work done. It also frees you from many IT administration
responsibilities related to OS and application management.
 Like all cloud-based services, DaaS migrates CapEx to an operating expense (OpEx) consumption model. However, not all DaaS
providers are created equal when comparing services versus price. Therefore, you should compare the cost with the value of
different DaaS providers to get the best service.
Top Providers of DaaS in Cloud Computing:
Working with DaaS providers is the best option for most organizations, as it provides access to managed services and support. Below are some of the largest DaaS providers currently available.
 Amazon WorkSpaces: Amazon WorkSpaces is an AWS desktop as a service product that you can use to access a Linux or Windows desktop. When using this service, you can choose from various software and hardware configurations and multiple billing types, and you can use WorkSpaces in multiple AWS regions.
WorkSpaces operates on a server-based model. You choose from predefined OS, storage, and resource bundles when using the service. The bundle you choose determines the maximum performance you can expect and your costs.
For example, in one of the standard bundles, you can use Windows 7 or 10, two CPUs, 4GB of memory, and 100GB of storage for $44 per month.
WorkSpaces also supports bringing in existing Windows licenses and applications.
With this option, you can import your existing Windows VM images and run those images on dedicated hardware. The caveat to bringing your own license is that it is only available for Windows 7 SP1 and select Windows 10 editions. Additionally, you will need to purchase at least 200 desktops.
Learn more about the AWS DaaS offering in our guide.
 VMware Horizon Cloud: VMware Horizon Cloud is a DaaS offering available as a server- or client-based option. These services are provided from a VMware-hosted control plane that enables you to manage and deploy your desktops and applications centrally.
With Horizon Cloud, you can access fully managed desktops in three configurations:
 Session desktops are ephemeral desktops in which multiple users share resources on a single server.
 Dedicated desktops are persistent desktop resources provided to a single user. This option uses a client-based model.
 Floating desktops are non-persistent desktops assigned from a shared pool rather than dedicated to a single user. These desktops can still give users a consistent experience through Horizon Cloud features such as the User Environment Manager, which enables administrators to maintain settings and user data. This option uses a client-based model.
Challenges of data as a service:
While data as a service (DaaS) offers many benefits, it also poses particular challenges.
 Unique security considerations: Because DaaS requires organizations to move data to cloud infrastructure and to transfer data
over a network, it can pose a security risk that would not exist if the data was persisted on local, behind-the-firewall infrastructure.
These challenges can be mitigated by using encryption for data in transit.
 Additional compliance steps: For some organizations, compliance challenges can arise when sensitive data is moved to a cloud
environment. It does not mean that data cannot be integrated or managed in the cloud, but companies subject to special data
compliance requirements should meet those requirements with their DaaS solutions. For example, they may need to host their DaS
on cloud servers in a specific country to remain compliant.
 Potentially Limited Capabilities: In some cases, DaaS platforms may limit the range of tools available to work with the data. Users can only work with tools that are hosted on or compatible with their DaaS platform, instead of being able to use any tool of their choice to set up their data-processing solution. Choosing a DaaS solution that offers maximum flexibility in tool selection mitigates this challenge.
 Data transfer timing: Due to network bandwidth limitations, it may take time to transfer large amounts of data to the DaaS
platform. Depending on how often your organization needs to move data across the DaaS platform, this may or may not be a
serious challenge.
Data compression and edge computing strategies can help accelerate transfer speeds.
Successful DaaS Adoption:
DaaS solutions have been slow to catch on compared to SaaS and other traditional cloud-based services. However, as DaaS matures
and the cloud becomes central to modern business operations, many organizations successfully leverage DaaS.
 PointBet uses DaaS to scale quickly while remaining compliant: PointBet uses cloud-based data solutions to manage its unique compliance and scaling requirements. The company can easily adjust its operations to meet the fluctuating demand for online gaming and ensure that it operates within local and international government regulations.
 DMD Marketing accelerates data operations with DaaS: DMD Marketing Corp. has adopted a cloud-first approach to data
management to give its users faster access to their data and, by extension, reduce data processing time. The company can refresh
data faster thanks to cloud-based data management, giving them an edge over competitors.
How to get started with Data as a Service
Although getting started with DaaS may seem intimidating, as DaaS is still a relatively new solution, the process is simple.
This is particularly simple because DaaS eliminates most of the setup and preparation work of building an on-premises data processing solution.
And because of the simplicity of deploying a DaaS solution and the availability of technical support services from DaaS providers, your company
does not need to have specialized personnel for this process.
The main steps to get started with DaaS include:
 Choose a DaaS Solution: Factors to consider when selecting a DaaS offering include price, scalability, reliability, flexibility, and
how easy it is to integrate DaaS with existing workflows and ingest data.
 Migrate data to a DaaS solution. Depending on how much data you need to migrate and the network connection speed between
your local infrastructure and your DaaS, data migration may or may not require a lot of time.
 Start leveraging the DaaS platform to deliver faster, more reliable data integration and insights.
There has been much discussion about whether cloud computing replaces data centers, costly computer hardware, and software upgrades. Some experts say that although cloud technology is changing how establishments use IT processes, the cloud cannot be seen as a replacement for the data center. However, the industry agrees that cloud services are central to both consumer and business applications.
According to data provided by Cisco, cloud data center traffic will account for 95 percent of total data center traffic by 2021. This has resulted in very large-scale data centers, which are essentially large public cloud data centers.
Cloud computing is streamlining the operations of today's workplaces. Its three main components are Software as a Service (SaaS), Platform as a
Service (PaaS), and Infrastructure as a Service (IaaS). Cloud services provide the convenience of not having to worry about issues like increasing
the device's storage capacity. Similarly, cloud computing also ensures no loss of data as it comes with backup and recovery features.
Edge Computing vs. Cloud Computing: Is Edge Better?
With the increasing demand for real-time applications, the adoption of edge computing has increased significantly. Today's technology expects low
latency and high speeds to deliver a better customer experience. Although centralized cloud computing systems provide ease of collaboration and
access, they are far from the data sources. Data must therefore be transmitted to them, which delays the processing of information because of network latency. Thus, cloud computing alone cannot serve every need.
Although the cloud has its own benefits, edge computing offers additional advantages in comparison:
 Speed: In a pure cloud model, edge devices play only a limited role: they collect and send raw information to the cloud and receive processed results back. All of the raw information processed in the cloud has to pass through the edge devices that collect and send it, so this exchange is only workable for applications that can tolerate some delay.
Edge computing, by contrast, provides better speed with lower latency, allowing the input data to be interpreted closer to the source. This gives more scope to applications that require real-time services.
 Lower connectivity cost and better security: Instead of filtering data at a central data center, edge computing allows organizations to filter data at the source (a small sketch of edge-side filtering follows this list). This results in less transfer of companies' sensitive information between devices and the cloud, which is better for the security of organizations and their customers. Minimizing data movement also reduces costs by cutting storage and bandwidth requirements.
 Better data management: According to statistics, connected devices will reach about 20 billion by 2020. Edge computing takes an
approach where it deals with certain systems with special needs, freeing up cloud computing to serve as a general-purpose platform. For
example, the best route to a destination via a car's GPS would come from analyzing the surrounding areas rather than from the car
manufacturer's data centers, far from the GPS. This results in less reliance on the cloud and helps applications perform better.
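As a toy illustration of edge-side filtering (a sketch only; the endpoint URL and the summary fields are hypothetical), an edge gateway might aggregate raw sensor readings locally and forward only a compact summary to the cloud:

    import statistics
    import requests   # assumes the third-party 'requests' package is installed

    CLOUD_ENDPOINT = "https://cloud.example.com/ingest"   # hypothetical ingest API

    def summarize_at_edge(raw_readings):
        """Reduce many raw readings to a small summary before it leaves the site."""
        return {
            "count": len(raw_readings),
            "mean": statistics.mean(raw_readings),
            "max": max(raw_readings),
        }

    raw_readings = [21.3, 21.4, 35.9, 21.2, 21.5]   # e.g. one minute of sensor samples
    summary = summarize_at_edge(raw_readings)
    # Only the summary crosses the network, not every raw sample.
    requests.post(CLOUD_ENDPOINT, json=summary, timeout=5)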
The Internet of Things connects all nearby smart devices to the network. These devices use sensors and actuators to communicate with each
other. Sensors sense surrounding movements while actuators respond to sensory activities. The devices can be a smartphone, smart washing
machine, smartwatch, smart TV, smart car, etc.
Consider a smart shoe that is connected to the Internet. It can collect data on the number of steps its wearer runs. A smartphone can connect to the Internet and view this data, analyze it, and provide the user with the number of calories burned and other fitness advice.
Another example is a smart traffic camera that can monitor congestion and accidents. It sends data to the gateway. This gateway receives data
from that camera as well as other similar cameras. All these connected devices form an intelligent traffic management system. It shares,
analyzes, and stores data on the cloud.
When an accident occurs, the system analyzes the impact and sends instructions to guide drivers to avoid the accident.
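A minimal sketch of the sensor/actuator flow described above (entirely illustrative; the class names and threshold are made up) might look like this, with a gateway applying a simple rule and triggering an actuator:

    class CongestionSensor:
        """A pretend traffic camera that reports vehicles counted per minute."""
        def read(self) -> int:
            return 87   # a fixed sample value, for illustration only

    class SignActuator:
        """A pretend roadside sign that can display a message to drivers."""
        def display(self, message: str) -> None:
            print(f"SIGN: {message}")

    def gateway_rule(sensor: CongestionSensor, actuator: SignActuator, threshold: int = 60):
        # The gateway senses, applies a rule, and responds via the actuator;
        # in a real system the reading would also be stored and analyzed in the cloud.
        if sensor.read() > threshold:
            actuator.display("Congestion ahead - use alternate route")

    gateway_rule(CongestionSensor(), SignActuator())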
Overall, the Internet of Things is an emerging technology, and it will grow rapidly in the future. Similarly, there are many examples in healthcare,
manufacturing, energy production, agriculture, etc. One drawback is that there can be security and privacy issues as the devices capture data
throughout the day.
Which is better IoT or cloud computing?
Over the years, IoT and cloud computing have contributed to implementing many application scenarios such as smart transportation, cities and
communities, homes, the environment, and healthcare.
Both technologies work to increase efficiency in our everyday tasks. Cloud computing collects data from IoT sensors and processes it accordingly. Although the two are very different paradigms, they are not contradictory technologies; they complement each other.
Difference between the Internet of things and cloud computing:
Meaning of Internet of things and cloud computing
IoT is a network of interconnected devices, machines, vehicles, and other 'things' that can be embedded with sensors, electronics, and software
that allows them to collect and interchange data. IoT is a system of interconnected things with unique identifiers and can exchange data over a
network with little or no human interaction.
Cloud computing allows individuals and businesses to access on-demand computing resources and applications.
Internet of Things and Cloud Computing:
The main objective of IoT is to create an ecosystem of interconnected things and give them the ability to sense, touch, control, and communicate
with others. The idea is to connect everything and everyone and help us live and work better. IoT provides businesses with real-time insights into
everything from everyday operations to the performance of machines and logistics and supply chains.
On the other hand, cloud computing helps us make the most of all the data generated by IoT, allowing us to connect with our business from
anywhere, whenever we want.
Applications of Internet of Things and Cloud Computing
IoT's most important and common applications are smartwatches, fitness trackers, smartphones, smart home appliances, smart cities, automated
transportation, smart surveillance, virtual assistants, driverless cars, thermostats, implants, lights, and more. Real-world examples of cloud
computing include antivirus applications, online data storage, data analysis, email applications, digital video software, online meeting applications,
etc.
Internet of Things vs. Cloud Computing:
 IoT is a network of interconnected devices that are capable of exchanging data over a network, whereas cloud computing is the on-demand delivery of IT resources and applications via the internet.
 The main purpose of IoT is to create an ecosystem of interconnected things and give them the ability to sense, touch, control, and communicate, whereas the purpose of cloud computing is to allow virtual access to large amounts of computing power while offering a single system view.
 The role of IoT is to generate massive amounts of data, whereas cloud computing provides a way to store that data and offers tools to create IoT applications.
Summary:
While IoT and cloud computing are two different technologies that aim to make our daily lives easier, they are not contradictory technologies; they complement each other. The two work in collaboration to increase efficiency in our daily tasks.
The basic concept of IoT is connectivity, in which physical objects or things are connected to the web - from fitness trackers to smart cars and
smart home devices. The idea is to connect everything to the Internet and control them from the Internet. Cloud computing helps to manage the
IoT infrastructure.
The Internet is the worldwide connectivity of hundreds of thousands of computers belonging to many different networks.
A web service is a standardized method for propagating messages between client and server applications on the World Wide Web. A web service is
a software module that aims to accomplish a specific set of tasks. Web services can be found and implemented over a network in cloud computing.
The web service would be able to provide the functionality to the client that invoked the web service.
A web service is a set of open protocols and standards that allow data exchange between different applications or systems. Web services can be used by software programs written in different programming languages and running on different platforms to exchange data over computer networks such as the Internet, in much the same way that inter-process communication works on a single computer.
Any software, application, or cloud technology that uses a standardized web protocol (HTTP or HTTPS) to connect, interoperate, and exchange data messages over the Internet, usually in XML (Extensible Markup Language), is considered a web service.
Web services allow programs developed in different languages to communicate between a client and a server by exchanging data over a web service. A client invokes a web service by submitting an XML request, to which the service responds with an XML response.
 Web services functions
 A web service is accessible over the Internet or an intranet network.
 It uses a standardized XML messaging protocol.
 It is independent of any particular operating system or programming language.
 It is self-describing through the use of the XML standard.
 It can be discovered through a simple location method.
Web Service Components
XML and HTTP form the most fundamental web service platform. All typical web services use the following components:
 SOAP (Simple Object Access Protocol):
SOAP stands for "Simple Object Access Protocol". It is a transport-independent messaging protocol. SOAP is based on sending XML data in the form of SOAP messages, and an XML document is attached to each message.
Only the structure of the XML document, not its content, follows a pattern. The great thing about web services and SOAP is that everything is sent over HTTP, the standard web protocol.
Every SOAP document requires a root element known as the Envelope element; in an XML document, the root element is the first element. The envelope is divided into two parts: the header comes first, followed by the body. The header contains routing data, that is, the information that tells the XML document which client it should be sent to. The actual message is carried in the body (a short sketch of such a message follows).
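For illustration only, here is a minimal Python sketch that builds a SOAP envelope with a header and a body and posts it over HTTP; the service URL, SOAPAction value, and operation name are hypothetical, and the third-party requests package is assumed to be installed:

    import requests

    # A minimal SOAP 1.1 envelope: routing/metadata go in the Header, the message in the Body.
    envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header/>
      <soap:Body>
        <GetPrice xmlns="http://example.com/catalog">
          <ItemName>Apples</ItemName>
        </GetPrice>
      </soap:Body>
    </soap:Envelope>"""

    response = requests.post(
        "https://example.com/catalog-service",   # hypothetical endpoint
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/catalog/GetPrice"},
        timeout=10,
    )
    print(response.status_code)
    print(response.text)   # the service replies with a SOAP envelope containing the result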
 UDDI (Universal Description, Discovery, and Integration):
UDDI is a standard for describing, publishing, and discovering online service providers. It provides a specification that helps in hosting data through web services. UDDI provides a repository where WSDL files can be hosted, so that a client application can search the WSDL file to learn about the various actions a web service provides. As a result, the client application has full access to UDDI, which acts as the database for all WSDL files.
The UDDI registry keeps the information needed to find online services, much as a telephone directory contains the name, address, and phone number of a particular person, so that client applications can find where a service is located.
 WSDL (Web Services Description Language):
The client implementing the web service must be aware of the location of the web service. If a web service cannot be found, it cannot be used.
Second, the client application must understand what the web service does in order to invoke the correct operation. This is accomplished with WSDL, the Web Services Description Language. A WSDL file is another XML-based file that describes the web service to the client application. Using the WSDL document, the client application understands where the web service is located and how to access it (a small sketch of consuming a WSDL follows).
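As a hedged sketch of how a client might consume a WSDL, the third-party Python library zeep can read a WSDL document and expose the operations it describes as callable methods; the WSDL URL and operation name below are hypothetical:

    from zeep import Client   # third-party SOAP client; install with: pip install zeep

    # The client downloads and parses the WSDL, learning the service's operations and types.
    client = Client("https://example.com/catalog-service?wsdl")   # hypothetical WSDL location

    # Call an operation described in the WSDL; zeep builds and sends the SOAP envelope for us.
    price = client.service.GetPrice(ItemName="Apples")
    print(price)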
How does web service work?
The diagram shows a simplified version of how a web service would function. The client will use requests to send a sequence of web service calls to
the server hosting the actual web service.
Remote procedure calls are used to perform these requests. The calls to the methods hosted by the respective web service are known as Remote Procedure Calls (RPC). Example: Flipkart provides a web service that displays the prices of items offered on Flipkart.com. The front end or presentation layer can be written in .NET or Java, but the web service can be communicated with from either programming language.
XML, the format of the data exchanged between the client and the server, is the most important part of web service design. XML (Extensible Markup Language) is a simple, intermediate language understood by various programming languages; it is a counterpart of HTML.
As a result, when programs communicate with each other, they use XML. It forms a common platform for applications written in different
programming languages to communicate with each other.
Web services employ SOAP (Simple Object Access Protocol) to transmit XML data between applications. The data is sent using standard HTTP. A
SOAP message is data sent from a web service to an application. An XML document is all that is contained in a SOAP message. The client
application that calls the web service can be built in any programming language as the content is written in XML.
Features of Web Service
Web services have the following characteristics:
 XML-based: A web service's information representation and record transport layers employ XML, so there is no need for networking, operating system, or platform bindings. Web service-based applications are highly interoperable at their core.
 Loosely Coupled: A consumer of a web service is not tied directly to that service provider. The web service interface can change over time without affecting the client's ability to interact with the service provider. A tightly coupled system, by contrast, means that the client and server logic are inextricably linked, so if one interface changes, the other must be updated.
A loosely coupled architecture makes software systems more manageable and easier to integrate between different systems.
 Ability to be synchronous or asynchronous: Synchronicity refers to how the client is bound to the execution of the service. In synchronous invocation, the client is blocked and must wait for the service to complete its operation before continuing. Asynchronous operations allow the client to initiate a task and continue with other work, retrieving the result later, whereas synchronous clients get their result immediately when the service completes. Asynchronous capability is a key factor in enabling loosely coupled systems.
 Coarse-Grained: Object-oriented systems, such as Java, expose their services through individual methods. At the enterprise level, an individual method is too fine-grained an operation to be useful on its own. Building a Java application from the ground up requires developing several fine-grained methods that are then composed into a coarse-grained service consumed by the client or another service.
Businesses, and the interfaces they expose, should be coarse-grained. Web services provide an easy way to define coarse-grained services that have access to substantial business logic.
 Supports remote procedure calls: Consumers can use XML-based protocols to call procedures, functions, and methods on remote objects via web services. A web service must support the input and output framework of the remote system.
Over the years, Enterprise JavaBeans (EJBs) and .NET components have become more prevalent in architectural and enterprise deployments, and both are allocated and accessed through a variety of RPC techniques.
A web service can support RPC by providing services of its own, similar to a traditional component, or by translating incoming invocations into an invocation of an EJB or .NET component.
 Supports document exchange: One of the most attractive features of XML is its generic way of representing not only data but also complex documents, so web services support the transparent exchange of whole documents as well as simple messages.
A container is a unit of software that packages application code together with its libraries and dependencies so that it can run anywhere, whether on a desktop, in traditional IT, or in the cloud.
To do this, containers take advantage of a form of operating system (OS) virtualization in which features of the OS (namespaces and control groups in the Linux kernel) are used to partition CPU, memory, and disk access.
A Container as a Service (CaaS) is a cloud service model that allows users to upload, organize, start, stop, scale, and otherwise manage containers, applications, and clusters. It enables these processes through container-based virtualization, an application programming interface (API), or a web portal interface. CaaS helps users build rich, secure, containerized applications in local or cloud data centers. With this model, containers and clusters are consumed as a service and can be deployed on-premises or in cloud data centers.
CaaS assists development teams in deploying and managing systems efficiently while providing more control of container orchestration than is
permitted by PaaS.
Containers as a service (CaaS) is the part of cloud services in which the service provider empowers customers to manage and deploy containerized applications and clusters. CaaS is sometimes regarded as a special form of the infrastructure-as-a-service (IaaS) model of cloud service delivery, but one where the fundamental assets are containers rather than virtual machines and physical hardware.
Advantages of Container as a Service (CaaS):
 Containers and CaaS make it easy to deploy and design distributed applications or build microservices.
 A collection of containers can handle different responsibilities or different coding environments during development.
 Network protocol relationships between containers can be defined, and traffic forwarding policies can be enforced.
 CaaS promises that these defined and dedicated container structures can be deployed to the cloud quickly.
 For example, consider a software program designed with a microservice architecture, in which the services are organized by
business domain. The service domains could be payment, authentication, and shopping cart.
 Using CaaS, these application containers can be sent to a live system instantly.
 Once the application is deployed to the CaaS platform, its performance can be tracked with log aggregation and monitoring tools.
 CaaS also includes built-in auto-scaling and orchestration management.
 It enables teams to quickly build distributed systems with high visibility and high availability.
 Furthermore, CaaS enhances team velocity by enabling rapid deployment.
 Containers streamline deployment, and CaaS can reduce operational engineering costs by reducing the DevOps resources
required to manage deployments.
Disadvantages of Container as a Service (CaaS): Extracting business data from the cloud carries risk, and depending on the provider,
there are limits to the technology available.
Security issues:
 Containers are considered safer than their traditional counterparts but still carry some risks.
 Although they are platform agnostic, containers share the same kernel as the host operating system.
 This puts containers at risk if the shared kernel itself is targeted.
 As containers are deployed in the cloud via CaaS, the risk increases further.
Performance Limits:
 Containers are virtualized environments and do not run directly on bare metal.
 The extra layer between the application containers and the bare metal means some performance is lost.
 Combine this with the network overhead of the container's connection to the host, and the result can be a significant performance loss.
 Therefore, businesses face some loss of container performance even when high-quality hardware is available.
 For this reason, it is sometimes preferable to use bare-metal systems to test an application's full potential.
How does CaaS Work?
Container as a Service is essentially a cloud-hosted container environment that users employ to upload, build, manage, and deploy container-based
applications on cloud platforms. Connections to the cloud-based environment can be made through a graphical user interface (GUI) or API calls.
At the heart of any CaaS platform is an orchestration tool that enables the management of complex container structures. Orchestration tools
coordinate the active containers and enable automated operations. The orchestrator used in a CaaS framework directly affects the services
available to users, as the API-driven sketch below illustrates.
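The sketch below is a hedged illustration of that idea using the Docker SDK for Python against a single local Docker daemon (install with pip install docker); a real CaaS platform exposes its own, richer API, and the image name, port mapping, and container name here are assumptions for the example only.

# A minimal sketch of driving the container lifecycle through an API call,
# in the spirit of what a CaaS portal or API does at much larger scale.
import docker

client = docker.from_env()  # connect to the local Docker daemon

container = client.containers.run(
    "nginx:alpine",          # illustrative container image
    detach=True,             # return immediately; the container keeps running
    ports={"80/tcp": 8080},  # map container port 80 to host port 8080
    name="caas-demo",        # illustrative name
)

print(container.status)      # inspect the container's state
container.stop()             # stop the workload
container.remove()           # clean up the stopped container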
What is a Container in CaaS?
Virtualization has been one of the most important paradigms in computing and software development over the past decade, leading to increased
resource utilization and reduced time-to-value for development teams while reducing the duplication required to deliver services. The ability to
deploy applications in virtualized environments means that development teams can more easily replicate the conditions of a production
environment and operate more targeted applications at a lower cost, reducing duplicated work.
Virtualization meant that a user could divide processing power among multiple virtual environments running on the same machine. Still, each
environment consumed a substantial amount of memory, because every virtual environment had to run its own operating system; running, say, six
instances of an operating system on the same hardware can be extremely resource-intensive.
Containers emerged as a mechanism to develop better control of virtualization. Instead of virtualizing an entire machine, including the operating
system and hardware, containers create a separate context in which an application and its important dependencies such as binaries, configuration
files, and other dependencies are in a discrete package.
Both containers and virtual machines allow applications to be deployed in virtual environments.
The main difference is that the container environment contains only those files that the application needs to run. In contrast, virtual machines
contain many additional files and services, resulting in increased resource usage without providing additional functions. As a result, a computer that
may be capable of running 5 or 6 virtual machines can run tens or even hundreds of containers.
What are Containers used For?
One of the major advantages of containers is that they take significantly less time to start than virtual machines. Because containers share the
host's Linux kernel, they do not need to boot an operating system, whereas each virtual machine must boot its own at start-up.
The fast spin-up times for containers make them ideal for large discrete applications with many different parts of services that must be started, run,
and terminated in a relatively short time frame.
This process takes less time to perform with containers than virtual machines and uses fewer CPU resources, making it significantly more efficient.
Containers fit well with applications built on a microservices architecture rather than the traditional monolithic application architecture.
Whereas traditional monolithic applications tie every part of the application together, most applications today are developed in the microservice
model: the application consists of separate microservices or features that are deployed in containers and communicate with one another through an
API.
The use of containers makes it easy for developers to check the health and security of individual services within applications, turn services on/off in
production environments, and ensure that individual services meet performance and CPU usage goals.
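As a hedged illustration of such per-service checks, the sketch below probes hypothetical health endpoints of a few containerized services; the service names and URLs are assumptions and would be replaced by whatever the deployment actually exposes.

# A minimal health-probe sketch: report whether each service answers its
# /healthz endpoint with HTTP 200 within a short timeout.
import urllib.error
import urllib.request

SERVICES = {  # hypothetical service names and endpoints
    "payments": "http://payments.local:8080/healthz",
    "auth": "http://auth.local:8080/healthz",
    "cart": "http://cart.local:8080/healthz",
}

def is_healthy(url, timeout=2.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

for name, url in SERVICES.items():
    print(name, "healthy" if is_healthy(url) else "unreachable")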
CaaS vs PaaS, IaaS, and FaaS:
 CaaS vs. PaaS: Platform as a Service (PaaS) consists of third parties providing a combined platform, including hardware and software.
The PaaS model allows end-users to develop, manage and run their applications, while the platform provider manages the infrastructure.
In addition to storage and other computing resources, providers typically provide tools for application development, testing, and
deployment.
CaaS differs from PaaS in that it is a lower-level service that only provides a specific infrastructure component - a container. CaaS
services can provide development services and tools such as CI/CD release management, which brings them closer to a PaaS model.
 CaaS vs. IaaS: Infrastructure as a Service (IaaS) provides raw computing resources such as servers, storage, and networks in the public
cloud. It allows organizations to increase resources without upfront costs and less risk and overhead.
CaaS differs from IaaS in that it provides an abstraction layer on top of raw hardware resources. IaaS services such as Amazon EC2
provide compute instances, essentially computers with operating systems running in the public cloud. CaaS services run and manage
containers on top of these virtual machines, or in the case of services such as Azure Container Instances, allowing users to run containers
directly on bare metal resources.
 CaaS vs. FaaS: Function as a Service (FaaS), also known as serverless computing, is suitable for users who need to run a specific function or
component of an application without managing servers. With FaaS, the service provider automatically manages the physical hardware,
virtual machines, and other infrastructure, while the user provides the code and pays per execution period or number of executions.
CaaS differs from FaaS because it provides direct access to the infrastructure: users can configure and manage containers. However, some
CaaS services, such as Amazon Fargate, use a serverless deployment model to provide container services while abstracting servers from
users, making them more similar to the FaaS model.
What is a Container Cluster in CaaS?
A container cluster is a dynamic system that holds and manages containers, grouped into pods and running on nodes. It also
manages all the interconnections and communication channels that tie containers together within the system. A container cluster consists of three
major components:
 Dynamic Container Placement: Container clusters rely on cluster scheduling, whereby workloads packaged in a container image can
be intelligently allocated between virtual and physical machines based on their capacity, CPU, and hardware requirements.
The cluster scheduler enables flexible management of container-based workloads by automatically rescheduling tasks when a failure occurs,
growing or shrinking clusters when appropriate, and spreading workloads across machines to reduce or eliminate the risk of correlated
failures. Dynamic container placement is all about automating the execution of workloads by sending each container to the right place for
execution (a toy placement sketch appears after this list).
 Thinking in Sets of Containers: For companies using CaaS that require large quantities of containers, it is useful to start thinking
about sets of containers rather than individuals. CaaS service providers enable their customers to configure pods, a collection of co-
scheduled containers, in any way they like. Instead of scheduling containers individually, users can group them into pods to ensure that
certain sets of containers are executed simultaneously on the same host.
 Connecting within a Cluster: Today, many newly developed applications include micro-services that are networked to communicate
with each other. Each of these microservices is deployed in a container that runs on nodes, and the nodes must be able to communicate
with each other effectively. Each node contains information such as the hostname and IP address of the node, the status of all running
nodes, the node's currently available capacity to schedule additional pods, and other software license data.
Communication between nodes is necessary to maintain a failover system, where if an individual node fails, the workload can be sent to
an alternate or backup node for execution.
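To make the placement idea concrete, here is a toy sketch (not a real scheduler, and far simpler than Kubernetes): each pod is placed on the first node that still has enough free CPU and memory, and the node layout and figures are invented for illustration.

# A toy "dynamic container placement" sketch: pods are assigned to whichever
# node has enough spare capacity; a real cluster scheduler is far richer.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    cpu: float                       # free CPU cores
    mem: float                       # free memory in GiB
    pods: List[str] = field(default_factory=list)

@dataclass
class Pod:
    name: str
    cpu: float
    mem: float

def schedule(pod: Pod, nodes: List[Node]) -> Optional[Node]:
    # Place the pod on the first node with enough capacity, updating that node.
    for node in nodes:
        if node.cpu >= pod.cpu and node.mem >= pod.mem:
            node.cpu -= pod.cpu
            node.mem -= pod.mem
            node.pods.append(pod.name)
            return node
    return None  # no capacity left: a real scheduler would queue or scale out

nodes = [Node("node-a", cpu=4, mem=8), Node("node-b", cpu=2, mem=4)]
for pod in [Pod("auth", 1, 2), Pod("cart", 2, 4), Pod("payments", 2, 4)]:
    placed = schedule(pod, nodes)
    print(pod.name, "->", placed.name if placed else "unschedulable")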
Why are containers important?
With the help of containers, application code can be packaged so that we can run it anywhere.
 Helps promote portability between multiple platforms.
 Helps in faster release of products.
 Provides increased efficiency for developing and deploying innovative solutions and designing distributed systems.
Why is CaaS important?
 Helps developers build fully scaled containers and manage application deployment.
 Helps to simplify container management.
 Helps automate key IT tasks through tools such as Kubernetes and Docker.
 Helps increase the velocity of team development resulting in faster development and deployment.
Conclusion:
There's a reason so many practitioners swear by containers. Ease of operation, resource friendliness, elegance, and portability make them a clear favorite
in the coding community. The benefits offered by containers far outweigh any disadvantages.
Fault tolerance in cloud computing means creating a blueprint for ongoing work whenever some parts are down or unavailable. It helps enterprises
evaluate their infrastructure needs and requirements and provides services in case the respective device becomes unavailable for some reason.
It does not mean that the alternative system can provide 100% of the entire service. Still, the concept is to keep the system usable and, most
importantly, at a reasonable level in operational mode. It is important if enterprises continue growing in a continuous mode and increase their
productivity levels.
Main Concepts behind Fault Tolerance in Cloud Computing System
 Replication: Fault-tolerant systems work on running multiple replicas for each service. Thus, if one part of the system goes wrong,
other instances can be used to keep it running instead. For example, take a database cluster that has 3 servers with the same
information on each. All the actions like data entry, update, and deletion are written on each. Redundant servers will remain idle
until a fault tolerance system demands their availability.
 Redundancy: When a part of the system fails or goes down, it is important to have a backup system in place. The server works with
emergency databases that include many redundant services. For example, a website program with MS SQL as its database may fail
midway due to a hardware fault. The redundancy concept then switches to a standby database when the original goes
offline (a minimal failover sketch follows this list).
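A minimal failover sketch of the redundancy idea follows; the endpoint names are illustrative assumptions, and the liveness probe is deliberately simple (just a TCP connection attempt) rather than a production-grade health check.

# Try the primary database endpoint first and fall back to a replica if it
# cannot be reached, mimicking a simple fault-tolerant failover policy.
import socket

REPLICAS = ["db-primary:5432", "db-replica-1:5432", "db-replica-2:5432"]

def reachable(endpoint, timeout=1.0):
    host, port = endpoint.split(":")
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

def pick_endpoint():
    for endpoint in REPLICAS:          # preference order: primary first
        if reachable(endpoint):
            return endpoint
    return None                        # total outage: every replica is down

endpoint = pick_endpoint()
print("using", endpoint if endpoint else "no database available")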
Techniques for Fault Tolerance in Cloud Computing
 Priority should be given to all services while designing a fault tolerance system. Special preference should be given to the database
as it powers many other entities.
 After setting the priorities, the enterprise has to work on mock tests. For example, suppose an enterprise has a forums website that enables
users to log in and post comments. When the authentication service fails due to a problem, users will not be able to log in.
Then, the forum becomes read-only and does not serve the purpose. But with fault-tolerant systems, healing will be ensured, and the user can
search for information with minimal impact.
Major Attributes of Fault Tolerance in Cloud Computing
 No Single Point of Failure: The concepts of redundancy and replication mean that faults can occur but with only minor
effects. If there is a single point of failure, then the system is not fault-tolerant.
 Accept the fault isolation concept: the fault occurrence is handled separately from other systems. It helps to isolate the
Enterprise from an existing system failure.
Existence of Fault Tolerance in Cloud Computing
 System Failure: This can either be a software or hardware issue. A software failure results in a system crash or hangs, which may
be due to Stack Overflow or other reasons. Any improper maintenance of physical hardware machines will result in hardware system
failure.
 Incidents of Security Breach: Many security failures create a need for fault tolerance. The hacking of a
server harms it and can result in a data breach. Other security-related reasons for requiring fault tolerance
include ransomware, phishing, virus attacks, etc.
Take-Home Points
Fault tolerance in cloud computing is a crucial concept that must be understood in advance. Enterprises are caught unaware when there is a data
leak or system network failure resulting in complete chaos and lack of preparedness. It is advised that all enterprises should actively pursue the
matter of fault tolerance.
If an enterprise is in growing mode even when some failure occurs, a fault tolerance system design is necessary. Any constraints should not affect
the growth of the Enterprise, especially when using the cloud platform.
Principles of Cloud Computing
Studying the principles of cloud computing will help you understand the adoption and use of cloud computing. These principles reveal opportunities
for cloud customers to move their computing to the cloud and for the cloud vendor to deploy a successful cloud environment.
The National Institute of Standards and Technology (NIST) said cloud computing provides worldwide and on-demand access to computing
resources that can be configured based on customer demand. NIST has also introduced the 5-4-3 principles of cloud computing, which include five
distinctive features of cloud computing, four deployment models, and three service models.
Five Essential Characteristics Features
The essential characteristics of cloud computing define the important features for successful cloud computing. If any of these
defining features is missing, it is not cloud computing. Let us now discuss what these essential features are:
 On-demand Service: Customers can self-provision computing resources like server time, storage, network, and applications as per their
demands without human interaction with the cloud service provider.
 Broad Network Access: Computing resources are available over the network and can be accessed using heterogeneous client
platforms like mobiles, laptops, desktops, PDAs, etc.
 Resource Pooling: Computing resources such as storage, processing, network, etc., are pooled to serve multiple clients. For this, cloud
computing adopts a multitenant model where the computing resources of service providers are dynamically assigned to the customer on
their demand.
The customer is not even aware of the physical location of these resources. However, at a higher level of abstraction, the location of
resources can be specified.
 Rapid elasticity: Computing resources for a cloud customer often appear limitless because cloud resources can be rapidly and
elastically provisioned. Resources can also be released just as quickly, at scale, to match customer demand.
Computing resources can be purchased at any time and in any quantity depending on the customers' demand.
 Measured Service: Monitoring and control of the computing resources used by clients can be done by implementing meters at some level
of abstraction depending on the type of service.
The resources used can be reported with this metering capability, providing transparency between the provider and the customer (a toy
metering calculation follows this list).
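As a toy illustration of measured, pay-per-use service, the sketch below multiplies metered usage by unit prices to produce a bill; every rate and usage figure is made up purely for the example.

# Metered usage x unit price = pay-per-use bill.
USAGE = {                     # what the meters recorded this month
    "compute_hours": 720,     # instance hours
    "storage_gb_month": 50,   # GiB-months of storage
    "egress_gb": 120,         # GiB of outbound traffic
}

RATES = {                     # hypothetical unit prices in USD
    "compute_hours": 0.02,
    "storage_gb_month": 0.023,
    "egress_gb": 0.09,
}

total = 0.0
for item, amount in USAGE.items():
    cost = amount * RATES[item]
    total += cost
    print(f"{item:18s} {amount:8.1f} x {RATES[item]:.3f} = {cost:7.2f}")
print(f"{'total':18s} {total:29.2f}")   # 14.40 + 1.15 + 10.80 = 26.35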
Principles to Scale Up Cloud Computing
This section will discuss the principles that leverage the Internet to scale up cloud computing services.
 Federation: Cloud resources appear unlimited to customers, but each cloud has a limited capacity. If customer demand continues
to grow, the cloud will eventually exceed its capacity, so federations of service providers are formed to enable collaboration and resource
sharing.
A federated cloud must allow virtual applications to be deployed on federated sites. Virtual applications should not be location-dependent
and should be able to migrate easily between sites.
Federation members should remain independent, which makes it easier for competing service providers to form federations.
 Freedom: Cloud computing services should provide end-users complete freedom that allows the user to use cloud services without
depending on a specific cloud provider.
Even the cloud provider should be able to manage and control the computing service without sharing internal details with customers or
partners.
 Isolation: We are all aware that a cloud service provider provides its computing resources to multiple end-users. The end-user must be
assured before moving his computing cloud that his data or information will be isolated in the cloud and cannot be accessed by other
members sharing the cloud.
 Elasticity: Cloud computing resources should be elastic, which means that the user should be free to attach and release computing
resources on their demand.
 Business Orientation: Companies must verify the quality of service that providers offer before moving mission-critical applications to the
cloud. The cloud service provider should develop a mechanism to understand the exact business requirement of the customer and
customize the service parameters as per the customer's requirement.
 Trust: Trust is the most important factor that drives any customer to move their computing to the cloud. For the cloud to be successful,
trust must be maintained to create a federation between the cloud customer, the cloud vendor, and the various cloud providers.
So, these are the principles of cloud computing that take advantage of the Internet to enhance cloud computing. A cloud provider
considers these principles before deploying cloud services to end-users.
We trace the roots of cloud computing by focusing on the advancement of technologies in hardware (multi-core chips, virtualization), Internet
technologies (Web 2.0, web services, service-oriented architecture), distributed computing (grids and clusters), and system management (data center
automation, autonomous computing).
Some of these technologies were considered immature in their early stages; a process of specification and standardization followed, leading to
maturity and widespread adoption.
The emergence of cloud computing is linked to these technologies. We take a closer look at the technologies that form the basis of cloud computing
and give a canvas of the cloud ecosystem.
Cloud computing has many roots in Internet technologies, which help computers increase their capability and become more powerful.
In cloud computing, there are three main types of services: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS
(Software as a Service).
There are four types of cloud depending on the deployment model: private, public, hybrid, and community.
What is Cloud Computing?
"Cloud computing contains many servers that host the web services and data storage. The technology allows the companies to eliminate the
requirement for costly and powerful systems."
Company data will be stored on low-cost servers, and employees can easily access the data by a normal network.
In the traditional data system, the company maintains the physical hardware, which costs a lot, while cloud computing supplies a virtual platform.
On a virtual platform, every server hosts the applications, and the data is handled by a distinct provider. Therefore, we pay the provider for what we use.
Cloud computing has developed tremendously with the advancement of Internet technologies, and it is an especially attractive concept for firms with
low capital.
Most companies are switching to cloud computing to provide flexibility, accuracy, speed, and low cost to their customers.
Cloud computing has many applications, such as infrastructure management, application execution, and data access management.
There are four roots of cloud computing which are given below:
Root 1: Internet Technologies:
The first root is Internet technologies, which include service-oriented architecture, Web 2.0, and web services.
Internet technologies are commonly accessible to the public. People access content and run applications that depend on network connections.
Cloud computing relies on centralized storage, networks, and bandwidth. The Internet itself, however, is not a single centrally managed network;
it is highly multiplexed and decentralized.
Therefore, anyone can host any number of websites anywhere in the world; network servers make it possible to create a great many websites.
Service-oriented architecture packages business functions as self-contained modules.
It is used for services such as authentication, business management, and event logging, and it saves a lot of paperwork and time.
Web services deliver functionality over the web through common mechanisms such as XML and HTTP, making the web service a universally
applicable concept.
Web 2.0 services are more convenient for the users, and they do not need to know much about programming and coding concepts to work.
Information technology companies provide services in which people can access the services on a platform.
Predefined templates and blocks make it easy to work with, and they can work together via a centralized cloud computing system.
Examples of Web 2.0 services are hosted services such as Google Maps, microblogging sites such as Twitter, and social sites such as Facebook.
Root 2: Distributed Computing
The second root of cloud computing is distributed computing, which includes grid, utility, and cluster computing.
To understand it more easily, here is an example: a computer is a storage area that saves documents in the form of files or pictures.
Each document stored on a computer has a specific location, on a hard disk or on the Internet.
When someone visits a website on the Internet, that person browses it by downloading files.
Users can access files at one location and, after processing, send them back to the server.
This is the distributed computing behind the cloud, and people can access it from anywhere in the world.
Resources such as memory space, processor speed, and hard disk space are all used through this route.
A company using the technology avoids many infrastructure problems and can remain competitive with other companies.
Root 3: Hardware:
The third root of cloud computing is hardware, which includes multi-core chips and virtualization.
In cloud computing the hardware is virtualized, so users no longer need to own as much of it themselves.
Computers require hardware such as random access memory, a CPU, read-only memory, and a motherboard to store, process, analyze, and manage
data and information.
With cloud computing, users need far fewer hardware devices of their own because the applications are managed over the internet.
If you work with a huge amount of data, it becomes difficult for your own computer to manage its continuous growth.
The cloud stores the data on its own computers rather than on the computer that produces the data.
Virtualization allows people to access resources through virtual machines in cloud computing. It makes it cheaper for customers to use
cloud services.
Furthermore, in the Service Level Agreement-based cloud computing model, each customer gets their own virtual environment, called a Virtual Private
Cloud (VPC).
A single cloud computing platform thus distributes the hardware, software, and operating systems.
Root 4: System Management:
The fourth root of cloud computing is system management, which includes autonomous computing and data center automation.
System management handles operations to improve the productivity and efficiency of the overall system.
To achieve this, system management ensures that all employees have easy access to the necessary data and information.
Employees can change configurations, receive and retransmit information, and perform other related tasks from any location.
This makes it easy for the system administrator to respond to any user demand. In addition, the administrator can restrict or deny access for different users.
In an autonomous system, the administrator's task becomes easier because the system is self-managing. Additionally, data collection and analysis
are driven by sensors.
The system responds by performing functions such as optimization, configuration, and protection based on that data.
Therefore, human involvement is low, and the computing system handles most of the work.
Difference between roots of cloud computing
The most fundamental differences between utilities and clouds are in storage, bandwidth, and power availability. In a utility system, all these
utilities are provided through the company, whereas in a cloud environment, it is provided through the provider you work with.
You might be using a file-sharing service to upload pictures, documents, and files to a server that works remotely.
Such a service needs many physical storage devices to hold the data, along with access to electricity and the Internet.
These physical components, and the Internet access the file-sharing service requires, are provided by the third-party service provider's
data center.
Many different Internet technologies can make up the infrastructure of a cloud.
For example, even if an internet service provider offers a lower internet speed, data can still be transferred without investing in better
hardware infrastructure.
Conclusion
The cloud is a collection of these four roots running on remote servers.
Many organizations are moving toward the technology because it manages huge amounts of memory, hardware, and other resources.
The potential of the technology is enormous as it is increasing the overall efficiency, security, reliability, and flexibility of businesses.
What is a Data Center?
A data center (also spelled datacenter) is a facility made up of networked computers, storage systems, and computing
infrastructure that businesses and other organizations use to organize, process, store, and disseminate large amounts of data. A business typically
relies heavily on the applications, services, and data within a data center, making it a focal point and critical asset for everyday operations.
Enterprise data centers increasingly incorporate cloud computing resources and facilities to secure and protect in-house, onsite resources. As
enterprises increasingly turn to cloud computing, the boundaries between cloud providers' data centers and enterprise data centers become less
clear.
How do Data Centers work?
A data center facility enables an organization to assemble its resources and infrastructure for data processing, storage, and communication,
including:
 systems for storing, sharing, accessing, and processing data across the organization;
 physical infrastructure to support data processing and data communication; And
 Utilities such as cooling, electricity, network access, and uninterruptible power supplies (UPS).
Gathering all these resources in one data center enables the organization to:
 protect proprietary systems and data;
 Centralizing IT and data processing employees, contractors, and vendors;
 Enforcing information security controls on proprietary systems and data; And
 Realize economies of scale by integrating sensitive systems in one place.
Why are data centers important?
Data centers support almost all enterprise computing, storage, and business applications. To the extent that the business of a modern enterprise
runs on computers, the data center is business.
Data centers enable organizations to concentrate their processing power, which in turn enables the organization to focus its attention on:
 IT and data processing personnel;
 computing and network connectivity infrastructure; And
 Computing Facility Security.
What are the main components of Data Centers?
Elements of a data center are generally divided into three categories:
 compute
 enterprise data storage
 networking
A modern data center concentrates an organization's data systems in a well-protected physical infrastructure, which includes:
 Server;
 storage subsystems;
 networking switches, routers, and firewalls;
 cabling; And
 Physical racks for organizing and interconnecting IT equipment.
Datacenter Resources typically include:
 power distribution and supplementary power subsystems;
 electrical switching;
 UPS;
 backup generator;
 ventilation and data center cooling systems, such as in-row cooling configurations and computer room air conditioners; And
 Adequate provision for network carrier (telecom) connectivity.
It demands a physical facility with physical security access controls and sufficient square footage to hold the entire collection of infrastructure and
equipment.
How are Datacenters managed?
Datacenter management is required to administer many different topics related to the data center, including:
 Facilities Management. Management of a physical data center facility may include duties related to the facility's real estate, utilities,
access control, and personnel.
 Datacenter inventory or asset management. Datacenter features include hardware assets and software licensing, and release
management.
 Datacenter Infrastructure Management. DCIM lies at the intersection of IT and facility management and is typically accomplished
by monitoring data center performance to optimize energy, equipment, and floor use.
 Technical support. The data center provides technical services to the organization, and as such, it should also provide technical support
to the end-users of the enterprise.
 Datacenter management includes the day-to-day processes and services provided by the data center.
The image shows an IT professional installing and maintaining a high-capacity rack-mounted system in a data center.
Datacenter Infrastructure Management and Monitoring:
Modern data centers make extensive use of monitoring and management software. Software, including DCIM tools, allows remote IT data center
administrators to monitor facility and equipment, measure performance, detect failures and implement a wide range of corrective actions without
ever physically entering the data center room.
The development of virtualization has added another important dimension to data center infrastructure management. Virtualization now supports
the abstraction of servers, networks, and storage, allowing each computing resource to be organized into pools regardless of their physical location.
Network, storage, and server virtualization can be implemented through software, giving software-defined data centers traction.
Administrators can then provision workloads, storage instances, and even network configurations from those common resource pools. When
administrators no longer need those resources, they can return them to the pool for reuse.
Energy Consumption and Efficiency
Datacenter designs also recognize the importance of energy efficiency. A simple data center may require only a few kilowatts of energy, but
enterprise data centers may require more than 100 megawatts. Today, green data centers with minimal environmental impact through low-
emission building materials, catalytic converters, and alternative energy technologies are growing in popularity.
Data centers can maximize efficiency through physical layouts known as hot aisle and cold aisle layouts. The server racks are lined up in alternating
rows, with cold air intakes on one side and hot air exhausts on the other. The result is alternating hot and cold aisles, with the exhaust forming a hot aisle and
the intake forming a cold aisle. Exhausts are pointing to air conditioning equipment. The equipment is often placed between the server cabinets in
the row or aisle and distributes the cold air back into the cold aisle. This configuration of air conditioning equipment is known as in-row cooling.
Organizations often measure data center energy efficiency through power usage effectiveness (PUE), which represents the ratio of the total power
entering the data center divided by the power used by IT equipment.
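As a quick worked example of the PUE ratio just defined (with invented wattage figures):

# PUE = total facility power / power used by IT equipment.
total_facility_kw = 1500.0   # everything the site draws, including cooling and lighting
it_equipment_kw = 1000.0     # servers, storage, and network gear only

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")    # 1.50 here; an ideal facility approaches 1.0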
However, the subsequent rise of virtualization has allowed for more productive use of IT equipment, resulting in much higher efficiency, lower
energy usage, and reduced energy costs. Metrics such as PUE are no longer central to energy efficiency goals. However, organizations can still
assess PUE and use comprehensive power and cooling analysis to understand better and manage energy efficiency.
Datacenter Level:
Data centers are not defined by their physical size or style. Small businesses can operate successfully with multiple servers and storage arrays
networked within a closet or small room. At the same time, major computing organizations -- such as Facebook, Amazon, or Google -- can fill a
vast warehouse space with data center equipment and infrastructure.
In other cases, data centers may be assembled into mobile installations, such as shipping containers, also known as data centers in a box, that can
be moved and deployed.
However, data centers can be defined by different levels of reliability or flexibility, sometimes referred to as data center tiers.
In 2005, the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published the standard
ANSI/TIA-942, "Telecommunications Infrastructure Standards for Data Centers", which defined four levels of data center design and
implementation guidelines.
Each subsequent level aims to provide greater flexibility, security, and reliability than the previous level. For example, a Tier I data center is little
more than a server room, while a Tier IV data center provides redundant subsystems and higher security.
Levels can be differentiated by available resources, data center capabilities, or uptime guarantees. The Uptime Institute defines data center
levels as:
 Tier I. These are the most basic types of data centers, including UPS. Tier I data centers do not provide redundant systems but must
guarantee at least 99.671% uptime.
 Tier II. These data centers include system, power, and cooling redundancy and guarantee at least 99.741% uptime.
 Tier III. These data centers offer partial fault tolerance, 72-hour outage protection, full redundancy, and a 99.982% uptime guarantee.
 Tier IV. These data centers guarantee 99.995% uptime - or no more than 26.3 minutes of downtime per year - as well as full fault
tolerance, system redundancy, and 96 hours of outage protection (the downtime arithmetic is sketched below).
Most data center outages can be attributed to these four general categories.
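The uptime percentages quoted for the tiers convert directly into annual downtime budgets; the short calculation below reproduces, for example, the roughly 26.3 minutes per year implied by the Tier IV figure.

# Convert an uptime guarantee into allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for tier, uptime_pct in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
    downtime_min = MINUTES_PER_YEAR * (1 - uptime_pct / 100)
    print(f"Tier {tier}: {uptime_pct}% uptime -> about {downtime_min:.1f} minutes of downtime per year")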
Datacenter Architecture and Design
Although almost any suitable location can serve as a data center, a data center's deliberate design and implementation require careful
consideration. Beyond the basic issues of cost and taxes, sites are selected based on several criteria: geographic location, seismic and
meteorological stability, access to roads and airports, availability of energy and telecommunications, and even the prevailing political environment.
Once the site is secured, the data center architecture can be designed to focus on the structure and layout of mechanical and electrical
infrastructure and IT equipment. These issues are guided by the availability and efficiency goals of the desired data center tier.
Datacenter Security
Datacenter designs must also implement sound safety and security practices. For example, security is often reflected in the layout of doors and
access corridors, which must accommodate the movement of large, cumbersome IT equipment and allow employees to access and repair
infrastructure.
Fire fighting is another major safety area, and the widespread use of sensitive, high-energy electrical and electronic equipment precludes common
sprinklers. Instead, data centers often use environmentally friendly chemical fire suppression systems, which effectively starve fires of oxygen while
minimizing collateral damage to equipment. Comprehensive security measures and access controls are needed because the data center is also a core
business asset. These may include:
 Badge Access;
 biometric access control, and
 video surveillance.
These security measures can help detect and prevent employee, contractor, and intruder misconduct.
What is Data Center Consolidation?
There is no need for a single data center. Modern businesses can use two or more data center installations in multiple locations for greater
flexibility and better application performance, reducing latency by locating workloads closer to users.
Conversely, a business with multiple data centers may choose to consolidate data centers while reducing the number of locations to reduce the cost
of IT operations. Consolidation typically occurs during mergers and acquisitions, when the combined business no longer needs the data centers owned by the
acquired business.
What is Data Center Colocation?
Datacenter operators may also pay a fee to rent server space in a colocation facility. A colocation is an attractive option for organizations that want
to avoid the large capital expenditure associated with building and maintaining their data centers.
Today, colocation providers are expanding their offerings to include managed services such as interconnectivity, allowing customers to connect to
the public cloud.
Because many service providers today offer managed services and their colocation features, the definition of managed services becomes hazy, as
all vendors market the term slightly differently. The important distinction to make is:
 Colocation. The organization pays a vendor to place its hardware in a facility. The customer is paying for the location alone.
 Managed services. The organization pays the vendor to actively maintain or monitor the hardware through performance reports,
interconnectivity, technical support, or disaster recovery.
What is the difference between Data Center vs. Cloud?
Cloud computing vendors offer similar features to enterprise data centers. The biggest difference between a cloud data center and a typical
enterprise data center is scale. Because cloud data centers serve many different organizations, they can become very large. And cloud computing
vendors offer these services through their data centers.
Large enterprises such as Google may require very large data centers, such as the Google data center in Douglas County, Ga.
Because enterprise data centers increasingly implement private cloud software, they increasingly look, to end-users, like the services provided by
commercial cloud providers.
Private cloud software builds on virtualization to connect cloud-like services, including:
 system automation;
 user self-service; And
 Billing and chargeback to data center administration.
The goal is to allow individual users to provide on-demand workloads and other computing resources without IT administrative intervention.
Further blurring the lines between the enterprise data center and cloud computing is the development of hybrid cloud environments. As enterprises
increasingly rely on public cloud providers, they must incorporate connectivity between their data centers and cloud providers.
For example, platforms such as Microsoft Azure emphasize hybrid use of local data centers with Azure or other public cloud resources. The result is
not the elimination of data centers but the creation of a dynamic environment that allows organizations to run workloads locally or in the cloud or
move those instances to or from the cloud as desired.
Evolution of Data Centers
The origins of the first data centers can be traced back to the 1940s and the existence of early computer systems such as the Electronic Numerical
Integrator and Computer (ENIAC). These early machines were complicated to maintain and operate and had cables connecting all the necessary
components. They were also in use by the military - meaning special computer rooms with racks, cable trays, cooling mechanisms, and access
restrictions were necessary to accommodate all equipment and implement appropriate safety measures.
However, it was not until the 1990s, when IT operations began to gain complexity and cheap networking equipment became available, that the
term data center first came into use. It became possible to store all the necessary servers in one room within the company. These specialized
computer rooms gained traction, dubbed data centers within organizations.
At the time of the dot-com bubble in the late 1990s, companies needed high Internet speeds and a constant Internet presence, which required large
amounts of networking equipment and, in turn, large facilities. At this point, data centers became popular and began to look similar to those described
above.
In the history of computing, as computers get smaller and networks get bigger, the data center has evolved and shifted to accommodate the
necessary technology of the day.
A data center can be described as a facility/location of networked computers and associated components (such as telecommunications and storage)
that help businesses and organizations handle large amounts of data. These data centers allow data to be organized, processed, stored, and
transmitted across applications used by businesses.
Types of Data Center:
Businesses use different types of data centers, including:
 Telecom Data Center: It is a type of data center operated by telecommunications or service providers. It requires high-speed
connectivity to work.
 Enterprise data center: This is a type of data center built and owned by a company that may or may not be onsite.
 Colocation Data Center: In this type of data center, a single owner provides space, power, and cooling to multiple
enterprise and hyperscale customers at its location.
 Hyper-Scale Data Center: This is a type of data center owned and operated by the company itself.
Difference between Cloud and Data Center:
S.No | Cloud | Data Center
1. | The cloud is a virtual resource that helps businesses store, organize, and operate data efficiently. | A data center is a physical resource that helps businesses store, organize, and operate data efficiently.
2. | Scaling the cloud requires relatively little investment. | Scaling a data center requires a huge investment compared to the cloud.
3. | Maintenance cost is low because the service provider does the maintenance. | Maintenance cost is high because the organization's own developers do the maintenance.
4. | The organization must rely on third parties to store its data. | The organization's own developers are trusted with the data stored in the data centers.
5. | Performance is high relative to the investment. | Performance is lower relative to the investment.
6. | Requires a plan for optimizing the cloud. | Easily customizable without extensive planning.
7. | Requires a stable internet connection to provide the service. | May or may not require an internet connection.
8. | The cloud is easy to operate and is considered a viable option. | Data centers require experienced developers to operate and are not considered as viable an option.
Resilient computing is a form of computing that distributes redundant IT resources for operational purposes. The IT resources
are pre-configured so that, when they are needed at processing time, they can be used without interruption.
The characteristic of resilience in cloud computing can refer to redundant IT resources within a single cloud or across multiple clouds. By taking
advantage of the resilience of cloud-based IT services, cloud consumers can improve both the efficiency and availability of their applications: the
system detects failures, fixes them, and continues operating. Cloud resilience is a term used to describe the ability of servers, storage systems, data
servers, or entire networks to remain connected to the network without interfering with their functions or losing their operational capabilities. For a
cloud system to remain resilient, it needs clustered servers, redundant workloads, and often multiple physical servers. High-quality products and
services help accomplish this task.
The three basic strategies that are used to improve a cloud system's resilience are:
 Testing and Monitoring: An independent method ensures that equipment meets minimum behavioural requirements. It is important
for system failure detection and resource reconfiguration.
 Checkpoint and Restart: The state of the whole system is saved at defined checkpoints. After a system failure, the system is
rolled back to the most recent consistent checkpoint and recovery proceeds from there (a minimal sketch follows this list).
 Replication: The essential components of a device are replicated, using additional resources (hardware and software), ensuring that
they are usable at any given time. With this strategy, the additional difficulty is the state synchronization task between replicas and the
main device.
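A minimal checkpoint-and-restart sketch follows; the state layout, file name, and unit-of-work loop are assumptions for illustration only.

# Periodically persist state; after a crash, resume from the last checkpoint.
import json
import os

CHECKPOINT_FILE = "checkpoint.json"

def save_checkpoint(state):
    # Write atomically so a crash can never leave a half-written checkpoint.
    tmp = CHECKPOINT_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT_FILE)

def restore_checkpoint():
    # Return the last saved state, or a fresh one if no checkpoint exists yet.
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {"processed": 0}

state = restore_checkpoint()
for i in range(state["processed"], 10):   # resume where the last run stopped
    # ... perform one unit of work here ...
    state["processed"] = i + 1
    save_checkpoint(state)                # checkpoint after every unit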
Security with Cloud Technology
Cloud technology, used correctly, provides superior security to customers anywhere. High-quality cloud products can protect against DDoS
(Distributed Denial of Service) attacks, where a cyberattack affects the system's bandwidth and makes the computer unavailable to the user.
Cloud protection can also use redundant security mechanisms to protect someone's data from being hacked or leaked. In addition, cloud security
allows one to maintain regulatory compliance and control advanced networks while improving the security of sensitive personal and financial data.
Finally, having access to high-quality customer service and IT support is critical to fully taking advantage of these cloud security benefits.
Advantages of Cloud Resilience
Cloud resilience is considered a way of responding to a crisis affecting data and technology.
The infrastructure, consisting of virtual servers, is built to handle sufficient computing power and data volume variability while allowing ubiquitous
use of various devices, such as laptops, smartphones, PCs, etc.
All data can be recovered if a machine is damaged or destroyed, which guarantees the stability of the infrastructure and the data.
Issues or Critical aspects of Resiliency
A major problem is how cloud application resilience can be tested, evaluated, and defined before going live, so that system availability meets
business objectives. Traditional testing methods do not effectively reveal cloud application resilience problems, for several reasons.
Heterogeneous and multi-layer architectures are vulnerable to failure due to the sophistication of the interactions of different software entities.
Failures are often asymptomatic and remain hidden as internal equipment errors unless their visibility is due to special circumstances.
Poor anticipation of production usage patterns and of the architecture of cloud applications results in unexpected 'accidental' behaviour, especially in
hybrid and multi-cloud deployments.
Cloud layers can have different stakeholders managed by different administrators, resulting in unexpected configuration changes during application
design that cause interfaces to break.
Security in cloud computing is a major concern. Proxy and brokerage services should be employed to restrict a client from accessing the shared
data directly. Data in the cloud should be stored in encrypted form.
Security Planning
Before deploying a particular resource to the cloud, one should need to analyze several aspects of the resource, such as:
 A select resource needs to move to the cloud and analyze its sensitivity to risk.
 Consider cloud service models such as IaaS, PaaS, and SaaS. These models require the customer to be responsible for security at different
service levels.
 Consider the cloud type, such as public, private, community, or hybrid.
 Understand the cloud service provider's system regarding data storage and its transfer into and out of the cloud.
 The risk in cloud deployment mainly depends upon the service models and cloud types.
Understanding Security of Cloud
 Security Boundaries
The Cloud Security Alliance (CSA) stack model defines the boundaries between each service model and shows how different functional units
relate. A particular service model defines the boundary between the service provider's responsibilities and the customer. The following diagram
shows the CSA stack model:
Key Points to CSA Model
 IaaS is the most basic level of service, with PaaS and SaaS as the next two levels of service above it.
 Moving upwards, each service inherits the capabilities and security concerns of the model beneath.
 IaaS provides the infrastructure, PaaS provides the platform development environment, and SaaS provides the operating
environment.
 IaaS has the lowest integrated functionality and security level, while SaaS has the highest.
 This model describes the security boundaries at which cloud service providers' responsibilities end and customers' responsibilities
begin.
 Any protection mechanism below the security limit must be built into the system and maintained by the customer.
Although each service model has a security mechanism, security requirements also depend on where these services are located, private, public,
hybrid, or community cloud.
 Understanding data security
Since all data is transferred using the Internet, data security in the cloud is a major concern. Here are the key mechanisms to protect the data.
o access control
o audit trail
o authentication
o authorization
The service model should include security mechanisms working in all of the above areas.
 Separate access to data
Since the data stored in the cloud can be accessed from anywhere, we need to have a mechanism to isolate the data and protect it from the
client's direct access.
Brokered cloud storage access is a way of separating the storage from direct client access in the cloud. In this approach, two services are created:
o A broker has full access to the storage but does not have access to the client.
o A proxy does not have access to storage but has access to both the client and the broker.
o Working of the brokered cloud storage access system:
o When the client issues a request to access data:
o The client data request goes to the external service interface of the proxy.
o The proxy forwards the request to the broker.
o The broker requests the data from the cloud storage system.
o The cloud storage system returns the data to the broker.
o The broker returns the data to the proxy.
o Finally, the proxy sends the data to the client.
All the above steps are shown in the following diagram:
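In code form, the same flow can be sketched as three cooperating objects; the class names and the in-memory "storage" are illustrative assumptions, and a real deployment would put each role behind its own network service.

# The client talks only to the proxy, the proxy only to the broker, and only
# the broker touches the cloud storage system.
class CloudStorage:
    def __init__(self):
        self._objects = {"report.pdf": b"...contents..."}

    def fetch(self, key):
        return self._objects[key]

class Broker:
    # Full access to storage, but never reachable by clients directly.
    def __init__(self, storage):
        self._storage = storage

    def request(self, key):
        return self._storage.fetch(key)

class Proxy:
    # External service interface: reachable by clients, no storage access of
    # its own; it can only forward requests to the broker.
    def __init__(self, broker):
        self._broker = broker

    def get(self, client_id, key):
        # access checks and auditing for client_id would go here
        return self._broker.request(key)

proxy = Proxy(Broker(CloudStorage()))
data = proxy.get("client-42", "report.pdf")   # follows the six steps listed above
print(len(data), "bytes returned to the client")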
Encryption
Encryption helps to protect the data from being hacked. It protects the data being transferred and the data stored in the cloud. Although
encryption helps protect data from unauthorized access, it does not prevent data loss.
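As a hedged illustration of encrypting data before it is stored in the cloud, the sketch below uses the symmetric Fernet scheme from the third-party cryptography package (pip install cryptography); key management, which matters as much as the encryption itself, is deliberately left out.

# Encrypt before upload, decrypt after download; only key holders can read the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, keep this in a key management service
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer record: account=42, balance=100.00")
# ...ciphertext is what would be written to cloud storage...

plaintext = cipher.decrypt(ciphertext)
print(plaintext)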
Why is cloud security architecture important?
The difference between "cloud security" and "cloud security architecture" is that the former is built from problem-specific measures while the latter
is built from an analysis of threats. A cloud security architecture can reduce or eliminate the gaps in security that point-solution approaches almost
certainly leave behind.
It does this by building top-down: defining threats starting with the users, moving to the cloud environment and service provider, and then to the
applications. Cloud security architectures can also reduce redundancy in security measures, which contributes to threat mitigation while reducing
both capital and operating costs.
The cloud security architecture also organizes security measures, making them more consistent and easier to implement, particularly during cloud
deployments and redeployments. Security is often undermined because it is illogical or overly complex, and these flaws can be identified with a proper
cloud security architecture.
Elements of cloud security architecture
The best way to approach cloud security architecture is to start with a description of the goals. The architecture has to address three things: an
attack surface represented by the external access interfaces, a protected asset set that represents the information being protected, and the vectors
by which an attacker, anywhere including inside the cloud, could mount direct or indirect attacks on the system.
The goal of the cloud security architecture is accomplished through a series of functional elements. These elements are often considered separately
rather than part of a coordinated architectural plan. They include access security or access control, network security, application security, contractual
security, and monitoring, sometimes called service security. Finally, there is data protection, which comprises measures implemented at the protected-
asset level.
A complete cloud security architecture addresses the goals by unifying the functional elements.
Cloud security architecture and shared responsibility model
The security and security architectures for the cloud are not single-player processes. Most enterprises will keep a large portion of their IT workflow
within their data centers, local networks, and VPNs. The cloud adds additional players, so the cloud security architecture should be part of a
broader shared responsibility model.
A shared responsibility model is an architecture diagram and a contract form. It exists formally between a cloud user and each cloud provider and
network service provider if they are contracted separately.
Each will divide the components of a cloud application into layers, with the top layer being the responsibility of the customer and the lower layer
being the responsibility of the cloud provider. Each separate function or component of the application is mapped to the appropriate layer depending
on who provides it. The contract form then describes how each party responds.
  • 2. o Back-up and restore data: Once the data is stored in the cloud, it is easier to get back-up and restore that data using the cloud. o Improved collaboration: Cloud applications improve collaboration by allowing groups of people to quickly and easily share information in the cloud via shared storage. o Excellent accessibility: Cloud allows us to quickly and easily access store information anywhere, anytime in the whole world, using an internet connection. An internet cloud infrastructure increases organization productivity and efficiency by ensuring that our data is always accessible. o Low maintenance cost: Cloud computing reduces both hardware and software maintenance costs for organizations. o Mobility: Cloud computing allows us to easily access all cloud data via mobile. o IServices in the pay-per-use model: Cloud computing offers Application Programming Interfaces (APIs) to the users for access services on the cloud and pays the charges as per the usage of service. o Unlimited storage capacity: Cloud offers us a huge amount of storing capacity for storing our important data such as documents, images, audio, video, etc. in one place. o Data security: Data security is one of the biggest advantages of cloud computing. Cloud offers many advanced features related to security and ensures that data is securely stored and handled.  Disadvantages of Cloud Computing A list of the disadvantage of cloud computing is given below –  Internet Connectivity: As you know, in cloud computing, every data (image, audio, video, etc.) is stored on the cloud, and we access these data through the cloud by using the internet connection. If you do not have good internet connectivity, you cannot access these data. However, we have no any other way to access data from the cloud.  Vendor lock-in: Vendor lock-in is the biggest disadvantage of cloud computing. Organizations may face problems when transferring their services from one vendor to another. As different vendors provide different platforms, that can cause difficulty moving from one cloud to another.  Limited Control: As we know, cloud infrastructure is completely owned, managed, and monitored by the service provider, so the cloud users have less control over the function and execution of services within a cloud infrastructure.  Security: Although cloud service providers implement the best security standards to store important information. But, before adopting cloud technology, you should be aware that you will be sending all your organization's sensitive information to a third party, i.e., a cloud computing service provider. While sending the data on the cloud, there may be a chance that your organization's information is hacked by Hackers. Before emerging the cloud computing, there was Client/Server computing which is basically a centralized storage in which all the software applications, all the data and all the controls are resided on the server side. If a single user wants to access specific data or run a program, he/she need to connect to the server and then gain appropriate access, and then he/she can do his/her business. Then after, distributed computing came into picture, where all the computers are networked together and share their resources when needed. On the basis of above computing, there was emerged of cloud computing concepts that later implemented. At around in 1961, John MacCharty suggested in a speech at MIT that computing can be sold like a utility, just like a water or electricity. 
It was a brilliant idea, but like all brilliant ideas, it was ahead if its time, as for the next few decades, despite interest in the model, the technology simply was not ready for it. But of course time has passed and the technology caught that idea and after few years we mentioned that: In 1999, Salesforce.com started delivering of applications to users using a simple website. The applications were delivered to enterprises over the Internet, and this way the dream of computing sold as utility were true. In 2002, Amazon started Amazon Web Services, providing services like storage, computation and even human intelligence. However, only starting with the launch of the Elastic Compute Cloud in 2006 a truly commercial service open to everybody existed. In 2009, Google Apps also started to provide cloud computing enterprise applications. Of course, all the big players are present in the cloud computing evolution, some were earlier, some were later. In 2009, Microsoft launched Windows Azure, and companies like Oracle and HP have all joined the game. This proves that today, cloud computing has become mainstream. As we know, cloud computing technology is used by both small and large organizations to store the information in cloud and access it from anywhere at anytime using the internet connection. Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture. Cloud computing architecture is divided into the following two parts -  Front End: The front end is used by the client. It contains client-side interfaces and applications that are required to access the cloud computing platforms. The front end includes web servers (including Chrome, Firefox, internet explorer, etc.), thin & fat clients, tablets, and mobile devices.  Back End: The back end is used by the service provider. It manages all the resources that are required to provide cloud computing services. It includes a huge amount of data storage, security mechanism, virtual machines, deploying models, servers, traffic control mechanisms, etc. The below diagram shows the architecture of cloud computing -
  • 3. Note: Both front end and back end are connected to others through a network, generally using the internet connection.  Components of Cloud Computing Architecture There are the following components of cloud computing architecture - o Client Infrastructure: Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to interact with the cloud. o Application: The application may be any software or platform that a client wants to access. o Service: A Cloud Services manages that which type of service you access according to the client’s requirement. Cloud computing offers the following three type of services: i. Software as a Service (SaaS) – It is also known as cloud application services. Mostly, SaaS applications run directly through the web browser means we do not require to download and install these applications. Some important example of SaaS is given below – Example: Google Apps, Salesforce Dropbox, Slack, Hubspot, Cisco WebEx. ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar to SaaS, but the difference is that PaaS provides a platform for software creation, but using SaaS, we can access software over the internet without the need of any platform. Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift. iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It is responsible for managing applications data, middleware, and runtime environments. Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod. o Runtime Cloud: Runtime Cloud provides the execution and runtime environment to the virtual machines. o Storage: Storage is one of the most important components of cloud computing. It provides a huge amount of storage capacity in the cloud to store and manage data. o Infrastructure: It provides services on the host level, application level, and network level. Cloud infrastructure includes hardware and software components such as servers, storage, network devices, virtualization software, and other storage resources that are needed to support the cloud computing model. o Management: Management is used to manage components such as application, service, runtime cloud, storage, infrastructure, and other security issues in the backend and establish coordination between them. o Security: Security is an in-built back end component of cloud computing. It implements a security mechanism in the back end. o Internet: The Internet is medium through which front end and back end can interact and communicate with each other. Cloud computing uses a client-server architecture to deliver computing resources such as servers, storage, databases, and software over the cloud (Internet) with pay-as-you-go pricing. Cloud computing becomes a very popular option for organizations by providing various advantages, including cost-saving, increased productivity, efficiency, performance, data back-ups, disaster recovery, and security. Grid Computing: Grid computing is also called as "distributed computing." It links multiple computing resources (PC's, workstations, servers, and storage elements) together and provides a mechanism to access them. The main advantages of grid computing are that it increases user productivity by providing transparent access to resources, and work can be completed more quickly. Let's understand the difference between cloud computing and grid computing. 
Cloud Computing Grid Computing Cloud Computing follows client-server computing architecture. Grid computing follows a distributed computing architecture. Scalability is high. Scalability is normal. Cloud Computing is more flexible than grid computing. Grid Computing is less flexible than cloud computing. Cloud operates as a centralized management system. Grid operates as a decentralized management system.
  • 4. In cloud computing, cloud servers are owned by infrastructure providers. In Grid computing, grids are owned and managed by the organization. Cloud computing uses services like Iaas, PaaS, and SaaS. Grid computing uses systems like distributed computing, distributed information, and distributed pervasive. Cloud Computing is Service-oriented. Grid Computing is Application-oriented. It is accessible through standard web protocols. It is accessible through grid middleware. Assume that you are an executive at a very big corporation. Your particular responsibilities include to make sure that all of your employees have the right hardware and software they need to do their jobs. To buy computers for everyone is not enough. You also have to purchase software as well as software licenses and then provide these softwares to your employees as they require. Whenever you hire a new employee, you need to buy more software or make sure your current software license allows another user. It is so stressful that you have to spend lots of money. But, there may be an alternative for executives like you. So, instead of installing a suite of software for each computer, you just need to load one application. That application will allow the employees to log-in into a Web-based service which hosts all the programs for the user that is required for his/her job. Remote servers owned by another company and that will run everything from e-mail to word processing to complex data analysis programs. It is called cloud computing, and it could change the entire computer industry. In a cloud computing system, there is a significant workload shift. Local computers have no longer to do all the heavy lifting when it comes to run applications. But cloud computing can handle that much heavy load easily and automatically. Hardware and software demands on the user's side decrease. The only thing the user's computer requires to be able to run is the cloud computing interface software of the system, which can be as simple as a Web browser and the cloud's network takes care of the rest. Cloud service providers provide various applications in the field of art, business, data storage and backup services, education, entertainment, management, social networking, etc. The most widely used cloud computing applications are given below - 1. Art Applications: Cloud computing offers various art applications for quickly and easily design attractive cards, booklets, and images. Some most commonly used cloud art applications are given below: i Moo: Moo is one of the best cloud art applications. It is used for designing and printing business cards, postcards, and mini cards. ii. Vistaprint: Vistaprint allows us to easily design various printed marketing products such as business cards, Postcards, Booklets, and wedding invitations cards. iii. Adobe Creative Cloud: Adobe creative cloud is made for designers, artists, filmmakers, and other creative professionals. It is a suite of apps which includes PhotoShop image editing programming, Illustrator, InDesign, TypeKit, Dreamweaver, XD, and Audition. 2. Business Applications Business applications are based on cloud service providers. Today, every organization requires the cloud business application to grow their business. It also ensures that business applications are 24*7 available to users. There are the following business applications of cloud computing - i. MailChimp: MailChimp is an email publishing platform which provides various options to design, send, and save templates for emails. iii. 
Salesforce: Salesforce platform provides tools for sales, service, marketing, e-commerce, and more. It also provides a cloud development platform. iv. Chatter: Chatter helps us to share important information about the organization in real time. v. Bitrix24: Bitrix24 is a collaboration platform which provides communication, management, and social collaboration tools. vi. Paypal: Paypal offers the simplest and easiest online payment mode using a secure internet account. Paypal accepts the payment through debit cards, credit cards, and also from Paypal account holders. vii. Slack: Slack stands for Searchable Log of all Conversation and Knowledge. It provides a user-friendly interface that helps us to create public and private channels for communication. viii. Quickbooks: Quickbooks works on the terminology "Run Enterprise anytime, anywhere, on any device." It provides online accounting solutions for the business. It allows more than 20 users to work simultaneously on the same system. 3. Data Storage and Backup Applications Cloud computing allows us to store information (data, files, images, audios, and videos) on the cloud and access this information using an internet connection. As the cloud provider is responsible for providing security, so they offer various backup recovery application for retrieving the lost data. A list of data storage and backup applications in the cloud are given below - i. Box.com:Box provides an online environment for secure content management, workflow, and collaboration. It allows us to store different files such as Excel, Word, PDF, and images on the cloud. The main advantage of using box is that it provides drag & drop service for files and easily integrates with Office 365, G Suite, Salesforce, and more than 1400 tools. ii. Mozy: Mozy provides powerful online backup solutions for our personal and business data. It schedules automatically back up for each day at a specific time. iii. Joukuu: Joukuu provides the simplest way to share and track cloud-based backup files. Many users use joukuu to search files, folders, and collaborate on documents.
  • 5. iv. Google G Suite: Google G Suite is one of the best cloud storage and backup application. It includes Google Calendar, Docs, Forms, Google+, Hangouts, as well as cloud storage and tools for managing cloud apps. The most popular app in the Google G Suite is Gmail. Gmail offers free email services to users. AD 4. Education Applications Cloud computing in the education sector becomes very popular. It offers various online distance learning platforms and student information portals to the students. The advantage of using cloud in the field of education is that it offers strong virtual classroom environments, Ease of accessibility, secure data storage, scalability, greater reach for the students, and minimal hardware requirements for the applications. There are the following education applications offered by the cloud - i. Google Apps for Education: Google Apps for Education is the most widely used platform for free web-based email, calendar, documents, and collaborative study. ii. Chromebooks for Education: Chromebook for Education is one of the most important Google's projects. It is designed for the purpose that it enhances education innovation. iii. Tablets with Google Play for Education: It allows educators to quickly implement the latest technology solutions into the classroom and make it available to their students. iv. AWS in Education: AWS cloud provides an education-friendly environment to universities, community colleges, and schools. 5. Entertainment Applications Entertainment industries use a multi-cloud strategy to interact with the target audience. Cloud computing offers various entertainment applications such as online games and video conferencing. i. Online games: Today, cloud gaming becomes one of the most important entertainment media. It offers various online games that run remotely from the cloud. The best cloud gaming services are Shaow, GeForce Now, Vortex, Project xCloud, and PlayStation Now. ii. Video Conferencing Apps: Video conferencing apps provides a simple and instant connected experience. It allows us to communicate with our business partners, friends, and relatives using a cloud-based video conferencing. The benefits of using video conferencing are that it reduces cost, increases efficiency, and removes interoperability. 6. Management Applications Cloud computing offers various cloud management tools which help admins to manage all types of cloud activities, such as resource deployment, data integration, and disaster recovery. These management tools also provide administrative control over the platforms, applications, and infrastructure. Some important management applications are - i. Toggl: Toggl helps users to track allocated time period for a particular project. ii. Evernote: Evernote allows you to sync and save your recorded notes, typed notes, and other notes in one convenient place. It is available for both free as well as a paid version. It uses platforms like Windows, macOS, Android, iOS, Browser, and Unix. iii. Outright: Outright is used by management users for the purpose of accounts. It helps to track income, expenses, profits, and losses in real-time environment. iv. GoToMeeting: GoToMeeting provides Video Conferencing and online meeting apps, which allows you to start a meeting with your business partners from anytime, anywhere using mobile phones or tablets. 
Using GoToMeeting app, you can perform the tasks related to the management such as join meetings in seconds, view presentations on the shared screen, get alerts for upcoming meetings, etc. 7. Social Applications Social cloud applications allow a large number of users to connect with each other using social networking applications such as Facebook, Twitter, Linkedln, etc. There are the following cloud based social applications - i. Facebook: Facebook is a social networking website which allows active users to share files, photos, videos, status, more to their friends, relatives, and business partners using the cloud storage system. On Facebook, we will always get notifications when our friends like and comment on the posts. ii. Twitter: Twitter is a social networking site. It is a microblogging system. It allows users to follow high profile celebrities, friends, relatives, and receive news. It sends and receives short posts called tweets. iii. Yammer: Yammer is the best team collaboration tool that allows a team of employees to chat, share images, documents, and videos. iv. LinkedIn: LinkedIn is a social network for students, freshers, and professionals. Cloud computing provides various advantages, such as improved collaboration, excellent accessibility, Mobility, Storage capacity, etc. But there are also security risks in cloud computing. Some most common Security Risks of Cloud Computing are given below-  Data Loss: Data loss is the most common cloud security risks of cloud computing. It is also known as data leakage. Data loss is the process in which data is being deleted, corrupted, and unreadable by a user, software, or application. In a cloud computing environment, data loss occurs when our sensitive data is somebody else's hands, one or more data elements can not be utilized by the data owner, hard disk is not working properly, and software is not updated.  Hacked Interfaces and Insecure APIs: As we all know, cloud computing is completely depends on Internet, so it is compulsory to protect interfaces and APIs that are used by external users. APIs are the easiest way to communicate with most of the cloud services. In cloud computing, few services are available in the public domain. These services can be accessed by third parties, so there may be a chance that these services easily harmed and hacked by hackers.  Data Breach: Data Breach is the process in which the confidential data is viewed, accessed, or stolen by the third party without any authorization, so organization's data is hacked by the hackers.  Vendor lock-in: Vendor lock-in is the of the biggest security risks in cloud computing. Organizations may face problems when transferring their services from one vendor to another. As different vendors provide different platforms, that can cause difficulty moving one cloud to another.  Increased complexity strains IT staff: Migrating, integrating, and operating the cloud services is complex for the IT staff. IT staff must require the extra capability and skills to manage, integrate, and maintain the data to the cloud.
  • 6.  Spectre & Meltdown: Spectre & Meltdown allows programs to view and steal data which is currently processed on computer. It can run on personal computers, mobile devices, and in the cloud. It can store the password, your personal information such as images, emails, and business documents in the memory of other running programs.  Denial of Service (DoS) attacks: Denial of service (DoS) attacks occur when the system receives too much traffic to buffer the server. Mostly, DoS attackers target web servers of large organizations such as banking sectors, media companies, and government organizations. To recover the lost data, DoS attackers charge a great deal of time and money to handle the data.  Account hijacking: Account hijacking is a serious security risk in cloud computing. It is the process in which individual user's or organization's cloud account (bank account, e-mail account, and social media account) is stolen by hackers. The hackers use the stolen account to perform unauthorized activities. There are the following 4 types of cloud that you can deploy according to the organization's needs-  Public Cloud Public cloud is open to all to store and access information via the Internet using the pay-per-usage method. In public cloud, computing resources are managed and operated by the Cloud Service Provider (CSP). Example: Amazon elastic compute cloud (EC2), IBM SmartCloud Enterprise, Microsoft, Google App Engine, Windows Azure Services Platform. Advantages of Public Cloud  Public cloud is owned at a lower cost than the private and hybrid cloud.  Public cloud is maintained by the cloud service provider, so do not need to worry about the maintenance.  Public cloud is easier to integrate. Hence it offers a better flexibility approach to consumers.  Public cloud is location independent because its services are delivered through the internet.  Public cloud is highly scalable as per the requirement of computing resources.  It is accessible by the general public, so there is no limit to the number of users. A Disadvantages of Public Cloud  Public Cloud is less secure because resources are shared publicly.  Performance depends upon the high-speed internet network link to the cloud provider.  The Client has no control of data.  Private Cloud Private cloud is also known as an internal cloud or corporate cloud. It is used by organizations to build and manage their own data centers internally or by the third party. It can be deployed using Opensource tools such as Openstack and Eucalyptus. Based on the location and management, National Institute of Standards and Technology (NIST) divide private cloud into the following two parts-  On-premise private cloud  Outsourced private cloud Advantages of Private Cloud  Private cloud provides a high level of security and privacy to the users.  Private cloud offers better performance with improved speed and space capacity.  It allows the IT team to quickly allocate and deliver on-demand IT resources.  The organization has full control over the cloud because it is managed by the organization itself. So, there is no need for the organization to depends on anybody.  It is suitable for organizations that require a separate cloud for their personal use and data security is the first priority. Disadvantages of Private Cloud  Skilled people are required to manage and operate cloud services.  Private cloud is accessible within the organization, so the area of operations is limited. 
 Private cloud is not suitable for organizations that have a high user base, and organizations that do not have the prebuilt infrastructure, sufficient manpower to maintain and manage the cloud.  Hybrid Cloud Hybrid Cloud is a combination of the public cloud and the private cloud. we can say: Hybrid Cloud = Public Cloud + Private Cloud Hybrid cloud is partially secure because the services which are running on the public cloud can be accessed by anyone, while the services which are running on a private cloud can be accessed only by the organization's users. Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365 (MS Office on the Web and One Drive), Amazon Web Services.
  • 7. Advantages of Hybrid Cloud  Hybrid cloud is suitable for organizations that require more security than the public cloud.  Hybrid cloud helps you to deliver new products and services more quickly.  Hybrid cloud provides an excellent way to reduce the risk.  Hybrid cloud offers flexible resources because of the public cloud and secure resources because of the private cloud. Disadvantages of Hybrid Cloud  In Hybrid Cloud, security feature is not as good as the private cloud.  Managing a hybrid cloud is complex because it is difficult to manage more than one type of deployment model.  In the hybrid cloud, the reliability of the services depends on cloud service providers. AD  Community Cloud Community cloud allows systems and services to be accessible by a group of several organizations to share the information between the organization and a specific community. It is owned, managed, and operated by one or more organizations in the community, a third party, or a combination of them. Example: Health Care community cloud Advantages of Community Cloud :There are the following advantages of Community Cloud -  Community cloud is cost-effective because the whole cloud is being shared by several organizations or communities.  Community cloud is suitable for organizations that want to have a collaborative cloud with more security features than the public cloud.  It provides better security than the public cloud.  It provdes collaborative and distributive environment.  Community cloud allows us to share cloud resources, infrastructure, and other capabilities among various organizations. Disadvantages of Community Cloud  Community cloud is not a good choice for every organization.  Security features are not as good as the private cloud.  It is not suitable if there is no collaboration.  The fixed amount of data storage and bandwidth is shared among all community members. The below table shows the difference between public cloud, private cloud, hybrid cloud, and community cloud. Parameter Public Cloud Private Cloud Hybrid Cloud Community Cloud Host Service provider Enterprise (Third party) Enterprise (Third party) Community (Third party) Users General public Selected users Selected users Community members Access Internet Internet, VPN Internet, VPN Internet, VPN Owner Service provider Enterprise Enterprise Community There are the following three types of cloud service models:  Infrastructure as a Service (IaaS) Iaas is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud computing platform. It allows customers to outsource their IT infrastructures such as servers, networking, processing, storage, virtual machines, and other resources. Customers access these resources on the Internet using a pay-as-per use model. In traditional hosting services, IT infrastructure was rented out for a specific period of time, with pre-determined hardware configuration. The client paid for the configuration and time, regardless of the actual use. With the help of the IaaS cloud computing platform layer, clients can dynamically scale the configuration to meet changing requirements and are billed only for the services actually used. IaaS cloud computing platform layer eliminates the need for every organization to maintain the IT infrastructure. IaaS is offered in three models: public, private, and hybrid cloud. The private cloud implies that the infrastructure resides at the customer-premise. 
In the case of public cloud, it is located at the cloud computing platform vendor's data center, and the hybrid cloud is a combination of the two in which the customer selects the best of both public cloud or private cloud. IaaS provider provides the following services - 1. Compute: Computing as a Service includes virtual central processing units and virtual main memory for the Vms that is provisioned to the end- users. 2. Storage: IaaS provider provides back-end storage for storing files. 3. Network: Network as a Service (NaaS) provides networking components such as routers, switches, and bridges for the Vms. 4. Load balancers: It provides load balancing capability at the infrastructure layer. Advantages of IaaS : There are the following advantages of IaaS computing layer - 1. Shared infrastructure: IaaS allows multiple users to share the same physical infrastructure. 2. Web access to the resources: Iaas allows IT users to access resources over the internet. 3. Pay-as-per-use model: IaaS providers provide services based on the pay-as-per-use basis. The users are required to pay for what they have used. 4. Focus on the core business: IaaS providers focus on the organization's core business rather than on IT infrastructure.
  • 8. 5. On-demand scalability: On-demand scalability is one of the biggest advantages of IaaS. Using IaaS, users do not worry about to upgrade software and troubleshoot the issues related to hardware components. Disadvantages of IaaS : 1. Security: Security is one of the biggest issues in IaaS. Most of the IaaS providers are not able to provide 100% security. 2. Maintenance & Upgrade: Although IaaS service providers maintain the software, but they do not upgrade the software for some organizations. 3. Interoperability issues: It is difficult to migrate VM from one IaaS provider to the other, so the customers might face problem related to vendor lock-in. AD Some important point about IaaS cloud computing layer: IaaS cloud computing platform cannot replace the traditional hosting method, but it provides more than that, and each resource which are used are predictable as per the usage. IaaS cloud computing platform may not eliminate the need for an in-house IT department. It will be needed to monitor or control the IaaS setup. IT salary expenditure might not reduce significantly, but other IT expenses can be reduced. Breakdowns at the IaaS cloud computing platform vendor's can bring your business to the halt stage. Assess the IaaS cloud computing platform vendor's stability and finances. Make sure that SLAs (i.e., Service Level Agreement) provide backups for data, hardware, network, and application failures. Image portability and third-party support is a plus point. The IaaS cloud computing platform vendor can get access to your sensitive data. So, engage with credible companies or organizations. Study their security policies and precautions. AD Top Iaas Providers who are providing IaaS cloud computing platform: IaaS Vendor Iaas Solution Details Amazon Web Services Elastic, Elastic Compute Cloud (EC2) MapReduce, Route 53, Virtual Private Cloud, etc. The cloud computing platform pioneer, Amazon offers auto scaling, cloud monitoring, and load balancing features as part of its portfolio. Netmagic Solutions Netmagic IaaS Cloud Netmagic runs from data centers in Mumbai, Chennai, and Bangalore, and a virtual data center in the United States. Plans are underway to extend services to West Asia. Rackspace Cloud servers, cloud files, cloud sites, etc. The cloud computing platform vendor focuses primarily on enterprise-level hosting services. Reliance Communications Reliance Internet Data Center RIDC supports both traditional hosting and cloud services, with data centers in Mumbai, Bangalore, Hyderabad, and Chennai. The cloud services offered by RIDC include IaaS and SaaS. Sify Technologies Sify IaaS Sify's cloud computing platform is powered by HP's converged infrastructure. The vendor offers all three types of cloud services: IaaS, PaaS, and SaaS. Tata Communications InstaCompute InstaCompute is Tata Communications' IaaS offering. InstaCompute data centers are located in Hyderabad and Singapore, with operations in both countries.  Platform as a Service (PaaS) Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily create, test, run, and deploy web applications. You can purchase these applications from a cloud service provider on a pay-as-per use basis and access them using the Internet connection. In PaaS, back end scalability is managed by the cloud service provider, so end- users do not need to worry about managing the infrastructure. 
PaaS includes infrastructure (servers, storage, and networking) and platform (middleware, development tools, database management systems, business intelligence, and more) to support the web application life cycle. Example: Google App Engine, Force.com, Joyent, Azure. PaaS providers provide the Programming languages, Application frameworks, Databases, and Other tools: 1. Programming languages: PaaS providers provide various programming languages for the developers to develop the applications. Popular programming languages provided by PaaS providers are Java, PHP, Ruby, Perl, and Go. 2. Application frameworks: PaaS providers provide application frameworks to easily understand the application development. Some popular application frameworks provided by PaaS providers are Node.js, Drupal, Joomla, WordPress, Spring, Play, Rack, and Zend. 3. Databases: PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB, and Redis to communicate with the applications. 4. Other tools: PaaS providers provide various other tools that are required to develop, test, and deploy the applications. Advantages of PaaS: 1. Simplified Development: PaaS allows developers to focus on development and innovation without worrying about infrastructure management. 2. Lower risk: No need for up-front investment in hardware and software. Developers only need a PC and an internet connection to start building applications. 3. Prebuilt business functionality: Some PaaS vendors also provide already defined business functionality so that users can avoid building everything from very scratch and hence can directly start the projects only. 4. Instant community: PaaS vendors frequently provide online communities where the developer can get the ideas to share experiences and seek advice from others. 5. Scalability: Applications deployed can scale from one to thousands of users without any changes to the applications. Disadvantages of PaaS: 1. Vendor lock-in:One has to write the applications according to the platform provided by the PaaS vendor, so the migration of an application to another PaaS vendor would be a problem. 2. Data Privacy: Corporate data, whether it can be critical or not, will be private, so if it is not located within the walls of the company, there can be a risk in terms of privacy of data. 3. Integration with the rest of the systems applications: It may happen that some applications are local, and some are in the cloud. So there will be chances of increased complexity when we want to use data which in the cloud with the local data. Popular PaaS Providers: The below table shows some popular PaaS providers and services that are provided by them - Providers Services
  • 9. Google App Engine (GAE) App Identity, URL Fetch, Cloud storage client library, Logservice Salesforce.com Faster implementation, Rapid scalability, CRM Services, Sales cloud, Mobile connectivity, Chatter. Windows Azure Compute, security, IoT, Data Storage. AppFog Justcloud.com, SkyDrive, GoogleDocs Openshift RedHat, Microsoft Azure. Cloud Foundry from VMware Data, Messaging, and other services.  Software as a Service (SaaS) SaaS is also known as "On-Demand Software". It is a software distribution model in which services are hosted by a cloud service provider. These services are available to end-users over the internet so, the end-users do not need to install any software on their devices to access these services. There are the following services provided by SaaS providers -  Business Services - SaaS Provider provides various business services to start-up the business. The SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing, and sales.  Document Management - SaaS document management is a software application offered by a third party (SaaS providers) to create, manage, and track electronic documents.  Example: Slack, Samepage, Box, and Zoho Forms.  Social Networks - As we all know, social networking sites are used by the general public, so social networking service providers use SaaS for their convenience and handle the general public's information.  Mail Services - To handle the unpredictable number of users and load on e-mail services, many e-mail providers offering their services using SaaS. Advantages of SaaS: 1. SaaS is easy to buy: SaaS pricing is based on a monthly fee or annual fee subscription, so it allows organizations to access business functionality at a low cost, which is less than licensed applications. Unlike traditional software, which is sold as a licensed based with an up-front cost (and often an optional ongoing support fee), SaaS providers are generally pricing the applications using a subscription fee, most commonly a monthly or annually fee. 2. One to Many: SaaS services are offered as a one-to-many model means a single instance of the application is shared by multiple users. 3. Less hardware required for SaaS: The software is hosted remotely, so organizations do not need to invest in additional hardware. 4. Low maintenance required for SaaS: Software as a service removes the need for installation, set-up, and daily maintenance for the organizations. The initial set-up cost for SaaS is typically less than the enterprise software. SaaS vendors are pricing their applications based on some usage parameters, such as a number of users using the application. So SaaS does easy to monitor and automatic updates. 5. No special software or hardware versions required: All users will have the same version of the software and typically access it through the web browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and support to the IaaS provider. 6. Multidevice support: SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and thin clients. 7. API Integration: SaaS services easily integrate with other software or services through standard APIs. 8. No client-side installation: SaaS services are accessed directly from the service provider using the internet connection, so do not need to require any software installation. Disadvantages of SaaS: 1. Security: Actually, data is stored in the cloud, so security may be an issue for some users. 
Disadvantages of SaaS:
1. Security: Data is stored in the cloud, so security may be a concern for some users, although a cloud deployment is not necessarily less secure than an in-house deployment.
2. Latency: Because data and applications are stored in the cloud at a variable distance from the end user, interacting with the application may involve greater latency than a local deployment. The SaaS model is therefore not suitable for applications that demand response times in milliseconds.
3. Total dependency on the internet: Without an internet connection, most SaaS applications are unusable.
4. Switching between SaaS vendors is difficult: Switching vendors involves the slow, difficult task of transferring very large data files over the internet and then converting and importing them into the other SaaS product.
Popular SaaS Providers: Some popular SaaS providers and the services they offer are listed below.
 Salesforce.com: On-demand CRM solutions
 Microsoft Office 365: Online office suite
 Google Apps: Gmail, Google Calendar, Docs, and Sites
 NetSuite: ERP, accounting, order management, CRM, Professional Services Automation (PSA), and e-commerce applications
 GoToMeeting: Online meeting and video-conferencing software
 Constant Contact: E-mail marketing, online surveys, and event marketing
 Oracle CRM: CRM applications
 Workday, Inc.: Human capital management, payroll, and financial management
Difference between IaaS, PaaS, and SaaS:
 IaaS provides a virtual data center to store information and create platforms for application development, testing, and deployment. It gives access to resources such as virtual machines and virtual storage, it is used by network architects, and it delivers only the infrastructure.
 PaaS provides virtual platforms and tools to create, test, and deploy applications. It gives runtime environments and deployment tools for applications, it is used by developers, and it delivers infrastructure plus platform.
 SaaS provides web software and applications to complete business tasks. It delivers software as a service to end users, it is used by end users, and it covers infrastructure, platform, and software.
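The IaaS row of this comparison ("access to resources such as virtual machines") translates directly into an API call. The sketch below asks AWS for a virtual machine using the boto3 SDK; it assumes AWS credentials are already configured, and the AMI ID is a placeholder. Other IaaS providers expose equivalent calls in their own SDKs.

```python
# IaaS in practice: instead of buying a server, you request a virtual machine
# from the provider's API. Sketch using AWS's boto3 SDK; assumes credentials
# are already configured and the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

for instance in response["Instances"]:
    # The provider supplies the virtual machine; the OS and everything above
    # it remain the customer's responsibility (the IaaS column of the table).
    print("Launched instance:", instance["InstanceId"])
```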
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a server, a desktop, a storage device, an operating system or network resources". In other words, virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when it is demanded.
What is the concept behind virtualization? Creating a virtual machine on top of an existing operating system and hardware is known as hardware virtualization. A virtual machine provides an environment that is logically separated from the underlying hardware. The machine on which the virtual machine is created is known as the host machine, and the virtual machine itself is referred to as the guest machine.
Types of Virtualization:
1. Hardware virtualization: When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware system, it is known as hardware virtualization. The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources. After virtualizing the hardware system, we can install different operating systems on it and run different applications on those operating systems. Usage: Hardware virtualization is mainly used for server platforms, because controlling virtual machines is much easier than controlling physical servers.
2. Operating system virtualization: When the virtual machine software or virtual machine manager (VMM) is installed on the host operating system instead of directly on the hardware, it is known as operating system virtualization. Usage: Operating system virtualization is mainly used for testing applications on different OS platforms.
3. Server virtualization: When the virtual machine software or virtual machine manager (VMM) is installed directly on the server system, it is known as server virtualization. Usage: Server virtualization is done because a single physical server can be divided into multiple servers on demand and for load balancing.
4. Storage virtualization: Storage virtualization is the process of grouping physical storage from multiple network storage devices so that it appears to be a single storage device. Storage virtualization can also be implemented with software applications. Usage: Storage virtualization is mainly used for backup and recovery purposes.
Virtualization plays a very important role in cloud computing. Normally, users in the cloud share the data present there, such as applications, but with virtualization they actually share the underlying infrastructure. The main use of virtualization technology is to provide applications in their standard versions to cloud users; when the next version of an application is released, the cloud provider has to make the latest version available to its users, which is practically difficult for a provider to do on its own because it is expensive. To overcome this problem, virtualization is used: the servers and the software applications required by the cloud providers are maintained by third parties, and the cloud providers pay them on a monthly or annual basis.
Conclusion: Virtualization essentially means running multiple operating systems on a single machine while sharing all of the hardware resources.
It also helps provide a pool of IT resources that can be shared to benefit the business.
Data virtualization is the process of retrieving data from various sources without knowing its type or the physical location where it is stored. It collects heterogeneous data from different sources and allows data users across the organization to access this data according to their work requirements. This heterogeneous data can be accessed through any application, such as web portals, web services, e-commerce, Software as a Service (SaaS), and mobile applications. Data virtualization is used in the fields of data integration, business intelligence, and cloud computing.
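As a rough illustration of the idea, the sketch below exposes two heterogeneous sources - a CSV file and a SQLite database - through one logical view, so callers never deal with where each record physically lives. The file names and schema are illustrative assumptions, not part of any particular data virtualization product.

```python
# Conceptual sketch of data virtualization: one query interface over two
# heterogeneous sources (a CSV file and a SQLite database) without the caller
# knowing where each record physically lives. File names are illustrative.
import csv
import sqlite3

def customers_from_csv(path="regional_customers.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"name": row["name"], "country": row["country"]}

def customers_from_sqlite(path="crm.db"):
    conn = sqlite3.connect(path)
    try:
        for name, country in conn.execute("SELECT name, country FROM customers"):
            yield {"name": name, "country": country}
    finally:
        conn.close()

def all_customers():
    # The "virtual" layer: consumers iterate one logical data set,
    # regardless of the underlying storage technology or location.
    yield from customers_from_csv()
    yield from customers_from_sqlite()
```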
Advantages of Data Virtualization:
 It allows users to access data without worrying about where it physically resides.
 It offers better customer satisfaction, retention, and revenue growth.
 It provides security mechanisms that allow users to safely store their personal and professional information.
 It reduces costs by removing data replication.
 It provides a user-friendly interface for developing customized views.
 It provides simple and fast deployment resources.
 It increases business users' efficiency by providing data in real time.
 It is used for tasks such as data integration, business integration, Service-Oriented Architecture (SOA) data services, and enterprise search.
Disadvantages of Data Virtualization:
 It can create availability issues, because availability is maintained by third-party providers.
 It requires a high implementation cost.
 It can also introduce scalability issues.
 Although it saves time during the implementation phase, it can take longer to generate the appropriate results.
Uses of Data Virtualization:
1. Analyze performance: Data virtualization is used to analyze an organization's performance compared with previous years.
2. Search and discover interrelated data: Data virtualization (DV) provides a mechanism to easily search for data that is similar and internally related.
3. Agile business intelligence: This is one of the most common uses of data virtualization. It supports agile reporting and real-time dashboards that require timely aggregation, analysis, and presentation of relevant data from multiple sources. Both individual contributors and managers use it to monitor performance, which helps with daily operational decisions in areas such as sales, support, finance, logistics, legal, and compliance.
4. Data management: Data virtualization provides a secure, centralized layer to search, discover, and govern unified data and its relationships.
Data Virtualization Tools:
 Red Hat JBoss Data Virtualization: A good choice for developers and for those using microservices and containers. It is written in Java.
 TIBCO Data Virtualization: Helps administrators and users create a data virtualization platform for accessing multiple data sources and data sets. It provides a built-in transformation engine to combine non-relational and unstructured data sources.
 Oracle Data Service Integrator: A popular and powerful data integration tool that works mainly with Oracle products. It allows organizations to quickly develop and manage data services and to access a single view of the data.
 SAS Federation Server: Provides scalable, multi-user, standards-based data access to data from multiple data services, with a strong focus on securing data.
 Denodo: One of the best-known data virtualization tools. It allows organizations to minimize network traffic and improve response times for large data sets, and it suits both small and large organizations.
Industries that use Data Virtualization:
 Communication & technology: Data virtualization is used to increase revenue per customer, create a real-time ODS for marketing, manage customers, improve customer insight, and optimize customer care.
 Finance: DV is used to improve trade reconciliation, empower data democracy, address data complexity, and manage fixed-income risk.
 Government: In the government sector, DV is used for purposes such as protecting the environment.
 Healthcare: Data virtualization plays a very important role in healthcare, where it helps improve patient care, drive new product innovation, accelerate M&A synergies, and provide more efficient claims analysis.
 Manufacturing: In the manufacturing industry, data virtualization is used to optimize the global supply chain, optimize factories, and improve the utilization of IT assets.
Previously, there was a one-to-one relationship between physical servers and operating systems, and only limited CPU, memory, and networking capacity was available. Under this model the cost of doing business kept increasing: the physical space, power, and hardware required meant that costs were constantly adding up.
The hypervisor manages and shares the physical resources of the hardware between the guest operating systems and the host operating system. The physical resources are abstracted into standard formats regardless of the hardware platform; the abstracted hardware is presented as if it were actual hardware, so the virtualized operating system treats these resources as physical entities.
Virtualization means abstraction. Hardware virtualization is accomplished by abstracting the physical hardware layer with a hypervisor or VMM (Virtual Machine Monitor). When the virtual machine software, virtual machine manager (VMM), or hypervisor is installed directly on the hardware system, this is known as hardware virtualization. The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources. After virtualizing the hardware system, we can install different operating systems on it and run different applications on those operating systems.
Usage of Hardware Virtualization: Hardware virtualization is mainly used for server platforms, because controlling virtual machines is much easier than controlling physical servers.
Advantages of Hardware Virtualization: The main benefits of hardware virtualization are more efficient resource utilization, lower overall costs, and increased uptime and IT flexibility.
 More efficient resource utilization: Physical resources can be shared among virtual machines, and resources left unused by one virtual machine can be allocated to others that need them.
 Lower overall costs because of server consolidation: Multiple operating systems can now co-exist on a single hardware platform, so the number of servers, the rack space, and the power consumption all drop significantly (a rough calculation of this effect follows after this list).
 Increased uptime because of advanced hardware virtualization features: Modern hypervisors provide highly orchestrated operations that maximize the abstraction of the hardware and help ensure maximum uptime. These features make it possible to migrate a running virtual machine from one host to another dynamically, and to maintain a running copy of a virtual machine on another physical host in case the primary host fails.
 Increased IT flexibility: Hardware virtualization enables quick deployment of server resources in a managed and consistent way, so IT can adapt quickly and provide the business with the resources it needs in good time.
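The consolidation saving can be put into rough numbers. The sketch below uses made-up utilization figures (they are assumptions, not benchmarks) to estimate how many virtualization hosts a set of lightly loaded physical servers would collapse onto.

```python
# Rough illustration of the server-consolidation saving described above:
# many under-used physical servers are replaced by fewer virtualization hosts.
# All figures are made-up assumptions, not benchmarks.
import math

physical_servers = 40          # existing one-app-per-server machines
avg_utilization = 0.10         # each runs at ~10% CPU on average
host_capacity_target = 0.70    # keep consolidated hosts below 70% load

# Total useful work expressed in "fully busy server" units.
total_load = physical_servers * avg_utilization

hosts_needed = math.ceil(total_load / host_capacity_target)
print(f"Workload of {physical_servers} servers fits on {hosts_needed} virtualization hosts")
# With these assumptions, 40 lightly loaded servers consolidate onto 6 hosts,
# which is where the rack-space, power, and cost reductions come from.
```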
Managing applications and their distribution is a routine task for IT departments. Installation mechanisms differ from application to application; some programs require helper applications or frameworks, and these may conflict with existing applications. Software virtualization, like other forms of virtualization, abstracts the software installation procedure and creates virtual software installations. Virtualized software is an application that is "installed" into its own self-contained unit. Examples of software virtualization are VMware software, VirtualBox, and similar products. In the next pages, we will see how to install a Linux OS and a Windows OS on the VMware application.
Advantages of Software Virtualization:
 Client deployments become easier: By copying a file to a workstation or linking a file on the network, we can easily install virtual software.
 Easy to manage: Managing updates becomes a simpler task - you update in one place and deploy the updated virtual application to all clients.
 Software migration: Without software virtualization, moving from one software platform to another takes a lot of deployment time and has an impact on end-user systems. With a virtualized software environment, migration becomes easier.
Server virtualization is the process of dividing a physical server into several virtual servers, called virtual private servers, each of which can run independently. Server virtualization is widely used in IT infrastructure to minimize costs by increasing the utilization of existing resources.
Types of Server Virtualization:
1. Hypervisor: In server virtualization, the hypervisor plays an important role. It is a layer between the operating system (OS) and the hardware. There are two types of hypervisors:
 Type 1 hypervisors (also known as bare-metal or native hypervisors)
 Type 2 hypervisors (also known as hosted or embedded hypervisors)
The hypervisor is mainly used to allocate physical hardware resources (CPU, RAM, and so on) to several smaller, independent virtual machines, called "guests", on the host machine.
2. Full virtualization: Full virtualization uses a hypervisor to communicate directly with the CPU and the physical server. It provides the best isolation and security for the virtual machines. The biggest disadvantage is that the hypervisor has its own processing needs, so it can slow down application and server performance. VMware ESX Server is a well-known example of full virtualization.
3. Para-virtualization: Para-virtualization is quite similar to full virtualization. Its advantages are that it is easier to use, offers enhanced performance, and does not require emulation overhead. Xen and UML primarily use para-virtualization. The difference between full virtualization and para-virtualization is that in para-virtualization the hypervisor does not need as much processing power to manage the guest operating systems.
4. Operating system virtualization: Operating system virtualization is also called system-level virtualization. It is a server virtualization technology that divides one operating system into multiple isolated user spaces called virtual environments. The biggest advantage of this form of server virtualization is that it reduces the use of physical space and therefore saves money. Linux OS virtualization and Windows OS virtualization are the two types of operating system virtualization.
FreeVPS, OpenVZ, and Linux VServer are some examples of system-level virtualization. Note: OS-level virtualization never uses a hypervisor.
5. Hardware-assisted virtualization: Hardware-assisted virtualization was introduced by AMD and Intel. It is also known as hardware virtualization, AMD virtualization (AMD-V), or Intel virtualization (Intel VT), and it is designed to increase processor performance for virtualization. Its advantage is that it requires less hypervisor overhead.
6. Kernel-level virtualization: Kernel-level virtualization is one of the most important types of server virtualization. It is an open-source approach that uses the Linux kernel as a hypervisor. Its advantages are that it does not require any special administrative software and has very little overhead. User Mode Linux (UML) and the Kernel-based Virtual Machine (KVM) are examples of kernel-level virtualization (a short management sketch follows after the list of advantages below).
Advantages of Server Virtualization:
1. Independent restart: Each virtual server can be restarted independently without affecting the other virtual servers.
2. Low cost: Server virtualization can divide a single server into multiple virtual private servers, which reduces spending on hardware components.
3. Disaster recovery: Disaster recovery is one of the biggest advantages of server virtualization. Data can be moved quickly and easily from one server to another, and it can be stored and retrieved from anywhere.
4. Faster deployment of resources: Server virtualization allows resources to be deployed in a simpler and faster way.
5. Security: It allows users to store their sensitive data inside the data centers.
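Kernel-level virtualization with KVM is usually driven through the libvirt management layer. The sketch below assumes the libvirt Python bindings are installed and a local KVM/QEMU hypervisor is running with sufficient privileges; it simply lists the guests defined on the host and whether they are running.

```python
# Kernel-level virtualization (KVM) is usually managed through libvirt.
# Minimal sketch, assuming the libvirt Python bindings are installed and a
# local KVM/QEMU hypervisor is running; it only lists the guests it finds.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    for domain in conn.listAllDomains():
        state = "running" if domain.isActive() else "shut off"
        print(f"{domain.name()}: {state}")
finally:
    conn.close()
```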
Disadvantages of Server Virtualization:
1. The biggest disadvantage of server virtualization is that when the physical server goes offline, all of the websites hosted on it go down as well.
2. It is harder to measure the performance of virtualized environments.
3. It requires a large amount of RAM.
4. It is difficult to set up and maintain.
5. Some core applications and databases are not supported in virtualized environments.
6. It requires extra hardware resources.
Uses of Server Virtualization:
1. Server virtualization is used in testing and development environments.
2. It improves the availability of servers.
3. It allows organizations to make efficient use of resources.
4. It reduces redundancy without the need to purchase additional hardware components.
Traditionally there has been a strong link between the physical host and its locally installed storage devices. That paradigm has been changing drastically, and local storage is often no longer needed: as technology progresses, more advanced storage devices are coming to market that provide more functionality and make local storage obsolete.
Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers. Operating systems and applications can access the disks directly by themselves for writing, but the controllers configure the local storage into RAID groups and present the storage to the operating system according to the configuration. The storage is thus abstracted, and the controller, not the operating system, determines how to write the data or retrieve the requested data.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes data to a remote location without needing to understand how to write to the physical media.
WAN accelerators: Instead of sending multiple copies of the same data over the WAN, WAN accelerators cache the data locally and serve re-requested blocks at LAN speed, without impacting WAN performance.
SAN and NAS: Storage is presented to the operating system over the Ethernet network. NAS presents the storage as file operations (like NFS), while SAN technologies present the storage as block-level storage (like Fibre Channel). SAN technologies receive the operating instructions as if the storage were a locally attached device.
Storage tiering: Building on the storage-pool concept, storage tiering analyses the most commonly used data and places it on the highest-performing storage pool, while the least-used data is placed on the weakest-performing pool. This happens automatically, without any interruption of service to the data consumer (a small sketch of this idea follows after the list of advantages below).
Advantages of Storage Virtualization:
1. Data is stored in more convenient locations, away from any specific host, so if a host fails the data is not necessarily compromised.
2. The storage devices can perform advanced functions such as replication, deduplication, and disaster recovery.
3. By abstracting the storage layer, IT operations gain more flexibility in how storage is provided, partitioned, and protected.
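Here is the small sketch of storage tiering promised above: reads are counted, and data that crosses a purely illustrative threshold is promoted from a slow pool to a fast pool without the caller noticing. The dictionaries stand in for real storage backends.

```python
# Conceptual sketch of the storage-tiering idea above: frequently accessed
# ("hot") data is placed on the fast pool, rarely used data on the slow pool.
# The pools here are plain dictionaries standing in for real storage backends.
fast_pool = {}   # e.g. SSD-backed storage
slow_pool = {}   # e.g. capacity-optimized storage
access_count = {}

HOT_THRESHOLD = 3   # promote after this many reads (arbitrary assumption)

def write(key, value):
    slow_pool[key] = value          # new data starts on the cheap tier
    access_count[key] = 0

def read(key):
    access_count[key] = access_count.get(key, 0) + 1
    if key in fast_pool:
        return fast_pool[key]
    value = slow_pool[key]
    if access_count[key] >= HOT_THRESHOLD:
        # Tiering happens transparently; the caller never sees the move.
        fast_pool[key] = slow_pool.pop(key)
    return value
```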
With OS virtualization, nothing is pre-installed or permanently loaded on the local device, and no hard disk is needed. Everything runs from the network using a kind of virtual disk. This virtual disk is actually a disk image file stored on a remote server, a SAN (Storage Area Network), or NAS (Network Attached Storage). The client connects to this virtual disk over the network and boots the operating system installed on it.
How does OS Virtualization work?
The components needed to use OS virtualization in the infrastructure are as follows:
The first component is the OS virtualization server. This server is the central point of the OS virtualization infrastructure. It manages the streaming of the information on the virtual disks to the clients and determines which client connects to which virtual disk (this information is stored in a database). The server can host the storage for the virtual disks locally, or it can be connected to the virtual disks via a SAN (Storage Area Network). In high-availability environments there can be several OS virtualization servers to provide redundancy and load balancing. The server also ensures that each client is unique within the infrastructure.
Secondly, there is the client, which contacts the server to get connected to the virtual disk and requests the components stored on the virtual disk that are needed to run the operating system.
The supporting components are a database that stores the configuration and settings for the server, a streaming service for the virtual disk content, an optional TFTP service, and an optional PXE boot service for connecting the client to the OS virtualization server.
As mentioned above, the virtual disk contains an image of a physical disk from a system, reflecting the configuration and settings of the systems that will use that virtual disk. When the virtual disk is created, it needs to be assigned to the clients that will use it to start up. The connection between the client and the disk is made through the administrative tool and saved in the database. Once a client has an assigned disk, the machine can be started from the virtual disk using the following process:
 Connecting to the OS virtualization server: First the machine is started and sets up a connection with the OS virtualization server. Most products offer several methods of connecting to the server; one of the most popular is a PXE service, but a bootstrap program is also used a lot (because of the disadvantages of the PXE service). Whichever method is used, it initializes the network interface card (NIC), obtains a (DHCP-based) IP address, and establishes a connection to the server.
 Connecting the virtual disk: When the connection between the client and the server is established, the server looks in its database to check whether the client is known and which virtual disks are assigned to it (a small sketch of this lookup follows after these steps). If more than one virtual disk is assigned, a boot menu is displayed on the client; if only one disk is assigned, that disk is connected to the client, as described in the next step.
 Virtual disk connected to the client: After the desired virtual disk is selected, it is connected to the client through the OS virtualization server. At the back end, the OS virtualization server makes sure that the client is unique (for example, its computer name and identifier) within the infrastructure.
 OS is "streamed" to the client: As soon as the disk is connected, the server starts streaming the content of the virtual disk. The software knows which parts are necessary to start the operating system smoothly, so those parts are streamed first. The streamed information has to be stored somewhere (i.e., cached); most products offer several caching options, for example on the client's hard disk or on a disk of the OS virtualization server.
 Additional streaming: After the first part has been streamed, the operating system starts running as expected. Additional virtual disk data is streamed when it is required to run or start a function called by the user (for example, starting an application available on the virtual disk).
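The second step above - the server checking its database for the client's assigned virtual disks - amounts to a simple lookup. The sketch below mimics that bookkeeping with an in-memory table keyed by MAC address; real products keep this mapping in a database, and the disk names here are invented for illustration.

```python
# Sketch of the bookkeeping an OS virtualization server performs: it looks up
# which virtual disk is assigned to a booting client (here keyed by MAC
# address) and falls back to a boot menu when several disks are assigned.
# The mapping table is illustrative; real products keep this in a database.
ASSIGNMENTS = {
    "00:1a:4b:00:00:01": ["win10-office.vdisk"],
    "00:1a:4b:00:00:02": ["ubuntu-dev.vdisk", "win10-office.vdisk"],
}

def select_disk(mac_address):
    disks = ASSIGNMENTS.get(mac_address)
    if not disks:
        raise LookupError("Unknown client - cannot assign a virtual disk")
    if len(disks) == 1:
        return disks[0]              # single assignment: connect immediately
    return show_boot_menu(disks)     # several assignments: let the user pick

def show_boot_menu(disks):
    # Stand-in for the boot menu displayed on the client.
    print("Select a virtual disk:", ", ".join(disks))
    return disks[0]
```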
VMware Workstation software is used to virtualize the operating system. To install any operating system virtually, you first need to install the VMware software; here we are using VMware Workstation 10. Before installing the Linux OS, you need an ISO image file of the Linux distribution. Let's walk through the steps to install Linux virtually.
How to create a new virtual machine for a Linux OS:
1) Click on "Create a New Virtual Machine".
2) In the welcome window, choose the "Custom" option and click the Next button.
3) In the virtual machine hardware compatibility window, click the Next button.
4) In the guest operating system window, choose the ISO image file from the disk or any drive. In this example the Ubuntu ISO file is on the E: drive; browse to your ISO image and click the Next button.
5) In the easy install information window, provide the full name, username, password, and password confirmation, then click the Next button.
You can review the information you have entered.
6) In the processor configuration window, you can select the number of processors and the number of cores per processor. If you don't want to change the default settings, just click Next.
7) In the memory for the virtual machine window, you can set the memory limit, then click the Next button.
8) In the specify disk capacity window, you can set the disk size, then click the Next button.
9) In the specify disk file window, you can specify the disk file, then click the Next button.
10) In the ready to create virtual machine window, click the Finish button.
11) You will now see the VMware screen and then the Ubuntu screen as the guest boots.
To install a Windows OS virtually, you likewise need to install VMware first. After installing the virtualization software, you will get a window for installing the new operating system; the steps for installing Windows on VMware Workstation closely mirror the Linux installation above.
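The walkthrough above uses the graphical wizard, but VMware Workstation also ships a command-line utility, vmrun, that can start and stop virtual machines from scripts. The sketch below wraps it with Python's subprocess module; it assumes vmrun is on the PATH, and the .vmx path is a placeholder for your own virtual machine.

```python
# The screens above use the graphical wizard, but VMware Workstation also
# ships a command-line utility called vmrun that can be scripted. A minimal
# sketch, assuming vmrun is on the PATH; the .vmx path is a placeholder.
import subprocess

VMX_PATH = r"C:\VMs\Ubuntu\Ubuntu.vmx"   # placeholder path to the VM's .vmx file

def start_vm(vmx_path=VMX_PATH):
    # "-T ws" selects the Workstation host type; "nogui" starts it headless.
    subprocess.run(["vmrun", "-T", "ws", "start", vmx_path, "nogui"], check=True)

if __name__ == "__main__":
    start_vm()
```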
Cloud service providers (CSPs) offer various services such as Software as a Service, Platform as a Service, Infrastructure as a Service, network services, business applications, mobile applications, and infrastructure in the cloud. The cloud service providers host these services in data centers, and users access them over an internet connection. The main cloud service provider companies are described below.
 Amazon Web Services (AWS)
AWS (Amazon Web Services) is a secure cloud services platform provided by Amazon. It offers services such as database storage, computing power, content delivery, relational databases, Simple Email Service, Simple Queue Service, and other functionality to help organizations grow.
Features of AWS: AWS provides powerful features for building scalable, cost-effective enterprise applications. Some important features of AWS are given below.
 AWS is scalable, because it can scale computing resources up or down according to the organization's demand.
 AWS is cost-effective, as it works on a pay-as-you-go pricing model.
 It provides various flexible storage options.
 It offers various security services such as infrastructure security, data encryption, monitoring and logging, identity and access control, penetration testing, and DDoS protection.
 It can efficiently manage and secure Windows workloads.
 Microsoft Azure
Microsoft Azure is also known as Windows Azure. It supports various operating systems, databases, programming languages, and frameworks that allow IT professionals to easily build, deploy, and manage applications through a worldwide network. It also allows users to create different groups for related utilities.
Features of Microsoft Azure:
 Microsoft Azure provides scalable, flexible, and cost-effective services.
 It allows developers to quickly manage applications and websites.
 It lets each resource be managed individually.
 Its IaaS infrastructure allows us to launch general-purpose virtual machines on different platforms, such as Windows and Linux.
 It offers a content delivery network for delivering images, videos, audio, and applications.
 Google Cloud Platform
Google Cloud Platform is a product of Google. It consists of a set of physical assets, such as computers and hard disk drives, together with virtual machines. It also helps organizations simplify the migration process.
Features of Google Cloud:
 Google Cloud includes various big data services such as Google BigQuery, Google Cloud Dataproc, Google Cloud Datalab, and Google Cloud Pub/Sub.
 It provides various networking services, including Google Virtual Private Cloud (VPC), Content Delivery Network, Google Cloud Load Balancing, Google Cloud Interconnect, and Google Cloud DNS.
 It offers various scalable, high-performance services.
 GCP provides various serverless services for messaging, data warehousing, databases, compute, storage, data processing, and machine learning (ML).
 It provides a free Cloud Shell environment with Boost Mode.
 IBM Cloud Services
IBM Cloud is an open, fast, and reliable platform built with a suite of advanced data and AI tools. It offers Infrastructure as a Service, Software as a Service, and Platform as a Service, and you can access services such as compute power, cloud data and analytics, cloud use cases, and storage networking over an internet connection.
Features of IBM Cloud:
 IBM Cloud improves operational efficiency.
 Its speed and agility improve customer satisfaction.
 It offers Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
 It offers various cloud communications services for the IT environment.
 VMware Cloud
VMware Cloud is a Software-Defined Data Center (SDDC) unified platform for the hybrid cloud. It allows cloud providers to build agile, flexible, efficient, and robust cloud services.
Features of VMware Cloud:
 VMware Cloud works on a pay-as-you-use model and monthly subscriptions.
 It provides better customer satisfaction by protecting user data.
 It can easily create a new VMware Software-Defined Data Center (SDDC) cluster on the AWS cloud by using a RESTful API.
 It provides flexible storage options, so we can manage application storage on a per-application basis.
 It provides a dedicated high-performance network for managing application traffic and also supports multicast networking.
 It reduces time and cost complexity.
 Oracle Cloud
The Oracle Cloud platform is offered by Oracle Corporation. It combines Platform as a Service, Infrastructure as a Service, Software as a Service, and Data as a Service with cloud infrastructure. It is used for tasks such as moving applications to the cloud, managing development environments in the cloud, and optimizing connection performance.
Features of Oracle Cloud:
 Oracle Cloud provides various tools to build, integrate, monitor, and secure applications.
 Its infrastructure supports various languages, including Java, Ruby, PHP, and Node.js.
 It integrates with Docker, VMware, and other DevOps tools.
 Oracle not only provides tight integration between IaaS, PaaS, and SaaS, but also integrates with on-premises platforms to improve operational efficiency.
 It maximizes the value of IT investments.
 It offers customizable Virtual Cloud Networks, firewalls, and IP addresses to securely support private networks.
 Red Hat
Red Hat Virtualization is an open-standard desktop and server virtualization platform produced by Red Hat. It is very popular in Linux environments for providing infrastructure solutions for virtualized servers as well as technical workstations. Many small and medium-sized organizations use Red Hat to run their operations smoothly. It offers higher density, better performance, agility, and security for resources, and it improves the organization's economics through cheaper and easier management.
Features of Red Hat:
 Red Hat provides secure, certified, and updated container images via the Red Hat Container Catalog.
 Red Hat's cloud includes OpenShift, an application development platform that allows developers to access, modernize, and deploy apps.
 It supports up to 16 virtual machines, each with up to 256 GB of RAM.
 It offers better reliability, availability, and serviceability.
 It provides flexible storage capabilities, including very large SAN-based storage, better management of memory allocations, high availability of LVMs, and support for roll-back.
 In the desktop environment, it includes features such as a new on-screen keyboard and GNOME Software, which allows us to install and update applications, as well as extended device support.
 DigitalOcean
DigitalOcean is a cloud provider that offers computing services to organizations. It was founded in 2011 by Ben and Moisey Uretsky. It is one of the best cloud providers for managing and deploying web applications.
Features of DigitalOcean:
 It uses the KVM hypervisor to allocate physical resources to the virtual servers.
 It provides high-quality performance.
 It offers a digital community platform that helps answer queries and gather feedback.
 It allows developers to use cloud servers to quickly create new virtual machines for their projects.
 It offers one-click apps for Droplets.
These apps include MySQL, Docker, MongoDB, WordPress, phpMyAdmin, the LAMP stack, Ghost, and machine learning.
 Rackspace
Rackspace offers cloud computing services such as web application hosting, Cloud Backup, Cloud Block Storage, Databases, and Cloud Servers. The main aim of Rackspace's design is to make private and public cloud deployments easy to manage. Its data centers operate in the USA, the UK, Hong Kong, and Australia.
Features of Rackspace:
 Rackspace provides various tools that help organizations collaborate and communicate more efficiently.
 Files stored on the Rackspace cloud drive can be accessed anywhere, anytime, from any device.
 It offers six data centers globally.
 It can manage both virtual servers and dedicated physical servers on the same network.
 It provides better performance at a lower cost.
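Every provider in this list is consumed programmatically through an SDK or REST API in much the same way as through its web console. Before moving on to the remaining providers, here is one concrete illustration: uploading a file to AWS's object storage (S3) with the boto3 SDK. The bucket and file names are placeholders, and AWS credentials are assumed to be configured already.

```python
# Every provider above exposes its services through SDKs as well as a web
# console. As one concrete example, this sketch uploads a file to AWS's
# object storage (S3) with boto3; bucket and file names are placeholders
# and credentials are assumed to be configured already.
import boto3

s3 = boto3.client("s3")

def upload_report(local_path="report.pdf", bucket="example-company-reports"):
    # The provider handles durability, replication, and capacity behind this call.
    s3.upload_file(local_path, bucket, "monthly/report.pdf")

if __name__ == "__main__":
    upload_report()
```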
 Alibaba Cloud
Alibaba Cloud is used to develop highly scalable cloud computing and data management services. It offers various services, including elastic computing, storage, networking, security, database services, application services, media services, cloud communication, and Internet of Things.
Features of Alibaba Cloud:
 Alibaba Cloud offers a suite of global cloud computing services for both international customers and Alibaba Group's own e-commerce ecosystem.
 Its services are available on a pay-as-you-use basis.
 It operates globally across its 14 data centers.
 It offers scalable and reliable data storage.
Difference between AWS, Azure, and Google Cloud Platform
Although AWS, Microsoft Azure, and Google Cloud Platform offer various high-level features in terms of compute, management, storage, and other services, there are also some differences between the three.
 Amazon Web Services (AWS): Amazon Web Services (AWS) is a cloud computing platform that was introduced in 2002. It offers a wide range of cloud services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). AWS has the largest community, with millions of active customers and thousands of partners globally, and many organizations use AWS to expand their business by moving their IT management to it. Flexibility, security, scalability, and better performance are some of its important features.
 Microsoft Azure: Microsoft Azure is also called Windows Azure. It is a worldwide cloud platform used for building, deploying, and managing services. It supports multiple programming languages such as Java, Node.js, C, and C#. The advantage of Microsoft Azure is that it lets us use a wide variety of services without arranging and purchasing additional hardware. Microsoft Azure provides several computing services, including servers, storage, databases, software, networking, and analytics, over the internet.
 Google Cloud Platform (GCP): Google Cloud Platform (GCP) was introduced by Google in 2011. It runs on the same infrastructure that Google uses for its own products, such as the Google search engine, Gmail, and YouTube. Many companies use this platform to easily build, move, and deploy applications on the cloud, accessing them over a high-speed internet connection. An advantage of GCP is that it supports various databases, such as SQL, MySQL, Oracle, and more. Google Cloud Platform provides various cloud computing services, including compute, data analytics, data storage, and machine learning.
The comparison below shows how AWS, Azure, and Google Cloud Platform line up on common parameters.
App testing: Device Farm (AWS), DevTest Labs (Azure), Cloud Test Lab (GCP)
API management: Amazon API Gateway (AWS), Azure API Gateway (Azure), Cloud Endpoints (GCP)
Kubernetes management: EKS (AWS), Kubernetes Service (Azure), Kubernetes Engine (GCP)
Git repositories: AWS Source Repositories (AWS), Azure Source Repositories (Azure), Cloud Source Repositories (GCP)
Data warehouse: Redshift (AWS), SQL Warehouse (Azure), BigQuery (GCP)
Object storage: S3 (AWS), Block Blobs and Files (Azure), Google Cloud Storage (GCP)
Relational database: RDS (AWS), Relational DBs (Azure), Google Cloud SQL (GCP)
Block storage: EBS (AWS), Page Blobs (Azure), Persistent Disks (GCP)
Marketplace: AWS Marketplace (AWS), Azure Marketplace (Azure), G Suite (GCP)
File storage: EFS (AWS), Azure Files (Azure), ZFS and Avere (GCP)
Media services: Amazon Elastic Transcoder (AWS), Azure Media Services (Azure), Cloud Video Intelligence API (GCP)
Virtual network: VPC (AWS), VNet (Azure), Subnet (GCP)
Pricing: per hour (AWS), per minute (Azure), per minute (GCP)
Maximum processors in a VM: 128 (AWS), 128 (Azure), 96 (GCP)
Maximum memory in a VM (GiB): 3904 (AWS), 3800 (Azure), 1433 (GCP)
Caching: ElastiCache (AWS), Redis Cache (Azure), Cloud CDN (GCP)
Load balancing configuration: Elastic Load Balancing (AWS), Load Balancer and Application Gateway (Azure), Cloud Load Balancing (GCP)
Global content delivery network: CloudFront (AWS), Content Delivery Network (Azure), Cloud Interconnect (GCP)
Google upgraded its search algorithm in July 2018 to include page load speed as a ranking metric. Consider the consequences: if customers leave a page because of its load time, the page's rankings suffer. Load time is just one instance of how significant hosting services are to a company's overall profitability. To understand the significance of web hosting servers, let's break down the distinction between the two key kinds of hosting on offer: cloud hosting and dedicated servers. Each has benefits and drawbacks that can become especially significant to an organization on a budget, working to tight deadlines, or looking to grow. The definitions and differences you need to know are discussed here.
Cloud Ecosystem
A cloud environment is a dynamic system of interrelated components that together make cloud services possible. The cloud infrastructure behind cloud services is made up of software and hardware components as well as cloud clients, cloud experts, vendors, integrators, and partners.
The cloud is designed to function as a single entity made up of a virtually limitless number of servers. When data is stored "in the cloud", it is kept in a virtual environment that can draw on numerous physical platforms placed in different geographic locations around the world. The hubs of this system are individual servers, mostly in data center facilities, linked so that they can exchange services in the virtual space; together, they form the cloud. To distribute computing resources, cloud servers rely on pooled storage, such as Ceph or a large Storage Area Network (SAN). Because the data is decentralized, hosted and virtual server data are integrated, and in the event of a malfunction a server's state can easily be transferred elsewhere in the environment. A hypervisor is typically used to manage the variously sized, splintered pieces of cloud storage; it also controls the assignment of hardware resources, such as processor cores, RAM, and storage space, to each cloud server.
Dedicated Hosting System
A dedicated server hosting environment may not use virtualization technology at all. Everything rests on the strengths and weaknesses of a specific piece of hardware. The word "dedicated" comes from the fact that the hardware is separated from any other physical environment around it. The equipment is deliberately engineered to offer industry-leading efficiency, power, longevity and, very importantly, durability.
What is a Cloud Server, and How Does it Work
Cloud computing is the on-demand provision of computer resources, particularly data storage (cloud storage) and computational capacity, without explicit, active management by the user. In general, the term describes data centers that are accessible over the web to many users. Most large services today spread their operations across cloud servers in several environments, and an edge server can be assigned when communication needs to be closer to the user.
Cloud server hosting is, in basic terms, a virtualized hosting environment. The underlying support for the cloud is provided by machines known as bare-metal servers. A public cloud is mainly composed of various bare-metal nodes, typically housed in protected, colocated network infrastructure. Each of these physical servers hosts multiple virtual servers. A virtual machine can be created in a couple of seconds and discarded just as quickly when it is no longer required, and adding capacity to a virtual server is easy, with no need for in-depth hardware upgrades. This versatility is one of the main benefits of cloud infrastructure and is central to the cloud service concept. Within such a cloud there will be several web servers providing services from the same physical environment; although each underlying device is a bare-metal server, what consumers pay for and ultimately use is the virtual environment.
Dedicated Server Hosting
Dedicated hosting means providing data center equipment to a single, specific customer. All of the server's resources are offered to the particular client who leases or purchases the hardware, and the service is tailored to the customer's requirements for storage, RAM, bandwidth, and processor type. Dedicated hosting servers are among the most powerful machines on the market and most often include several processors. A dedicated deployment may also require a network of servers.
Such a cluster uses modern technology in which several dedicated servers connect to one virtual network location, but only one customer has access to the tools in that virtual environment.
 Hybrid cloud server (a mixture of dedicated and cloud servers)
A hybrid cloud is an extremely prevalent architecture that many businesses use. It combines dedicated and cloud hosting options, and it may combine dedicated hosting servers with private and public cloud servers. This arrangement allows several configurations that appeal to organizations with unique requirements or financial constraints around customization. One of the most common hybrid cloud architectures uses dedicated servers for back-end operations, where their power provides the most stable storage and communication environment, while the front end is hosted on cloud servers. This architecture works particularly well for Software as a Service (SaaS) applications, which need flexibility and scalability that depend on customer-facing load.
 Common factors of cloud servers and dedicated servers
At their root, dedicated and cloud servers perform similar essential tasks. With both approaches, the software must:
 keep information preserved;
 control access to the data;
 process queries for information; and
 return data to the person who requested it.
Whether delivered from cloud hosting or physical hosting (or as virtual private server (VPS) services), both kinds of hosting are also expected to handle:
 processing large quantities of data without hiccups in latency or results;
 receiving and analysing information and returning it to clients within business-as-usual response times;
 protecting the integrity of the stored information; and
 ensuring the efficiency of web applications.
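The comparison that follows leans heavily on latency, and it is easy to put a number on it before committing to either model. The sketch below times a simple request against two candidate endpoints; the URLs are placeholders for your own cloud-hosted and dedicated deployments.

```python
# The comparison that follows leans heavily on latency. A quick way to put a
# number on it is to time a request against each candidate deployment; the
# URLs below are placeholders for your own cloud-hosted and dedicated endpoints.
import time
import urllib.request

ENDPOINTS = {
    "cloud": "https://cloud.example.com/health",
    "dedicated": "https://dedicated.example.com/health",
}

def measure(url, attempts=5):
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {measure(url) * 1000:.1f} ms average")
```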
Modern cloud-based systems and dedicated servers both have the capacity to handle almost any service or program, and they can be managed with related back-end tools, so both approaches can run similar applications. The difference is in the results: matching the right approach to a workload saves organizations money, increases flexibility and agility, and helps optimize the use of resources.
Cloud server vs. dedicated server
When analyzing performance, scalability, migration, management, services, and cost, the variations between cloud infrastructure and dedicated servers become more evident.
 Scalability: Dedicated hosting scales differently from cloud-based servers. A dedicated deployment is constrained by the number of drive bays in the server's direct-attached storage (DAS). Using an existing logical volume manager (LVM) setup, a RAID controller, and an attached chassis, a dedicated server may be able to add a disk to an open bay, but hot swapping is more complicated with DAS arrays. Cloud server capacity, by contrast, is readily expandable (and contractible). Because the SAN sits away from the host, the cloud server does not even have to be part of the chain that provides additional storage capacity, so extending capacity in the cloud causes no slowdown. Dedicated servers, on the other hand, often require more money and resources, as well as operational downtime, to upgrade processors: a web server on a single device that needs additional processing capacity requires a complete migration to, or interconnection with, another server.
 Performance: For a business looking for fast processing and information retrieval, dedicated servers are typically the preferred option. Because they handle data locally, they do not experience much delay when carrying out operations. This speed is particularly essential for organizations, such as e-commerce businesses, in which every tenth of a second counts. To manage information, cloud servers have to go through the SAN, which routes the operation through the back end of the architecture, and requests must also pass through the hypervisor. This additional processing imposes a delay that cannot be eliminated. The hardware of a dedicated server is devoted exclusively to the web or software host it serves, so it does not need to queue requests unless all of its computing capacity is in use at once (which is unlikely). This makes dedicated servers an excellent option for businesses with CPU-intensive workloads. CPU capacity in a cloud system, by contrast, needs supervision to prevent performance from decaying, because the hosts cannot accommodate additional requests without added lag. Dedicated servers are also fully committed to the hosted site or program, preventing the environment from being throttled, and this level of commitment keeps networking simple compared with the cloud world. In a cloud system, use of the shared physical network poses a real risk of bandwidth throttling: if more than one tenant concurrently uses the same channel, both tenants can experience a variety of adverse effects.
 Administration and operations: Dedicated servers require an enterprise to keep track of its own dedicated hardware, and in-house staff need to understand systems administration in greater depth.
A business would also need a detailed understanding of its load profile to keep storage overhead within the correct range. Scaling, updates, and repairs are a collaborative endeavor between customer and supplier and should be strategically planned to keep downtime to a minimum. Cloud servers are more convenient to handle in this respect: changes are quicker and have far less impact on operations. Where a dedicated environment requires scheduling and careful estimation of server needs, cloud platforms mainly require planning around the constraints you may encounter.
 Cost comparison: Cloud servers normally have a lower initial cost than dedicated servers. However, as a business scales and needs additional capacity, cloud servers start losing this advantage. Certain requirements can also drive up the price of either option; for example, running a cloud server over a dedicated network interface can be very costly. An advantage of dedicated servers is that they can be upgraded - network cards and NVMe (Non-Volatile Memory Express) drives with more storage can boost capacity, at the cost of additional equipment expenditure. Cloud servers are usually paid for on an ongoing OpEx (operational expenditure) model, while physical servers are generally a CapEx (capital expenditure) purchase: you can write the assets off over time, with the capital investment typically paid off over a period of around three years.
 Migration: A streamlined migration can be achieved with both dedicated and cloud hosting, but migration involves more preparation in a dedicated setting. The new approach should keep both the previous and the current setup in view so the migration runs smoothly, and a full-scale plan should be made. In most cases the old and new implementations can run simultaneously until the new server is entirely ready to take over, and keeping the existing system as a backup is recommended until the new approach has been sufficiently tested.
Cloud Deployment Model
Today, organizations have many exciting opportunities to reimagine, repurpose, and reinvent their businesses with the cloud. The last decade has seen ever more businesses rely on it for quicker time to market, better efficiency, and scalability, helping them achieve long-term digital goals as part of their digital strategy. The answer to which cloud model is the ideal fit depends on your organization's computing and business needs. Choosing the right cloud deployment model is essential: it ensures your business has the performance, scalability, privacy, security, compliance, and cost-effectiveness it requires. It is important to learn what the different deployment types can offer and which particular problems each can solve. Read on as we cover the various cloud computing deployment and service models to help you discover the best choice for your business.
What is a cloud deployment model? It works as your virtual computing environment, with a choice of deployment model depending on how much data you want to store and who has access to the infrastructure. Most cloud hubs have tens of thousands of servers and storage devices to enable fast loading, and it is often possible to choose a geographic region to put the data "closer" to users. Deployment models for cloud computing are thus categorized based on their location.
To know which model would best fit the requirements of your organization, let us first look at the various types.
 Public Cloud
The name says it all: it is accessible to the public. Public cloud deployment models are perfect for organizations with growing and fluctuating demands, and they are also a great choice for companies with low security concerns. You simply pay a cloud service provider for networking, compute virtualization, and storage available over the public internet. It is also a great delivery model for development and testing teams; its configuration and deployment are quick and easy, making it an ideal choice for test environments.
Benefits of Public Cloud
 Minimal investment - As a pay-per-use service, there is no large upfront cost, which is ideal for businesses that need quick access to resources.
 No hardware setup - The cloud service provider funds the entire infrastructure.
 No infrastructure management - Using the public cloud does not require an in-house team.
Limitations of Public Cloud
 Data security and privacy concerns - Since it is accessible to all, it does not fully protect against cyber-attacks and can expose vulnerabilities.
 Reliability issues - Since the same server network is open to a wide range of users, malfunctions and outages can affect many tenants.
 Service/licence limitations - While there are many resources you can share with other tenants, there is usually a usage cap.
 Private Cloud
Now that you understand what the public cloud can offer, you will want to know what a private cloud can do. Companies that look for cost efficiency and greater control over data and resources will find the private cloud a more suitable choice. It can be integrated with your data center and managed by your IT team, or alternatively hosted externally. The private cloud offers bigger opportunities for meeting specific organizational requirements when it comes to customization. It is also a wise choice for mission-critical processes whose requirements may change frequently.
Benefits of Private Cloud
 Data privacy - It is ideal for storing corporate data to which only authorized personnel get access.
 Security - Segmentation of resources within the same infrastructure can provide better access control and higher levels of security.
 Supports legacy systems - This model supports legacy systems that cannot access the public cloud.
Limitations of Private Cloud
 Higher cost - With the benefits you get, the investment is also larger than for the public cloud: you pay for software, hardware, and the resources for staff and training.
 Fixed scalability - The hardware you choose determines how far, and in which direction, you can scale.
 High maintenance - Since it is managed in-house, maintenance costs also increase.
 Community Cloud
The community cloud operates in a way similar to the public cloud, with just one difference: it allows access only to a specific set of users who share common objectives and use cases. This type of cloud deployment model is managed and hosted internally or by a third-party vendor; a combination of these approaches is also possible.
Benefits of Community Cloud
 Smaller investment - A community cloud is much cheaper than the private and public cloud and provides great performance.
 Setup benefits - The protocols and configuration of a community cloud must align with industry standards, allowing customers to work much more efficiently.
Limitations of Community Cloud
 Shared resources - Due to restricted bandwidth and storage capacity, shared community resources often pose challenges.
 Not as popular - Since this is a relatively recently introduced model, it is not that popular or widely available across industries.
 Hybrid Cloud
As the name suggests, a hybrid cloud is a combination of two or more cloud architectures. While each model in the hybrid cloud functions differently, they are all part of the same architecture. Further, as part of this deployment model, resources can be offered by internal or external providers. Let's understand the hybrid model better.
A company with critical data will prefer storing it on a private cloud, while less sensitive data can be stored on a public cloud. The hybrid cloud is also frequently used for 'cloud bursting': suppose an organization runs an application on-premises; when it faces a heavy load, the application can burst into the public cloud.
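As a minimal sketch of how that bursting decision might look in application code (the endpoints, the capacity figure, and the function names below are hypothetical, not taken from any particular product), overflow traffic is simply routed to a public-cloud endpoint once the on-premises capacity is used up:

```python
# Hypothetical sketch of a cloud-bursting decision: serve requests on-premises
# until local capacity is exhausted, then overflow ("burst") to a public cloud.
# The endpoints and the capacity figure are illustrative only.
ON_PREM_CAPACITY = 100            # max concurrent requests the local cluster handles
on_prem_in_flight = 0

def choose_backend():
    """Return the backend URL a new request should be sent to."""
    global on_prem_in_flight
    if on_prem_in_flight < ON_PREM_CAPACITY:
        on_prem_in_flight += 1
        return "https://app.internal.example/api"     # private / on-premises
    return "https://app.public-cloud.example/api"     # burst target in the public cloud

def finish_request(backend):
    """Release the on-premises slot when a locally served request completes."""
    global on_prem_in_flight
    if backend.startswith("https://app.internal"):
        on_prem_in_flight -= 1
```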
• 28. Benefits of Hybrid Cloud
 Cost-Effectiveness - The overall cost of a hybrid solution decreases since it largely uses the public cloud to store data.
 Security - Since data is properly segmented, the chances of data theft by attackers are significantly reduced.
 Flexibility - With higher levels of flexibility, businesses can create custom solutions that fit their exact requirements.
Limitations of Hybrid Cloud
 Complexity - Setting up a hybrid cloud is complex since it requires integrating two or more cloud architectures.
 Specific Use Case - This model makes more sense for organizations that have multiple use cases or need to separate critical and sensitive data.
A Comparative Analysis of Cloud Deployment Models
With the comparison below, we have attempted to analyze the key models with an overview of what each one can do for you:
 Setup and ease of use - Public: easy; Private: requires a professional IT team; Community: requires a professional IT team; Hybrid: requires a professional IT team.
 Data security and privacy - Public: low; Private: high; Community: very high; Hybrid: high.
 Scalability and flexibility - Public: high; Private: high; Community: fixed requirements; Hybrid: high.
 Cost-effectiveness - Public: most affordable; Private: most expensive; Community: cost is distributed among members; Hybrid: cheaper than private but more expensive than public.
 Reliability - Public: low; Private: high; Community: higher; Hybrid: high.
Making the Right Choice for Cloud Deployment Models
There is no one-size-fits-all approach to picking a cloud deployment model. Instead, organizations must select a model on a workload-by-workload basis. Start by assessing your needs and consider what type of support your application requires. Here are a few factors you can consider before making the call:
 Ease of Use - How savvy and trained are your resources? Do you have the time and the money to put them through training?
 Cost - How much are you willing to spend on a deployment model? How much can you pay upfront on subscription, maintenance, updates, and more?
 Scalability - What is your current activity status? Does your system run into high demand?
 Compliance - Are there any specific laws or regulations in your country that can impact the implementation? What are the industry standards that you must adhere to?
 Privacy - Have you set strict privacy rules for the data you gather?
Each cloud deployment model has a unique offering and can immensely add value to your business. For small to medium-sized businesses, a public cloud is an ideal model to start with, and as your requirements change, you can switch over to a different deployment model. An effective strategy can be designed to suit your needs using the cloud deployment models mentioned above.
The key enabler behind all of these models is hypervisor virtualization. In its simplest form, a hypervisor is specialized firmware or software, or both, installed on a single piece of hardware that allows you to host multiple virtual machines. It allows physical hardware to be shared across multiple virtual machines. The computer on which the hypervisor runs one or more virtual machines is called the host machine, and the virtual machines themselves are called guest machines. The hypervisor allows the physical host machine to run various guest machines and helps to get the maximum benefit from computing resources such as memory, network bandwidth, and CPU cycles.
Advantages of Hypervisor:
Although virtual machines operate on the same physical hardware, they are isolated from each other. This means that if one virtual machine undergoes a crash, error, or malware attack, it does not affect the other virtual machines.
Another advantage is that virtual machines are very mobile because they do not depend on the underlying hardware. Since they are not tied to physical hardware, switching between local or remote virtualized servers becomes much easier than with traditional applications.
Types of Hypervisors in Cloud Computing:
There are two main types of hypervisors in cloud computing.
1. Type I Hypervisor: A Type I hypervisor runs directly on the host's hardware to manage the hardware and the guest virtual machines, and is referred to as a bare-metal hypervisor. It typically does not require the prior installation of an operating system; instead, you install it directly on the hardware. This type of hypervisor is powerful and requires a lot of expertise to operate well. In addition, Type I hypervisors are more complex and have their own hardware requirements to run adequately. Because of this, they are mostly chosen for IT operations and data center computing. Examples of Type I hypervisors include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V, and VMware ESX/ESXi.
2. Type II Hypervisor: It is also called a hosted hypervisor because it is installed on top of an existing operating system, and it is less suited to running complex virtual workloads. People use it for basic development, testing, and simulation. If a security flaw is found in the host OS, it can potentially compromise all running virtual machines. This is why Type II hypervisors are not used for data center computing; they are designed for end-user systems where security is less of a concern. For example, developers can use a Type II hypervisor to launch virtual machines to test software products prior to their release.
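Both hypervisor types can be driven programmatically. As a small, hedged sketch (assuming the libvirt Python bindings and a locally running QEMU/KVM hypervisor, which this tutorial does not otherwise require), the following lists the guest machines a host knows about and whether each is running:

```python
# Sketch: enumerate guest VMs on a host through libvirt.
# Assumes the libvirt Python bindings (pip install libvirt-python) and a local
# QEMU/KVM hypervisor; adjust the connection URI for other hypervisors.
import libvirt

def list_guests(uri="qemu:///system"):
    conn = libvirt.open(uri)                    # connect to the hypervisor
    if conn is None:
        raise RuntimeError("could not connect to the hypervisor")
    try:
        for dom in conn.listAllDomains():       # every defined guest machine
            state, _reason = dom.state()
            running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
            print(f"{dom.name():20s} {running}")
    finally:
        conn.close()

if __name__ == "__main__":
    list_guests()
```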
• 29. Hypervisors, their Use and Importance:
A hypervisor is a process or a function that helps admins isolate operating systems and applications from the underlying hardware. Cloud computing uses it the most, as it allows multiple guest operating systems (also known as virtual machines or VMs) to run simultaneously on a single host system. Administrators can use resources efficiently by dividing computing resources (RAM, CPU, etc.) between multiple VMs. A hypervisor is a key element in virtualization, which has helped organizations achieve higher cost savings, improve their provisioning and deployment speeds, and ensure higher resilience with reduced downtime.
The Evolution of Hypervisors:
The use of hypervisors dates back to the 1960s, when IBM deployed them on time-sharing systems and took advantage of them to test new operating systems and hardware. During the 1960s, virtualization techniques were used extensively by developers wishing to test their programs without affecting the main production system. The mid-2000s saw another significant leap forward as Unix, Linux, and others experimented with virtualization. With advances in processing power, companies built powerful machines capable of handling multiple workloads. In 2005, CPU vendors began offering hardware virtualization for their x86-based products, making hypervisors mainstream.
Why use a hypervisor?
Now that we have answered "what is a hypervisor", it will be useful to explore some of their important applications to better understand the role of hypervisors in virtualized environments. Hypervisors simplify server management because VMs are independent of the host environment. In other words, the operation of one VM does not affect other VMs or the underlying hardware. Therefore, even when one VM crashes, others can continue to work without affecting performance. This allows administrators to move VMs between servers, which is a useful capability for workload balancing. Teams can seamlessly migrate VMs from one machine to another, and they can use this feature for fail-overs. In addition, a hypervisor is useful for running and testing programs in different operating systems.
However, the most important use of hypervisors is consolidating servers in the cloud, and data centers require server consolidation to reduce server sprawl. Virtualization practices and hypervisors have become popular because they are highly effective in solving the problem of underutilized servers. Virtualization enables administrators to easily take advantage of untapped hardware capacity to run multiple workloads at once, rather than running separate workloads on separate physical servers. They can match their workloads with appropriate hardware resources, meeting their time, cost, and service-level requirements.
Need for a Virtualization Management Tool:
Today, most enterprises use hypervisors to simplify server management, and they are the backbone of all cloud services. While virtualization has its advantages, IT teams are often less equipped to manage a complex ecosystem of hypervisors from multiple vendors. It is not always easy to keep track of different types of hypervisors and to accurately monitor the performance of VMs. In addition, the ease of provisioning increases the number of applications and operating systems, increasing the routine maintenance, security, and compliance burden. VMs may still require IT support related to provisioning, de-provisioning, and auditing as per individual security and compliance mandates.
Troubleshooting often involves skimming through multiple product support pages. As organizations grow, the lack of access to proper documentation and technical support can make the implementation and management of hypervisors difficult. Eventually, controlling virtual machine sprawl becomes a significant challenge: different groups within an organization often deploy the same workload to different clouds, increasing inefficiency and complicating data management. IT administrators must employ virtualization management tools to address the above challenges and manage their resources efficiently. Virtualization management tools provide a holistic view of the availability of all VMs, their states (running, stopped, etc.), and host servers. These tools also help in performing basic maintenance, provisioning, de-provisioning, and migration of VMs.
Key Players in Virtualization Management:
There are three broad categories of virtualization management tools available in the market:
 Proprietary tools (with varying degrees of cross-platform support): VMware vCenter, Microsoft SCVMM
 Open-source tools: Citrix XenCenter
 Third-party commercial tools: Dell Foglight, SolarWinds Virtualization Manager, Splunk Virtualization Monitoring System
Cloud computing is an infrastructure and software model that enables ubiquitous access to shared pools of storage, networks, servers, and applications. It allows data processing on a privately owned cloud or on a third-party server. This creates maximum speed and reliability. But the biggest advantages are its ease of installation, low maintenance, and scalability. In this way, it grows with your needs. IaaS and SaaS cloud computing have been skyrocketing since 2009, and cloud computing is all around us now. You're probably reading this on the cloud right now. For some perspective on how important cloud storage and computing are to our daily lives, here are some real-world examples of cloud computing:
 Examples of Cloud Storage
Ex: Dropbox, Gmail, Facebook
The number of online cloud storage providers is increasing every day, and each is competing on the amount of storage that can be provided to the customer. Right now, Dropbox is the clear leader in streamlined cloud storage, allowing users to access files through its application or website on any device, with up to 1 terabyte of free storage. Gmail, Google's email service, on the other hand, offers generous storage on the cloud. Gmail has revolutionized the way we send email and is largely responsible for the increasing use of email across the world. Facebook is a mixture of both in that it can store an enormous amount of information, pictures, and videos on your profile, which can then be easily accessed on multiple devices. Facebook goes a step further with its Messenger app, which allows users to exchange messages and data.
• 30.  Examples of Marketing Cloud Platforms
Ex: Maropost for Marketing, HubSpot, Adobe Marketing Cloud
A marketing cloud is an end-to-end digital marketing platform for customers to manage contacts and target leads. Maropost Marketing Cloud combines easy-to-use marketing automation with hyper-targeting of leads, and its advanced email delivery capabilities help make sure email actually arrives in the inbox. In general, marketing clouds fill the need for personalization, and this is important in a market that demands messaging to be "more human". Communicating that your brand is here to help will make all the difference in closing.
 Examples of Cloud Computing in Education
Ex: SlideRocket, Ratatype, Amazon Web Services
Education is rapidly adopting advanced technology, as students already have. Therefore, to modernize classrooms, teachers have introduced e-learning software like SlideRocket. SlideRocket is a platform that students can use to create and submit presentations, and they can also present them over the cloud via web conferencing. Another tool teachers use is Ratatype, which helps students learn to type faster and offers online typing tests to track their progress. Amazon's AWS Cloud for K12 and Primary Education is a virtual desktop infrastructure (VDI) solution for school administration. The cloud allows instructors and students to access teaching and learning software on multiple devices.
 Examples of Cloud Computing in Healthcare
Ex: ClearDATA, Dell's Secure Healthcare Cloud, IBM Cloud
Cloud computing allows nurses, physicians, and administrators to quickly share information from anywhere. It also saves on costs by allowing large data files to be shared quickly for maximum convenience. This is a huge boost to efficiency. Ultimately, cloud technology ensures that patients receive the best possible care without unnecessary delay. The patient's status can also be updated in seconds through remote conferencing. However, many modern hospitals have not yet implemented cloud computing, though they are expected to do so soon.
 Examples of Cloud Computing for Government
Uses: IT consolidation, shared services, citizen services
The US government and military were early adopters of cloud computing. Under the Obama administration, the U.S. Federal Cloud Computing Strategy was introduced to accelerate cloud adoption across departments. According to the strategy: "The focus will shift from the technology itself to the core competencies and mission of the agency." The US government's cloud includes social, mobile, and analytics technologies. However, agencies must adhere to strict compliance and security measures (FIPS, FISMA, and FedRAMP) to protect against cyber threats both at home and abroad.
Cloud computing is the answer for any business struggling to stay organized, increase ROI, or grow their email lists. Maropost has the digital marketing solutions you need to transform your business.
Cloud computing is becoming more popular day by day. Continuous business expansion and growth require huge computational power and large-scale data storage systems. Cloud computing can help organizations expand and securely move data from physical locations to the 'cloud', where it can be accessed from anywhere. Cloud computing has many features that make it one of the fastest growing industries at present. The flexibility offered by cloud services, in the form of a growing set of tools and technologies, has accelerated its deployment across industries.
This blog will tell you about the essential features of cloud computing.
1. Resource Pooling: Resource pooling is one of the essential features of cloud computing. It means that a cloud service provider can share resources among multiple clients, providing each with a different set of services according to their needs. It is a multi-client strategy that can be applied to data storage, processing, and bandwidth-delivered services. The administrative process of allocating resources in real time does not conflict with the client's experience.
2. On-Demand Self-Service: This is one of the important and essential features of cloud computing. It enables the client to continuously monitor server uptime, capabilities, and allocated network storage. This is a fundamental feature of cloud computing, and a customer can also control their computing capabilities according to their needs.
3. Easy Maintenance: This is one of the best cloud features. Servers are easily maintained, and downtime is minimal or sometimes zero. Cloud-powered resources often undergo several updates to optimize their capabilities and potential. The updated versions are more compatible with devices and perform faster than the previous ones.
4. Scalability and Rapid Elasticity: A key feature and advantage of cloud computing is its rapid scalability. This cloud feature enables cost-effective handling of workloads that require a large number of servers, but only for a short period. Many customers have such workloads, which can be run very cost-effectively because of the rapid scalability of cloud computing.
5. Economical: This cloud feature helps in reducing the IT expenditure of organizations. In cloud computing, clients pay the administration only for the space they use. There are no hidden or additional charges to be paid. Administration is economical, and more often than not, some space is allocated for free.
6. Measured and Reporting Service: Reporting services are one of the many cloud features that make it the best choice for organizations. The measuring and reporting service is helpful for both cloud providers and their customers. It enables both the provider and the customer to monitor and report which services have been used and for what purposes. It helps in monitoring billing and ensuring optimum utilization of resources.
7. Security: Data security is one of the best features of cloud computing. Cloud services make a copy of the stored data to prevent any kind of data loss. If one server loses data by any chance, the copied version is restored from another server. This feature comes in handy when multiple users are working on a particular file in real time and the file suddenly gets corrupted.
8. Automation: Automation is an essential feature of cloud computing. The ability of cloud computing to automatically install, configure, and maintain a cloud service is known as automation in cloud computing. In simple words, it is the process of making the most of the technology and minimizing manual effort. However, achieving automation in a cloud ecosystem is not that easy; it requires the installation and deployment of virtual machines, servers, and large storage. On successful deployment, these resources also require constant maintenance.
9. Resilience: Resilience in cloud computing means the ability of a service to quickly recover from any disruption. The resilience of a cloud is measured by how fast its servers, databases, and network systems restart and recover from any loss or damage.
Availability is another key feature of cloud computing. Since cloud services can be accessed remotely, there are no geographic restrictions or limits on the use of cloud resources.
10. Large Network Access: A big part of the cloud's character is its ubiquity. The client can access cloud data or transfer data to the cloud from any location with a device and an internet connection. These capabilities are available everywhere in the organization and are achieved with the help of the internet. Cloud providers deliver that large network access by monitoring and guaranteeing measurements that reflect how clients access cloud resources and data: latency, access times, data throughput, and more.
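As a small illustration of the kind of access measurement mentioned above, a client can probe a service endpoint and record round-trip latency; the URL below is a placeholder, not a real cloud endpoint:

```python
# Sketch: measure round-trip latency to a (placeholder) cloud endpoint from the
# client side, the sort of metric providers and customers both monitor.
import time
import urllib.request

def probe(url, samples=3):
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read(1024)                     # read a small amount of data
        timings.append(time.perf_counter() - start)
    return min(timings), sum(timings) / len(timings)

# best, avg = probe("https://example.com/health")   # placeholder URL
# print(f"best {best*1000:.1f} ms, average {avg*1000:.1f} ms")
```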
• 31. Cloud services have many benefits, so let's take a closer look at some of the most important ones.
 Flexibility: Cloud computing lets users access files using web-enabled devices such as smartphones and laptops. The ability to simultaneously share documents and other files over the Internet can facilitate collaboration between employees. Cloud services are very easily scalable, so your IT capacity can be increased or decreased depending on the needs of your business.
 Work from anywhere: Users of cloud systems can work from any location as long as they have an Internet connection. Most of the major cloud services offer mobile applications, so there are no restrictions on what type of device you're using. It allows users to be more productive by adjusting the system to their work schedules.
 Cost savings: Using web-based services eliminates the need for large expenditures on implementing and maintaining hardware. Cloud services work on a pay-as-you-go subscription model.
 Automatic updates: With cloud computing, your servers are off-premises and are the responsibility of the service provider. Providers update systems automatically, including security updates. This saves your business the time and money of doing it yourself, which could be better spent focusing on other aspects of your organization.
 Disaster recovery: Cloud-based backup and recovery ensure that your data is secure. Implementing robust disaster recovery was once a problem for small businesses, but cloud solutions now provide these organizations with cost-effective solutions and the expertise they need. Cloud services save time, avoid large investments, and bring third-party expertise to your company.
Multitenancy is a type of software architecture where a single software instance can serve multiple distinct user groups. It means that multiple customers of a cloud vendor use the same computing resources. Although they share the same computing resources, the data of each cloud customer is kept separate and secure. It is a very important concept in cloud computing. Multitenancy is also a form of shared hosting, where the same resources are divided among different customers in cloud computing.
For example, multitenancy works much like a bank. Multiple people can store their money in the same bank, but every customer's assets are kept separate. One customer cannot access another customer's money or account, and different customers are not aware of each other's account balances and details.
Advantages of Multitenancy:
 The use of available resources is maximized by sharing them.
 The customer's cost for physical hardware is reduced, as is the use of physical devices, which in turn saves on power consumption and cooling costs.
 It saves the vendor's costs, since it would be difficult for a cloud vendor to provide separate physical services to each individual customer.
Disadvantages of Multitenancy:
 Data is stored on third-party services, which reduces direct control over data security and can leave data more vulnerable.
 Unauthorized access can cause damage to data.
Each tenant's data is not accessible to the other tenants within the cloud infrastructure and can only be accessed with the permission of the cloud provider. In a private cloud, the customers, or tenants, can be different individuals or groups within the same company. In a public cloud, completely different organizations can securely share the same server space.
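In code, the bank analogy above usually shows up as a shared ("pooled") data model: every record carries a tenant identifier, and every query is scoped to it. A minimal sketch of that pattern, using an in-memory SQLite database and illustrative table and column names (not taken from any particular product):

```python
# Sketch of shared-schema multitenancy: all tenants share one table, and every
# query is filtered by tenant_id, so no tenant can see another tenant's rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (tenant_id TEXT, owner TEXT, balance REAL)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [("tenant_a", "alice", 120.0), ("tenant_b", "bob", 340.0)],
)

def get_accounts(tenant_id):
    # The tenant_id filter is the isolation boundary: tenant_a can never read
    # tenant_b's rows through this function, just as one bank customer cannot
    # see another customer's account.
    return conn.execute(
        "SELECT owner, balance FROM accounts WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(get_accounts("tenant_a"))   # [('alice', 120.0)]
print(get_accounts("tenant_b"))   # [('bob', 340.0)]
```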
Most public cloud providers use a multi-tenancy model, which allows them to serve many customers from a single software instance on shared servers; this is less expensive and helps streamline updates.
Multitenant Cloud vs. Single-Tenant Cloud
In a single-tenant cloud, only one client is hosted on the server and given access to it. Because a multi-tenancy architecture hosts multiple clients on the same server, it is important to fully understand the provider's security and performance. Single-tenant clouds give customers greater control over managing data, storage, security, and performance.
• 32. Benefits of multitenant architecture
There is a whole range of advantages to multitenancy, which are evident in the popularity of cloud computing.
 Multitenancy can save money: Computing is cheap to scale, and multi-tenancy allows resources to be allocated coherently and efficiently, ultimately saving on operating costs. For an individual user, paying for access to a cloud service or SaaS application is often more cost-effective than running single-tenant hardware and software.
 Multitenancy enables flexibility: If you have invested in your own hardware and software, it can reach capacity during times of high demand or sit idle when demand is slow. A multitenant cloud, on the other hand, can allocate a pool of resources to the users who need them as their needs go up and down. As a public cloud provider's customer, you can access additional capacity when needed and not pay for it when you don't.
 Multi-tenancy can be more efficient: Multitenancy reduces the need for individual users to manage the infrastructure and handle updates and maintenance. Individual tenants can rely on a central cloud provider rather than their own teams to handle those routine chores.
Example of Multi-Tenancy:
Multitenant clouds can be compared to the structure of an apartment building. Each resident has access only to their own apartment, and only authorized persons may enter specific units. However, the entire building shares resources such as water, electricity, and common areas. It is similar to a multitenant cloud in that the provider sets broad quotas, rules, and performance expectations for customers, but each customer has private access to their own information.
Multitenancy can describe a hardware or software architecture in which multiple systems, applications, or data from different enterprises are hosted on the same physical hardware. It differs from single-tenancy, in which a server runs a single instance of the operating system and application. In the cloud world, a multitenant cloud architecture enables customers ("tenants") to share computing resources in a public or private cloud. Multitenancy is a common feature of purpose-built, cloud-delivered services, as it allows customers to share resources efficiently while safely scaling up to meet increasing demand. Even though they share resources, cloud customers are unaware of each other, and their data is kept separate.
What does multitenant mean for the cloud?
Cloud providers offer multi-tenancy as a means of sharing the use of computing resources. However, this shared use of resources should not be confused with virtualization, a closely related concept. In a multitenant environment, multiple clients share the same application, in the same operating environment, on the same hardware, with the same storage system. In virtualization, unlike multitenancy, each application runs on a separate virtual machine with its own operating system. Returning to the apartment analogy: each resident has authorized access to their own apartment, yet all residents share water, electricity, and common areas. Similarly, in a multitenant cloud, the provider sets broad terms and performance expectations, but individual customers have private access to their own information. The multitenant design of a cloud service can dramatically impact the delivery of applications and services. It enables unprecedented reliability, availability, and scalability while enabling cost savings, flexibility, and security for IT organizations.
Multi-tenancy, Security, and Zscaler:
The primary advantage of multitenant architectures is that organizations can easily onboard users. With a multitenant cloud, there is no difference between onboarding 10,000 users from one company or 10 users from a thousand companies. This type of platform easily scales to handle increasing demand, whereas other architectures cannot scale as easily. From a security perspective, a multitenant architecture enables policies to be implemented globally across the cloud. That's why Zscaler users can go anywhere, knowing that their traffic will be routed to the nearest Zscaler data center (one of 150 worldwide) and that their policies will follow them. Because of this capability, an organization with a thousand users can now have the same security protections as a much larger organization with tens or hundreds of thousands of employees. Cloud-native SASE architectures will almost always be multitenant, with multiple customers sharing the underlying data plane.
The future of network security is in the cloud:
Corporate networks now extend beyond the traditional "security perimeter" to the Internet. The only way to provide adequate security to users, regardless of where they connect, is by moving security and access control to the cloud. Zscaler leverages multi-tenancy to scale to increasing demands and spikes in traffic without impacting performance. Scalability lets it easily scan every byte of data coming and going over all ports and protocols, including SSL, without negatively impacting the user experience. Another advantage of multitenancy is that as soon as a threat is detected in the Zscaler cloud, all customers can immediately be protected from it. The Zscaler cloud is always updated with the latest security updates to protect customers from rapidly evolving malware; with thousands of new phishing sites appearing every day, standalone appliances cannot keep up. Zscaler also reduces costs and eliminates the complexity of patching, updating, and maintaining hardware and software.
The multitenant environment in Linux:
Anyone setting up a multitenant environment will be faced with the option of isolating environments using virtual machines (VMs) or containers. With VMs, a hypervisor spins up guest machines, each of which has its own operating system, applications, and dependencies. The hypervisor also ensures that users are isolated from each other.
• 33. Compared to VMs, containers offer a more lightweight, flexible, and easy-to-scale model. Containers simplify multi-tenancy deployments by allowing multiple applications to run on a single host, using the kernel and the container runtime to spin up each container. Unlike VMs, which each contain their own kernel, applications running in containers share a kernel, even across multiple tenants. In Linux, namespaces make it possible for multiple containers to use the same resources simultaneously without conflict. Securing a container is much the same as securing any running process. When using Kubernetes for container orchestration, it is possible to set up a multitenant environment using a single Kubernetes cluster, segregating tenants into their own namespaces and creating policies that enforce tenant segregation.
Multitenant Database:
When choosing a database for multitenant applications, developers have to balance the customers' need or desire for data isolation against the ability to respond quickly and economically to growth or spikes in application traffic. To ensure complete isolation, the developer can allocate a separate database instance for each tenant; on the other hand, to ensure maximum scalability, the developer can have all tenants share the same database instance. Most developers, however, opt to use a data store such as PostgreSQL, which enables each tenant to have their own schema within a single database instance (sometimes called 'soft isolation'), giving them the best of both worlds.
What about "hybrid" security solutions?
Organizations are increasingly using cloud-based apps such as Salesforce, Box, and Office 365 while migrating to infrastructure services such as Microsoft Azure and Amazon Web Services (AWS). Therefore, many businesses realize that it makes more sense to secure their traffic in the cloud. In response, older vendors that relied heavily on sales of on-premises hardware appliances promoted so-called "hybrid solutions", in which data center security is handled by appliances while mobile or branch security is handled by a similar security stack housed in a cloud environment. This hybrid strategy complicates, rather than simplifies, enterprise security: cloud users and administrators get none of the benefits of a true cloud service, such as speed, scale, global visibility, and threat intelligence, which can only be provided through a multitenant global architecture.
The use of a widely dispersed system strategy to accomplish a common objective is called grid computing. A computational grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing differs from conventional high-performance computing systems such as cluster computing in that each node can be set to perform a different task or application. Grid computers are also more heterogeneous and geographically dispersed than cluster machines and are not physically connected. However, a single grid can be dedicated to a particular application, and grids are frequently used for a variety of purposes. Grids are often built with general-purpose grid middleware software packages, and the size of a grid can be very large. Grids are a form of distributed computing in which a "super virtual computer" is composed of many loosely coupled devices working together to perform massive operations.
Distributed or grid computing is a form of parallel processing that uses complete devices (with onboard CPUs, storage, power supplies, network connectivity, and so on) connected to a network (private or public) by a conventional network connection, such as Ethernet, for specific applications. This contrasts with the traditional notion of a supercomputer, in which many processors are connected by a local high-speed bus. The technique has been applied in commercial enterprises to applications ranging from drug discovery and market analysis to seismic analysis and back-office data management in support of e-commerce and online services. It has also been applied to computationally demanding scientific, mathematical, and academic problems through volunteer computing.
Grid computing brings together machines from multiple administrative domains to achieve a common goal, such as completing a single task, and may then dissolve just as quickly. Grids can range from a group of computer workstations within a firm to open collaborations involving multiple organizations and networks. A small grid can also be referred to as intra-node cooperation, while a larger, broader grid can be referred to as inter-node cooperation. Managing grid applications can be difficult, particularly when coordinating the data flows among distant computational resources. A grid workflow system is a specialized form of workflow automation software built expressly for composing and executing a sequence of computational or data manipulation steps, or a workflow, in a grid setting.
History of Grid Computing
In the early nineties, the phrase "grid computing" was used as an analogy for making computational power as accessible as an electricity network.
 When Ian Foster and Carl Kesselman published their landmark work, "The Grid: Blueprint for a New Computing Infrastructure" (1999), the electric grid analogy for scalable computing quickly became canonical. The concept of grid computing (1961) predated this by decades: computing as a utility service, similar to the telephone network.
 Ian Foster and Steve Tuecke of the University of Chicago and Carl Kesselman of the University of Southern California's Information Sciences Institute brought together the grid's concepts (which included those from distributed computing, object-oriented programming, and web services). The three are popularly considered the "fathers of the grid" because they led the initiative to establish the Globus Toolkit. The toolkit includes storage management, security provisioning, data movement, and monitoring, as well as a toolkit for developing additional services on the same infrastructure, such as agreement negotiation, notification mechanisms, trigger services, and information aggregation.
 Although the Globus Toolkit remains the de facto standard for developing grid systems, a number of other tools have been developed to address a subset of the capabilities required to establish a worldwide or enterprise grid.
 The phrase "cloud computing" became popular in 2007. It is conceptually similar to the canonical Foster definition of grid computing (in which computing resources are consumed as power is consumed from the electrical grid) and to earlier utility computing.
Grid computing is frequently (but not always) linked to the delivery of cloud computing environments, as demonstrated by 3tera's AppLogic technology.
• 34. Comparison between Grid and Supercomputers:
In summary, "distributed" or "grid" computing relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, and so on) connected to a network (private, public, or the internet) via a conventional network interface, resulting in commodity hardware, as opposed to the lower efficiency of designing and building a small number of custom supercomputers. The main performance drawback is the lack of high-speed connections between the multiple CPUs and local storage areas. This configuration is well suited to applications in which many parallel computations can be performed independently, with no need to communicate intermediate results between processors. Because the need for communication among the units is low compared to the capacity of the public internet, the high-end scalability of geographically dispersed grids is often favorable.
There are also differences in programming and deployment. Writing programs that can run in the environment of a supercomputer, which may have a custom operating system or require the program to address concurrency issues, can be expensive and challenging. If a problem can be suitably parallelized, a "thin" layer of "grid" infrastructure can allow conventional, standalone programs to be run on multiple machines, each solving a different part of the same problem. This reduces the problems caused by multiple instances of the same program running in the same shared memory and storage space at the same time, and it allows for writing and debugging on a single conventional machine.
Differences and Architectural Constraints:
Grids can combine computing resources from one or more individuals or organizations (known as multiple administrative domains).
This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks. One drawback of this feature is that the computers actually performing the calculations may not be entirely trustworthy. The designers of the system must therefore introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit; discrepancies would identify malfunctioning or malicious nodes. Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at arbitrary times. Some nodes (such as laptops or dial-up internet customers) may be available for computation but not for network communication for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and by reassigning work units when a node fails to report its results within the expected time frame.
Another set of social-compatibility issues in the early days of grid computing involved the desire of grid developers to carry their innovation beyond the original field of high-performance computing and across disciplinary boundaries into new fields such as high-energy physics. The impact of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated cluster, onto idle machines inside the developing organization, or onto an open external network of volunteers or contractors. In many cases, the participating nodes must trust the central system not to abuse the access it is granted, for example by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures such as virtual machines to reduce the amount of trust that "client" nodes must place in the central system.
Public systems, or those crossing administrative domains (such as different departments within the same company), often result in the need to run on heterogeneous systems using different operating systems and hardware architectures. With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this trade-off, but potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform). Various middleware projects have created generic infrastructure to allow diverse scientific and commercial projects to harness a particular associated grid or to set up new grids. BOINC is a common platform for research projects seeking public volunteers. In fact, the middleware can be seen as a layer between the hardware and the software. On top of the middleware, a number of technical areas have to be considered, and these may or may not be middleware-independent. Examples include SLA management, trust and security, virtual organization management, license management, portals, and data management.
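The redundancy check described above (handing the same work unit to at least two independently owned nodes and comparing their answers) can be sketched as a simple quorum vote; real grid middleware such as BOINC implements far richer validation than this illustrative snippet:

```python
# Sketch of result validation on an untrusted grid: accept an answer for a work
# unit only when enough independent nodes agree, and flag the nodes that disagree.
from collections import Counter

def validate_work_unit(results, quorum=2):
    """results: list of (node_id, answer) pairs reported for one work unit."""
    counts = Counter(answer for _node, answer in results)
    answer, votes = counts.most_common(1)[0]
    if votes >= quorum:
        suspects = [node for node, a in results if a != answer]
        return answer, suspects            # accepted answer + nodes to distrust
    return None, []                        # no agreement yet: reissue the work unit

accepted, suspects = validate_work_unit(
    [("node-17", 42), ("node-03", 42), ("node-88", 41)]
)
print(accepted, suspects)                  # 42 ['node-88']
```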
These technical areas may be addressed in commercial solutions, though the cutting edge of each area is often found in dedicated research projects examining the field.
CPU Scavenging:
CPU scavenging, cycle scavenging, or shared computing creates a "grid" from the idle resources in a network of participants (whether worldwide or internal to an organization). Typically, this technique exploits the 'spare' instruction cycles resulting from intermittent inactivity, such as at night, during lunch breaks, or during the (very brief but frequent) moments of idle waiting that desktop workstation CPUs experience throughout the day. In practice, in addition to raw CPU power, participating machines also donate some disk storage capacity, RAM, and network bandwidth. The CPU scavenging model is used by many volunteer computing projects, such as BOINC. Because nodes are likely to go "offline" from time to time as their owners use their resources for their primary purpose, this model must be designed to handle such scenarios.
Creating an opportunistic environment, also known as an enterprise desktop grid, is another form of computing in which a specialized workload management system harvests idle desktop and laptop computers for compute-intensive jobs. HTCondor, an open-source high-throughput computing framework for coarse-grained distributed parallelization of computationally intensive tasks, can, for example, be configured to use only machines where the keyboard and mouse are idle, effectively harnessing wasted CPU power from otherwise idle desktop workstations. Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. It can manage workload on a dedicated cluster of machines, or it can seamlessly blend dedicated resources (rack-mounted clusters) and non-dedicated desktop workstations (cycle scavenging) into a single computing environment.
Fastest Virtual Supercomputers
 BOINC - 29.8 PFLOPS as of April 7, 2020.
 Folding@home - 1.1 exaFLOPS as of March 2020.
 Einstein@Home - 3.489 PFLOPS as of February 2018.
 SETI@Home - 1.11 PFLOPS as of April 7, 2020.
• 35.  MilkyWay@Home - 1.465 PFLOPS as of April 7, 2020.
 GIMPS - 0.558 PFLOPS as of March 2019.
In addition, as of March 2019 the Bitcoin network had a compute power equivalent to about 80,000 exaFLOPS (floating-point operations per second). Because the elements of the Bitcoin network (Bitcoin mining ASICs) perform only the specific cryptographic hash computation required by the Bitcoin protocol, this measurement reflects the number of FLOPS equivalent to the hash output of the Bitcoin network rather than its capacity for general floating-point arithmetic operations.
Today's Applications and Projects:
Grids are a way to make the most of a group's information technology systems. Grid computing supports the Large Hadron Collider at CERN and is used to tackle challenges like protein folding, financial modelling, earthquake simulation, and climate modelling. Grids also allow information technology to be provided as a commodity to both commercial and noncommercial clients, with those clients paying only for what they use, much as electricity or water is provided.
As of October 2016, over 4 million machines running the open-source Berkeley Open Infrastructure for Network Computing (BOINC) platform were members of the World Community Grid. SETI@home is one of the programs that uses BOINC; as of October 2016, it was employing over 400,000 machines to reach 0.828 TFLOPS. Folding@home, which is not affiliated with BOINC, had reached more than 101 x86-equivalent petaFLOPS on over 110,000 machines as of October 2016.
The European Union sponsored activities through the framework programmes of the European Commission. The European Commission financed BEinGRID (Business Experiments in Grid) as an Integrated Project under the Sixth Framework Programme (FP6). The project started on June 1, 2006, and ran for 42 months, until November 2009. Atos Origin was in charge of the project's coordination. According to the project fact sheet, its mission was to establish effective routes to foster the adoption of grid computing across the EU and to stimulate research into innovative business models using grid technologies. To extract best practices and common themes from the experimental implementations, two groups of consultants analyzed a series of pilots, one technical and one business-focused. The project is significant not only for its long duration but also for its budget, which at 24.8 million euros is the largest of any FP6 Integrated Project. Of this, 15.7 million is provided by the European Commission, with the remainder coming from its 98 contributing partner companies. Since the end of the project, the results of BEinGRID have been taken up and carried forward by IT-Tude.com.
The Enabling Grids for E-sciencE initiative, which was a follow-up to the European DataGrid (EDG) and evolved into the European Grid Infrastructure, was based in the European Union and included sites in Asia and the United States. Along with the LHC Computing Grid (LCG), it was created to support experiments at CERN's Large Hadron Collider. A list of active LCG sites and real-time monitoring of the EGEE infrastructure are available online, and the relevant software and documentation are also publicly accessible. Dedicated fiber-optic links, such as those installed by CERN to meet the LCG's data-intensive needs, may one day be available to home users, giving them internet access at speeds up to 10,000 times faster than a traditional broadband connection.
The distributed.net project was started in 1997. The NASA Advanced Supercomputing facility (NAS) ran genetic algorithms using the Condor cycle scavenger running on about 350 Sun Microsystems and SGI workstations. In 2001, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycled through volunteer PCs connected to the internet. Before it was shut down in 2007, the project had run on about 3.1 million machines.
Aneka includes an extensible set of APIs associated with programming models like MapReduce. These APIs support different cloud models such as private, public, and hybrid clouds. Manjrasoft focuses on creating innovative software technologies to simplify the development and deployment of private or public cloud applications. Its product, Aneka, plays the role of an application platform as a service for cloud computing.
 About Aneka:
 Aneka is a software platform for developing cloud computing applications.
 In Aneka, cloud applications are executed.
 Aneka is a pure PaaS solution for cloud computing.
 Aneka is a cloud middleware product.
 Aneka can be deployed over a network of computers, a multicore server, a data center, a virtual cloud infrastructure, or a combination thereof.
The services of the Aneka container can be classified into three major categories:
1. Fabric Services: Fabric services define the lowest level of the software stack that represents the Aneka container. They provide access to the resource-provisioning subsystem and to the monitoring features implemented in Aneka.
2. Foundation Services: Foundation services are the core services of the Aneka Cloud and define the infrastructure management features of the system. Foundation services are concerned with the logical management of a distributed system built on top of the infrastructure and provide ancillary services for delivering applications.
3. Application Services: Application services manage the execution of applications and constitute a layer that varies according to the specific programming model used to develop distributed applications on top of Aneka.
There are two major components in Aneka:
The SDK (Software Development Kit) includes the Application Programming Interface (API) and the tools needed for the rapid development of applications. The Aneka API supports three popular cloud programming models: Tasks, Threads, and MapReduce. The second component is a runtime engine and platform for managing the deployment and execution of applications on a private or public cloud.
One of the notable features of Aneka PaaS is its support for provisioning private cloud resources from desktops, clusters, and virtual data centers using VMware and Citrix XenServer, and public cloud resources such as Windows Azure, Amazon EC2, and the GoGrid cloud service. Aneka's potential as a Platform as a Service has been successfully harnessed by its users and customers in different areas, including engineering, life sciences, education, and business intelligence.
Architecture of Aneka
• 36. Aneka is a platform and framework for developing distributed applications on the Cloud. It harnesses on-demand the spare CPU cycles of desktop PCs in addition to a heterogeneous network of servers or datacenters. Aneka provides a rich set of APIs for developers to transparently exploit such resources and express the business logic of applications using preferred programming abstractions. System administrators can leverage a collection of tools to monitor and control the deployed infrastructure. An Aneka cloud can be a public cloud available to anyone via the Internet or a private cloud formed by nodes with restricted access.
An Aneka-based computing cloud is a collection of physical and virtualized resources connected via a network, either the Internet or a private intranet. Each of these resources hosts an instance of the Aneka container, which represents the runtime environment where distributed applications are executed. The container provides the basic management features of a single node and builds all the other functions on the services it hosts. Services are divided into fabric, foundation, and execution services. Foundation services identify the core system of the Aneka middleware and provide a set of infrastructure features that enable Aneka containers to perform specialized tasks. Fabric services interact directly with nodes through the Platform Abstraction Layer (PAL) and perform hardware profiling and dynamic resource provisioning. Execution services deal directly with the scheduling and execution of applications in the Cloud.
One of the key features of Aneka is its ability to provide a variety of ways to express distributed applications by offering different programming models; execution services are mostly concerned with providing the middleware with an implementation of these models. Additional services such as persistence and security are transversal to the whole stack of services hosted by the container. At the application level, a set of different components and tools is provided to:
 simplify the development of applications (SDKs),
 port existing applications to the Cloud, and
 monitor and manage Aneka clouds.
An Aneka-based cloud is formed by interconnected resources that are dynamically modified according to user needs, using resource virtualization or the additional CPU cycles of desktop machines. A common deployment of Aneka is a private cloud, where all resources are in-house, for example within the enterprise. This deployment can be extended by adding publicly available on-demand resources or by interacting with other public clouds that provide computing resources connected over the Internet.
Cloud scalability in cloud computing refers to increasing or decreasing IT resources as needed to meet changing demand. Scalability is one of the hallmarks of the cloud and the primary driver of its explosive popularity with businesses. Data storage capacity, processing power, and networking can all be increased by using existing cloud computing infrastructure. Scaling can be done quickly and easily, usually without any disruption or downtime. Third-party cloud providers already have the entire infrastructure in place; in the past, when scaling up with on-premises physical infrastructure, the process could take weeks or months and require exorbitant expense. This is one of the most popular and beneficial features of cloud computing, as businesses can scale up or down to meet demand depending on the season, projects, development, and so on.
By implementing cloud scalability, you enable your resources to grow as your traffic or organization grows, and vice versa. There are a few main ways to scale in the cloud, described below. If your business needs more data storage capacity or processing power, you'll want a system that scales easily and quickly. Cloud computing solutions can do just that, which is why the market has grown so much. Using existing cloud infrastructure, third-party cloud vendors can scale with minimal disruption.
Types of scaling
1. Vertical Scaling: To understand vertical scaling, imagine a 20-story hotel. There are innumerable rooms inside this hotel, and guests keep coming and going. Often there are rooms available, as not all rooms are occupied at once, so people can move in easily as there is space for them. As long as the capacity of the hotel is not exceeded, there is no problem. This is vertical scaling. With computing, you can add or remove resources, including memory or storage, within a server, as long as the resources do not exceed the capacity of the machine. Although it has its limitations, it is a way to improve your server and avoid latency and extra management. Like in the hotel example, resources can come and go easily and quickly, as long as there is room for them.
• 37. 2. Horizontal Scaling: Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars travel smoothly in each direction without major traffic problems. But then the area around the highway develops: new buildings are built, and traffic increases. Very soon, this two-lane highway is filled with cars, and accidents become common. Two lanes are no longer enough. To avoid these issues, more lanes are added, and an overpass is constructed. Although it takes a long time, it solves the problem. Horizontal scaling refers to adding more servers to your network, rather than simply adding resources as with vertical scaling. This method tends to take more time and is more complex, but it allows you to connect servers together, handle traffic efficiently, and execute concurrent workloads (see the sketch at the end of this section). 3. Diagonal Scaling: This is a mixture of both horizontal and vertical scalability, where resources are added both vertically and horizontally. When you combine vertical and horizontal, you get diagonal scaling, which allows you to experience the most efficient infrastructure scaling: you grow within your existing server until you hit its capacity, then clone that server as necessary and continue the process, allowing you to deal with a lot of requests and traffic concurrently. Scale in the Cloud: When you move scaling into the cloud, you experience an enormous amount of flexibility that saves both money and time for a business. When your demand booms, it's easy to scale up to accommodate the new load. As things level out again, you can scale down accordingly. This is so significant because cloud computing uses a pay-as-you-go model. Traditionally, professionals guess their maximum capacity needs and purchase everything up front. If they overestimate, they pay for unused resources. If they underestimate, they don't have the services and resources necessary to operate effectively. With cloud scaling, businesses get the capacity they need when they need it, and they simply pay based on usage. This on-demand nature is what makes the cloud so appealing. You can start small and adjust as you go. It's quick, it's easy, and you're in control. Difference between Cloud Elasticity and Scalability:
 Elasticity is used to meet sudden ups and downs in the workload for a short period of time; scalability is used to meet a steady, persistent increase in the workload.
 Elasticity is used to meet dynamic changes, where the resource needs can increase or decrease; scalability is used to address an expected growth in workload in an organization.
 Elasticity is commonly used by small companies whose workload and demand increase only for a specific period of time; scalability is used by large companies whose customer base grows persistently, so they can run operations efficiently.
 Elasticity is short-term planning, adopted to deal with unexpected or seasonal spikes in demand; scalability is long-term planning, adopted to deal with an expected increase in demand.
Why is the cloud scalable? Scalable cloud architecture is made possible through virtualization. Unlike physical machines, whose resources and performance are relatively fixed, virtual machines (VMs) are highly flexible and can be easily scaled up or down. They can be moved to a different server or hosted on multiple servers at once; workloads and applications can be shifted to larger VMs as needed. Third-party cloud providers also have vast hardware and software resources already in place to allow for rapid scaling that an individual business could not achieve cost-effectively on its own.
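To make the vertical, horizontal, and diagonal distinction concrete, here is a minimal, hypothetical sketch in Python of a diagonal-scaling decision: grow a single server vertically until it reaches the machine's ceiling, then add servers horizontally. The capacity numbers and function names are illustrative assumptions, not any provider's API.

```python
from dataclasses import dataclass
from math import ceil

# Illustrative limits; real limits depend on the machine types a provider offers.
MAX_VCPUS_PER_SERVER = 64   # vertical ceiling of a single machine
VCPUS_PER_STEP = 4          # granularity when resizing a server

@dataclass
class Plan:
    servers: int      # horizontal dimension: how many servers
    vcpus_each: int   # vertical dimension: size of each server

def diagonal_scale(required_vcpus: int) -> Plan:
    """Grow one server vertically until it hits its ceiling, then clone it horizontally."""
    if required_vcpus <= MAX_VCPUS_PER_SERVER:
        # Vertical scaling only: round the size up to the next step.
        size = ceil(required_vcpus / VCPUS_PER_STEP) * VCPUS_PER_STEP
        return Plan(servers=1, vcpus_each=size)
    # Horizontal scaling: clone full-size servers until demand is covered.
    servers = ceil(required_vcpus / MAX_VCPUS_PER_SERVER)
    return Plan(servers=servers, vcpus_each=MAX_VCPUS_PER_SERVER)

if __name__ == "__main__":
    for demand in (10, 64, 200):
        plan = diagonal_scale(demand)
        print(f"{demand} vCPUs needed -> {plan.servers} server(s) x {plan.vcpus_each} vCPUs")
```

In practice a provider's autoscaler makes this decision from live metrics rather than a fixed demand figure, which is the elasticity discussed later in this section.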
Benefits of cloud scalability. Key cloud scalability benefits driving cloud adoption for businesses large and small:  Convenience: Often, with just a few clicks, IT administrators can add more VMs that are available and customized to an organization's exact needs, without delay. Teams can focus on other tasks instead of setting up physical hardware for hours or days, which saves valuable IT staff time.  Flexibility and speed: As business needs change and grow, including unexpected demand spikes, cloud scalability allows IT to respond quickly. Companies are no longer tied to obsolete equipment; they can update systems and easily increase power and storage. Today, even small businesses have access to high-powered resources that used to be cost-prohibitive.  Cost savings: Thanks to cloud scalability, businesses can avoid the upfront cost of purchasing expensive equipment that could become obsolete in a few years. Through cloud providers, they pay only for what they use and reduce waste.  Disaster recovery: With scalable cloud computing, you can reduce disaster recovery costs by eliminating the need to build and maintain secondary data centers. When to use cloud scalability? Successful businesses use scalable business models to grow rapidly and meet changing demands. It's no different with their IT. Cloud scalability benefits help businesses stay agile and competitive.
• 38. Scalability is one of the driving reasons for migrating to the cloud. Whether traffic or workload demands increase suddenly or grow gradually over time, a scalable cloud solution enables organizations to respond appropriately and cost-effectively to increased storage and performance needs. How do you determine optimal cloud scalability? Changing business needs or growing demand often require changes to your scalable cloud solution. But how much storage, memory, and processing power do you need? Will you scale in or out? To determine the right-sized solution, continuous performance testing is essential. IT administrators must continuously measure response times, the number of requests, CPU load, and memory usage. Scalability testing also measures an application's performance and its ability to scale up or down based on user requests. Automation can also help optimize cloud scalability: you can set a usage threshold that triggers automatic scaling without affecting performance. You may also consider a third-party configuration management service or tool to help you manage your scaling needs, goals, and implementation. The IT market is still buzzing because of the advent of cloud computing. Although the breakthrough technology first came out some ten years back, companies are still benefiting from it in various forms. The cloud has offered more than just data storage and security benefits. It has also caused a storm of confusion within organizations, because new terms are constantly being invented to describe the various cloud types. At first, the IT industry recognized private cloud infrastructures that could support only the data and workload of a particular company. As time passed, cloud-based solutions developed into public clouds managed by third-party companies such as AWS, Google Cloud, and Microsoft. Today the cloud can also support hybrid and multi-cloud infrastructure. What is Multi-Cloud? Multi-cloud is the dispersion of cloud-based assets, software, and apps across several cloud environments. With a mix-and-match strategy across diverse cloud services, the multi-cloud infrastructure can be managed specifically for each workload. The main benefit of a multi-cloud for many companies is the possibility of using two or more cloud services or private clouds to avoid dependence on a single cloud provider. However, a multi-cloud does not involve orchestration or interconnection between these various services. Challenges around Multi-Cloud:  Siloed cloud providers - Sometimes, different cloud providers cause issues with cloud monitoring and management, since their tools monitor workloads exclusively within their own cloud infrastructure.  Insufficient knowledge - Multi-cloud is a relatively new concept, and the market isn't yet at a point where many people are proficient in multi-cloud.  Selecting different cloud vendors - Many organizations have trouble choosing cloud providers that cooperate with each other without difficulties. Why do Multi-Cloud? Multi-cloud technology supports changes and growth in business. In every company, each department or team has its own tasks, organizational roles, and volume of data produced. They also have different requirements in terms of security, performance, and privacy.
In turn, the use of multi-cloud in this type of business setting allows companies to satisfy the unique requirements of their departments with respect to data storage, structure, and security. Additionally, businesses must be able to adapt and allow their IT to evolve as the business expands; multi-cloud is as much a business-enablement strategy as an IT-forward plan. Looking deeper into multi-cloud's many advantages, companies get an edge in the marketplace, both technologically and commercially. They also enjoy geographical benefits, since multi-cloud helps address app latency issues to a great extent. Two other important issues push enterprises toward multi-cloud: vendor lock-in and cloud-provider outages. Multi-cloud solutions can be a powerful tool for preventing vendor lock-in, a method to reduce the impact of failure or downtime at any one location, and a way to take advantage of unique services from various cloud providers. Simply put, CIOs and enterprise IT executives are opting for multi-cloud because it allows greater flexibility as well as complete control of the business's data and workloads. Often, business decision-makers prefer multi-cloud options together with a hybrid cloud strategy. What is Hybrid Cloud? The term "hybrid cloud" refers to a mix of on-premises private cloud and third-party public cloud services; it can also combine public and private clouds with conventional data centres. In simple terms, it is made up of multiple cloud combinations. The mix could consist of two private clouds, two public clouds, or one public cloud and one private cloud. Challenges around Hybrid Cloud:  Security - With the hybrid cloud model, enterprises must handle different security platforms simultaneously while transferring specific data from the private cloud to the public cloud or the reverse.  Complexities associated with cloud integration - A high level of technical expertise is required to seamlessly integrate public and private cloud architectures without adding complexity to the process.  Complications around scaling - As the data grows, the cloud must also be able to grow. However, altering the hybrid cloud's architecture to keep up with data growth can be extremely difficult. Why do Hybrid Cloud? No matter how big the business, the transition to cloud computing cannot be completed in one straightforward move. Even when companies plan to migrate to a public cloud managed by third parties, proper planning is essential to make the cloud implementation as precise as possible. Before jumping into the cloud, companies should create a checklist of the data, resources, workloads, and systems that will be moved to the cloud, while the rest remain in their own data centres. In general terms, this interoperability is a well-known and dependable illustration of the hybrid cloud. Furthermore, unless a business was born in the cloud, it is likely to be on a path that involves preparation, strategy, and support for both cloud infrastructure and existing infrastructure.
Many companies have also considered constructing and implementing a distinct cloud environment for their IT requirements, integrated with their existing data centers, in order to reduce interference between internal processes and cloud-based tools. However, the complexity of such a setup increases rather than decreases, because of the need to perform a range of functions across different environments. In this scenario, it is essential that every business ensures it has the resources to create and implement an integrated platform that provides a practical design and architecture for business operations.
• 39. Which Cloud-based Solution to Adopt? Both hybrid and multi-cloud platforms provide distinct advantages to companies, which can be confusing. What is the best way of picking one of the two to help a business succeed? Which cloud service is suitable for which department or workload? How can you ensure that implementing one of these options will benefit the organization in the years to come? These questions are addressed in the next section, which explains how the two cloud approaches differ from each other and which one is the better choice for a given organization. How does Multi-Cloud Differ from a Hybrid Cloud? There are distinct differences between hybrid and multi-cloud in the commercial realm, even though the two terms are commonly used together. The distinction is also expected to grow in importance, since multi-cloud computing has become the default for numerous organizations.  As noted above, the multi-cloud approach makes use of several cloud services that typically come from different third-party cloud providers. This strategy allows companies to find diverse cloud solutions for various departments.  In contrast to the multi-cloud model, hybrid cloud components typically work together. Processes and data tend to mix and interconnect in a hybrid cloud environment, in contrast to multi-cloud environments, which operate in silos.  Multi-cloud can provide organizations with additional peace of mind because it reduces dependence on a single cloud provider, thus reducing costs and enhancing flexibility.  Practically speaking, an application running on a hybrid cloud platform might use load balancing, web services, and applications from a public cloud, while databases and storage sit in a private cloud; the solution includes resources that span the private and public cloud functions.  Practically speaking, an application running in a multi-cloud environment could perform all computing and networking tasks on one cloud service and utilize database services from another provider. In multi-cloud environments, certain applications could use resources exclusively in Azure, while other applications use resources exclusively from AWS. Another example is the use of a private and a public cloud: some applications use resources only within the public cloud, whereas others use resources only within private clouds.  Despite their differences, both cloud-based approaches give businesses the ability to provide their services to customers in an efficient and productive way. Elasticity is, in a sense, a rename of scalability, a non-functional requirement that has been known in IT architecture for many years. Scalability is the ability to add or remove capacity, mostly processing, memory, or both, from an IT environment; it is the ability to dynamically scale the services provided to match customers' need for space and other services, and it is one of the five fundamental aspects of cloud computing. It is usually done in two ways:  Horizontal Scalability: Adding or removing nodes, servers, or instances to or from a pool, such as a cluster or a farm.  Vertical Scalability: Adding or removing resources to an existing node, server, or instance to increase its capacity. Most scalability implementations use the horizontal method, as it is the easiest to implement, especially in the current web-based world we live in.
Vertical scaling is less dynamic because it requires reboots of systems and sometimes the addition of physical components to servers. A well-known example of horizontal scaling is adding a load balancer in front of a farm of web servers to distribute the requests. Why call it Elasticity? Traditional IT environments have scalability built into their architecture, but scaling up or down isn't done very often. It has to do with the amount of time, effort, and cost involved in scaling. Servers have to be purchased, screwed into server racks, installed and configured, and then the test team needs to verify functioning; only after that is done can the new capacity go live. And you don't buy a server for just a few months; typically, it's a three-to-five-year investment. Elasticity does the same thing, but more like a rubber band: you 'stretch' the capacity when you need it and 'release' it when you don't. This is possible because of other features of cloud computing, such as "resource pooling" and "on-demand self-service". Combining these features with advanced image management capabilities allows you to scale more efficiently. Three forms of scalability: Below, the three forms of scalability are described, along with what makes them different from each other.  Manual Scaling: Manual scalability begins with forecasting the expected workload on a cluster or farm of resources and then manually adding resources to add capacity. Ordering, installing, and configuring physical resources takes a lot of time, so forecasting needs to be done weeks, if not months, in advance. It is mostly done with physical servers, which are installed and configured manually. Another downside of manual scalability is that removing resources does not result in cost savings, because the physical server has already been paid for.  Semi-automated Scaling: Semi-automated scalability takes advantage of virtual servers, which are provisioned (installed) from predefined images. A manual forecast or an automated warning from system monitoring tooling triggers operations to expand or reduce the cluster or farm of resources. Using predefined, tested, and approved images, every new virtual server is the same as the others (except for some minor configuration), which gives you repeatable results. It also significantly reduces manual work on the systems, and manual actions on systems are widely reported to cause around 70 to 80 percent of all errors. There is another big benefit to using virtual servers: costs are saved as soon as a virtual server is de-provisioned, because the freed resources can be used directly for other purposes.  Elastic Scaling (fully automatic scaling): Elasticity, or fully automatic scalability, takes advantage of the same concepts as semi-automatic scalability but removes any manual labor needed to increase or decrease capacity. Everything is controlled by a trigger from the system monitoring tooling, which gives you the "rubber band" effect: if more capacity is needed now, it is added now and is there in minutes, and when the monitoring tooling signals that the capacity is no longer needed, it is reduced just as quickly (a minimal sketch of such a loop follows below).
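As a rough illustration of that last form, here is a minimal, hypothetical sketch of an elastic scaling loop driven by monitoring metrics. The thresholds, cooldown, and the `get_average_cpu` / `set_instance_count` helpers are invented for illustration; a real deployment would use the provider's auto-scaling service rather than hand-rolled code.

```python
import random
import time

# Illustrative thresholds; real values come from load testing and the provider's tooling.
SCALE_OUT_CPU = 75.0    # % average CPU above which we add an instance
SCALE_IN_CPU = 25.0     # % average CPU below which we remove an instance
MIN_INSTANCES, MAX_INSTANCES = 2, 10
COOLDOWN_SECONDS = 1    # short pause for the demo; minutes in a real autoscaler

def get_average_cpu() -> float:
    """Stand-in for a monitoring query (e.g., average CPU over the last 5 minutes)."""
    return random.uniform(0, 100)

def set_instance_count(count: int) -> None:
    """Stand-in for an API call that resizes the pool of servers."""
    print(f"pool resized to {count} instance(s)")

def autoscale_loop(iterations: int = 5) -> None:
    instances = MIN_INSTANCES
    for _ in range(iterations):
        cpu = get_average_cpu()
        if cpu > SCALE_OUT_CPU and instances < MAX_INSTANCES:
            instances += 1                 # scale out: add capacity now
            set_instance_count(instances)
        elif cpu < SCALE_IN_CPU and instances > MIN_INSTANCES:
            instances -= 1                 # scale in: release capacity
            set_instance_count(instances)
        time.sleep(COOLDOWN_SECONDS)       # cooldown before the next decision

if __name__ == "__main__":
    autoscale_loop()
```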
• 40. Scalability vs. Elasticity in Cloud Computing: Imagine a restaurant in an excellent location. It can accommodate up to 30 customers, including outdoor seating. Customers come and go throughout the day, so the restaurant rarely exceeds its seating capacity. It increases and decreases its seating within the limits of its seating area: the staff adds a table or two at lunch and dinner when more people stream in with an appetite, then removes the tables and chairs to de-clutter the space. A nearby center hosts a bi-annual event that attracts hundreds of attendees for a week-long convention. The restaurant often sees increased traffic during convention weeks, and the demand is usually so high that it has to turn customers away. It loses business and customers to nearby competitors, and it has disappointed those potential customers for two years in a row. Elasticity allows a cloud provider's customers to achieve cost savings, which are often the main reason for adopting cloud services. Depending on the type of cloud service, discounts are sometimes offered for long-term contracts with cloud providers; if you are willing to pay a higher price and not be locked in, you get flexibility. Let's look at some examples of where we can use it.  Cloud Rapid Elasticity Example 1: Say 10 servers are needed for a three-month project. The company can provision them as cloud services within minutes, pay a small monthly OpEx fee to run them rather than a large upfront CapEx cost, and decommission them at the end of the three months at no charge (a rough cost sketch follows after this section). Compare this to the time before cloud computing was available: if a customer came to us with the same opportunity, we would have to buy 10 more servers as a huge capital cost, and when the project finished at the end of three months, we would be left with servers we no longer need. That is not economical, which could mean we have to forgo the opportunity. Because cloud services are much more cost-efficient, we are more likely to take the opportunity, giving us an advantage over our competitors.  Cloud Rapid Elasticity Example 2: Say we are an eCommerce store. We're probably going to get more seasonal demand around Christmas time. Using cloud computing, we can automatically spin up new servers as demand grows. The platform monitors the load on the server's CPU, memory, bandwidth, and so on; when it reaches a certain threshold, we can automatically add new servers to the pool to help meet demand. When demand drops again, we may have a lower limit below which we automatically shut servers down. In this way, we can automatically move resources in and out to meet current demand.  Cloud-based software service example: If we need to use cloud-based software for a short period, we can pay for it instead of buying a one-time perpetual license. Most software-as-a-service companies offer a range of pricing options that support different features and durations, so we can choose the most cost-effective one. There will often be monthly pricing options, so if you need occasional access, you can pay for it as and when needed. What is the Purpose of Cloud Elasticity? Cloud elasticity helps users prevent over-provisioning or under-provisioning system resources. Over-provisioning refers to a scenario where you buy more capacity than you need; under-provisioning refers to allocating fewer resources than you need. Over-provisioning leads to wasted cloud spend, while under-provisioning can lead to server outages as the available servers are overworked. Server outages result in revenue loss and customer dissatisfaction, which is bad for business. Scaling with elasticity provides a middle ground.
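To put rough numbers on Example 1, here is a small, hypothetical cost comparison between buying servers up front and renting equivalent cloud capacity for three months. The prices are invented placeholders, not quotes from any provider.

```python
# Hypothetical figures for illustration only.
SERVERS = 10
PROJECT_MONTHS = 3

PURCHASE_PRICE_PER_SERVER = 4_000      # upfront CapEx per machine
MONTHLY_RATE_PER_CLOUD_SERVER = 250    # pay-as-you-go OpEx per instance

capex_total = SERVERS * PURCHASE_PRICE_PER_SERVER
opex_total = SERVERS * MONTHLY_RATE_PER_CLOUD_SERVER * PROJECT_MONTHS

print(f"Buying hardware up front: ${capex_total:,}")
print(f"Renting cloud servers for {PROJECT_MONTHS} months: ${opex_total:,}")
print(f"Cash freed for the project: ${capex_total - opex_total:,}")
# The purchased servers would also sit idle (and depreciate) once the project ends,
# while the cloud instances are simply decommissioned at no further cost.
```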
Elasticity is ideal for short-term needs, such as handling website traffic spikes and database backups, but the elastic cloud also helps streamline service delivery when combined with scalability. For example, by spinning up additional VMs on the same server, you create more capacity on that server to handle dynamic workload surges. So, how does cloud elasticity work in a business environment? Rapid Elasticity Use Cases and Examples: At work, three excellent examples of cloud elasticity are e-commerce, insurance, and streaming services.  Use case one: Insurance. Let's say you are in the auto insurance business. Perhaps your customers renew auto policies at roughly the same time every year. Policyholders will rush to beat the renewal deadline, so you can expect a surge in traffic as that time approaches. If you rely on scalability alone, a traffic spike can quickly overwhelm your provisioned virtual machine, causing service outages. That would result in a loss of revenue and customers.
• 41. But if you have "leased" a few more virtual machines, you can handle the traffic for the entire policy renewal period. Thus, you will have multiple scalable virtual machines to manage demand in real time. Policyholders wouldn't notice any change in performance even if you served more customers this year than the previous year. To reduce cloud spending, you can then release some of those virtual machines when you no longer need them, such as during off-peak months. An elastic cloud platform will let you do just that: it will only charge you for the resources you use on a pay-per-use basis, not for the number of virtual machines you employ.  Use case two: e-commerce. The more effectively you run your awareness campaign, the more interest from potential buyers you can expect to peak. Let's say you run a limited-time offer on notebooks to mark your anniversary, Black Friday, or a techno celebration. You can expect more traffic and server requests during that time. New buyers will register new accounts, and existing customers will revisit abandoned carts and old wishlists or try to redeem accumulated points. This puts far more load on your server during the campaign than at most other times of the year. With an elastic platform, you can provision more resources to absorb the high festive-season demand, and afterwards return the excess capacity to your cloud provider and keep only what is needed for everyday operations.  Use case three: Streaming services. Netflix is probably the best example to use here. When the streaming service released all 13 episodes of House of Cards' second season, viewership jumped to 16% of Netflix's subscribers, compared to just 2% for the first season's premiere weekend, and those subscribers streamed one of those episodes within seven to ten hours that Friday. At the time (February 2014), Netflix had over 50 million subscribers, so a 16% jump in viewership means that over 8 million subscribers streamed a portion of the show in a single day. Netflix engineers have repeatedly stated that they take advantage of Elastic Cloud services from AWS to serve such bursts of server requests within a short period and with zero downtime. Bottom line: if your cloud provider offers cloud elasticity by default, and you have activated the feature in your account, the platform will allocate resources on demand, which means you will be able to handle both sudden and expected workload spikes. Benefits and Limitations of Cloud Elasticity: Elasticity in the cloud has many powerful benefits.  Elasticity balances performance with cost-effectiveness: An elastic cloud provider offers system monitoring tools that track resource usage and automatically compare resource allocation against actual usage. The goal is to keep these two metrics matched so that the system performs cost-effectively at its peak. Cloud providers also price this on a pay-per-use model, allowing you to pay for what you use and no more, and the pay-as-you-grow model lets you add new infrastructure components as you prepare for growth.  It helps in providing smooth services: Cloud elasticity combines with cloud scalability to ensure that both the customer and the cloud platform meet changing computing needs as they arise. For a cloud platform, elasticity helps keep customers happy: while scalability handles long-term growth, elasticity ensures flawless service availability right now.
It also helps prevent system overloading or runaway cloud costs due to over-provisioning. Limits or disadvantages of cloud elasticity: Cloud elasticity may not be for everyone. Cloud scalability alone may be sufficient if you have relatively stable online demand for your products or services. For example, if you run a business that doesn't experience seasonal or occasional spikes in server requests, you may not mind using scalability without elasticity. Keep in mind that elasticity requires scalability, but not vice versa. Still, no one can predict when you might need to capitalize on a sudden wave of interest in your company. So, what do you do when you need to be ready for that opportunity but don't want to blow your cloud budget on speculation? Enter cloud cost optimization. How is cloud cost optimization related to cloud elasticity? Elasticity uses dynamic variations to align computing resources as closely as possible to the demands of the workload, to prevent waste and promote cost-efficiency. Another goal is usually to ensure that your systems can continue to serve customers satisfactorily, even when bombarded by heavy, sudden workloads. But not all cloud platform services support the scaling in and out of cloud elasticity. For example, some AWS services include elasticity as a part of their offerings, such as Amazon Simple Storage Service (S3), Amazon Simple Queue Service (SQS), and Amazon Aurora. Amazon Aurora qualifies as serverless and elastic, while others, like Amazon Elastic Compute Cloud (EC2), integrate with Auto Scaling to support elasticity. Whether or not you use elastic services to reduce cloud costs dynamically, you'll want to increase your cloud cost visibility beyond what Amazon CloudWatch offers. CloudZero allows engineering teams to track and oversee the specific costs and services driving their products, features, and so on. You can group costs by feature, product, service, or account to uncover unique insights about your cloud costs that will help you answer what's changing, why, and what you want to know more about. You can also measure and monitor your unit costs, such as cost per customer; CloudZero's cost-per-customer report, for example, surfaces cost information about your customers that can help guide your engineering and pricing decisions. Fog computing vs. Cloud computing: The delivery of on-demand computing services is known as cloud computing. We can use applications, storage, and processing power over the Internet. Without owning any computing infrastructure or data center, anyone can rent access to anything from applications to storage from a cloud service provider. It is a pay-as-you-go service: by using cloud computing services and paying only for what we use, we avoid the complexity of owning and maintaining infrastructure. Cloud computing service providers can benefit from significant economies of scale by providing similar services to many customers. Fog computing is a decentralized computing infrastructure or process in which computing resources are located between a data source and a cloud or another data center. Fog computing is a paradigm that serves user requests on edge networks. Devices at the fog layer, such as routers, gateways, bridges, and hubs, typically perform networking-related operations; researchers envision these devices performing both computational and networking tasks simultaneously.
Although these devices are resource-constrained compared to cloud servers, their geographic spread and decentralized nature help provide reliable services with coverage over a wide area. Fog computing places computing devices physically much closer to users than cloud servers are. Benefits of Fog Computing:  Fog computing is less expensive to work with because data is hosted and analyzed on local devices rather than transferred to the cloud.  It helps in facilitating and controlling business operations by deploying fog applications as per the user's requirements.
• 42.  Fogging provides users with various options to process their data on any physical device. Benefits of Cloud Computing:  It works on a pay-per-use model, where users pay only for the services they receive for a specified period.  Cloud users can quickly increase their efficiency by accessing data from anywhere, as long as they have network connectivity.  It increases cost savings, as workloads can be transferred from one cloud platform to another. The differences between cloud computing and fog computing are summarized below:
 Latency: Cloud computing has higher latency; fog computing has low latency.
 Capacity: Cloud computing does not reduce the amount of data while sending or converting it; fog computing reduces the amount of data sent to the cloud.
 Responsiveness: The response time of the cloud system is low; the response time of the fog system is high.
 Security: Cloud computing has less security compared to fog computing; fog computing has high security.
 Speed: In the cloud, access speed is high, depending on VM connectivity; in fog computing, it is even higher.
 Data integration: In the cloud, multiple data sources can be integrated; in fog computing, multiple data sources and devices can be integrated.
 Mobility: In cloud computing, mobility is limited; mobility is supported in fog computing.
 Location awareness: Partially supported in cloud computing; supported in fog computing.
 Number of server nodes: Cloud computing has few server nodes; fog computing has a large number of server nodes.
 Geographical distribution: Cloud computing is centralized; fog computing is decentralized and distributed.
 Location of service: Cloud services are provided within the Internet; fog services are provided at the edge of the local network.
 Working environment: The cloud runs in dedicated data center buildings with air-conditioning systems; fog nodes run outdoors (streets, base stations, etc.) or indoors (houses, cafes, etc.).
 Communication mode: The cloud uses IP networks; fog uses wireless communication (WLAN, WiFi, 3G, 4G, ZigBee, etc.) or wired communication (as part of IP networks).
 Dependence on the quality of the core network: The cloud requires a strong network core; fog can also work with a weak network core.
Difference between Fog Computing and Cloud Computing: Information:  In fog computing, data is received from IoT devices using any protocol.  Cloud computing receives and summarizes data from different fog nodes. Structure:  Fog has a decentralized architecture where information is located on different nodes close to the user.  The cloud has many centralized data centers, making it difficult for users to access information from the nearest source in their network area. Protection:  Fog is a more secure system with different protocols and standards, which minimizes the chance of it collapsing during networking.  As the cloud operates over the Internet, it is more likely to collapse in the case of unknown network connections. Component:  Fog has some additional features beyond those provided by the components of the cloud, which enhance its storage and performance at the end gateway.  The cloud has different parts such as the frontend platform (e.g., mobile devices), the backend platform (storage and servers), cloud delivery, and the network (Internet, intranet, intercloud). Accountability:  Here, the system's response time is relatively higher compared to the cloud, as fogging separates the data and then sends it to the cloud.
 Cloud services do not isolate data while transmitting it at the gateway, which increases the load and makes the system less responsive. Application:  Fog (edge) computing can be used for smart city traffic management, automating smart buildings, visual security, self-maintaining trains, wireless sensor networks, etc.  Cloud computing can be applied to e-commerce software, word processing, online file storage, web applications, image albums, and various other applications. Reduces latency:  Fog computing helps avoid cascading system failures by reducing latency in operation; it analyzes data close to the device and helps avert disasters. Flexibility in network bandwidth:  Large amounts of data are transferred from hundreds or thousands of edge devices to the cloud, requiring large-scale processing and storage. For example, commercial jets generate 10 TB for every 30 minutes of flight; fog computing sends only selected data to the cloud for historical analysis and long-term storage. Wide geographic reach:  Fog computing provides better quality of service by processing data from devices deployed even in areas with high network density.  Cloud servers, on the other hand, communicate only over IP and not with the countless other protocols used by IoT devices. Real-time analysis:  Fog computing analyzes the most time-sensitive data and operates on it in less than a second, whereas cloud computing does not provide round-the-clock technical support. Operating expenses:  The license fees and on-premises maintenance for cloud computing are lower than for fog computing, where companies have to buy edge devices such as routers.
• 43. Fog Computing vs. Cloud Computing for IoT Projects: According to Statista, by 2020 there were expected to be 30 billion IoT devices worldwide, and by 2025 this number will exceed 75 billion connected things. These devices will produce huge amounts of data that will have to be processed quickly and reliably. Fog computing works alongside cloud computing to meet the growing demand for IoT solutions, and for some things fog is even better. This section compares fog and cloud computing and looks at their possibilities, pros, and cons.  Cloud Computing: We are already used to the technical term cloud: a network of multiple devices, computers, and servers connected over the Internet. Such a computing system can be figuratively divided into two parts:  Frontend - made up of the client device (computer, tablet, mobile phone).  Backend - consists of the data storage and processing systems (servers) that can be located far from the client device and make up the Cloud itself. These two layers communicate with each other using a direct wireless connection. Cloud computing technology provides a variety of services that are classified into three groups:  IaaS (Infrastructure as a Service) - a remote data center with data storage capacity, processing power, and networking resources.  PaaS (Platform as a Service) - a development platform with tools and components to build, test, and launch applications.  SaaS (Software as a Service) - software tailored to suit various business needs. By connecting your company to the Cloud, you can access the services mentioned above from any location and through various devices; therefore, availability is the biggest advantage. Plus, there's no need to maintain local servers and worry about downtime: the vendor supports everything for you, saving you money. Integrating the Internet of Things with the Cloud is an affordable way to do business. Off-premises services provide the scalability and flexibility needed to manage and analyze data collected by connected devices. At the same time, specialized platforms (e.g., Azure IoT Suite, IBM Watson, AWS, and Google Cloud IoT) give developers the power to build IoT apps without major investments in hardware and software.  Advantages of Cloud for IoT: Since connected devices have limited storage capacity and processing power, integration with cloud computing brings: o Improved performance - faster communication between IoT sensors and data processing systems. o Storage capacity - highly scalable and unlimited storage space that can integrate, aggregate, and share huge amounts of data. o Processing capabilities - remote data centers provide unlimited virtual processing capabilities on demand. o Low cost - the license fee is less than the cost of on-premises equipment and its ongoing maintenance.  Disadvantages of Cloud for IoT: Unfortunately, nothing is spotless, and cloud technology has some drawbacks, especially for Internet of Things services.  High latency - more and more IoT apps require very low latency, but the Cloud cannot guarantee this because of the distance between client devices and data processing centers.  Downtime - technical issues and network interruptions can occur in any Internet-based system and cause customers to suffer from outages; many companies use multiple connection channels with automatic failover to avoid problems.
 Security and Privacy - your data is transferred via globally connected channels along with thousands of gigabytes of other users' information; no wonder such a system is vulnerable to cyber-attacks or data loss. The problem can be partially solved with the help of hybrid or private clouds. Cisco coined the term fog computing (or fogging) in 2014, so it is still new to the general public. Fog and cloud computing are intertwined: in nature, fog is closer to the earth than clouds, and in the tech world it's the same; fog is closer to end-users, bringing cloud capabilities down to the ground. The definition may sound like this: Fog is an extension of cloud computing that consists of multiple edge nodes directly connected to physical devices. Such nodes tend to be much closer to devices than centralized data centers, so they can provide instant connections. The considerable processing power of edge nodes allows them to compute large amounts of data without sending them to distant servers. Fog can also include cloudlets: small-scale but rather powerful data centers located at the network's edge, intended to support resource-intensive IoT apps that require low latency. The main difference between fog computing and cloud computing is that the Cloud is a centralized system, whereas Fog is a distributed, decentralized infrastructure. Fog is an intermediary between computing hardware and a remote server: it controls what information should be sent to the server and what can be processed locally. In this way, Fog is an intelligent gateway that offloads the cloud, enabling more efficient data storage, processing, and analysis (a small sketch of this idea follows below). It should be noted that fog networking is not a separate architecture; it does not replace cloud computing but complements it by getting as close as possible to the source of information. There is another method for data processing similar to fog computing: edge computing. The essence is that data is processed directly on the devices without being sent to other nodes or data centers. Edge computing is particularly beneficial for IoT projects as it provides bandwidth savings and better data security. These technologies are likely to have the biggest impact on the development of IoT, embedded AI, and 5G solutions, as they demand agility and seamless connections like never before.
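To illustrate the "intelligent gateway" idea, here is a small, hypothetical Python sketch of a fog node that acts on time-sensitive sensor readings locally and forwards only a compact summary to the cloud. The threshold and the `send_to_cloud` function are illustrative assumptions, not part of any real fog framework.

```python
from statistics import mean
from typing import List

TEMPERATURE_ALERT_C = 80.0   # illustrative threshold for time-sensitive data

def send_to_cloud(payload: dict) -> None:
    """Stand-in for an upload to a cloud endpoint for long-term storage and analysis."""
    print("forwarded to cloud:", payload)

def fog_gateway(readings: List[float]) -> None:
    """Act on urgent data at the edge; send only an aggregate upstream to save bandwidth."""
    for value in readings:
        if value >= TEMPERATURE_ALERT_C:
            # Time-sensitive decision made locally, with no round trip to the cloud.
            print(f"local alert: temperature {value} C exceeds {TEMPERATURE_ALERT_C} C")
    # Instead of streaming every raw reading, forward a compact summary.
    send_to_cloud({
        "count": len(readings),
        "avg_temp_c": round(mean(readings), 2),
        "max_temp_c": max(readings),
    })

if __name__ == "__main__":
    fog_gateway([71.2, 73.8, 82.5, 69.9, 75.0])
```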
• 44. Advantages of fog computing in IoT: The fogging approach has many benefits for the Internet of Things, Big Data, and real-time analytics. The main advantages of fog computing over cloud computing are as follows:  Low latency - fog tends to be closer to users and can provide a quicker response.  No problems with bandwidth - pieces of information are aggregated at separate points rather than sent through one channel to a single hub.  Loss of connection is unlikely - because of the many interconnected channels.  High security - because the data is processed by multiple nodes in a complex distributed system.  Improved user experience - quick responses and no downtime keep users satisfied.  Power efficiency - edge nodes run power-efficient protocols such as Bluetooth, Zigbee, or Z-Wave. Disadvantages of fog computing in IoT: The technology has no obvious disadvantages, but some shortcomings can be named:  Fog is an additional layer in an already complex data processing and storage system.  Additional expenses - companies must buy edge devices: routers, hubs, gateways.  Limited scalability - fog is not as scalable as the cloud. Cloud computing is the delivery of computing services such as servers, storage, networks, databases, software applications, Big Data processing, or analytics via the Internet. The most significant difference between cloud services and traditional web-hosted services is that cloud-hosted services are available on demand: we can consume as much or as little of a cloud service as we like. Cloud-based providers have changed the game with the pay-as-you-go model, which means we pay only for the services we use, in proportion to how much we or our customers use them. We save the expense of buying and maintaining in-house servers, data warehouses, and the infrastructure that supports them; the cloud service provider handles everything else. There are generally three kinds of clouds:  Public Cloud  Private Cloud  Hybrid Cloud. A public cloud is cloud computing provided by third-party vendors, such as Amazon Web Services, over the Internet and made accessible to users on a subscription model. One of the major advantages of the public cloud is that it permits customers to pay only for what they've used in terms of bandwidth, storage, processing, or analytics, and customers avoid the cost of buying and maintaining their own infrastructure (servers, software, and much more). A private cloud provides computing services via the Internet or a private internal network to a select group of users; the services are not open to all. A private cloud is also known as an internal cloud or a corporate cloud. A private cloud enjoys certain benefits of a public cloud, such as:  Self-service  Scalability  Elasticity. Benefits of a private cloud:  Low latency because of proximity to the cloud setup (hosted near offices)  Greater security and privacy thanks to company firewalls  Shielding of sensitive information from third-party suppliers and users. One of the major disadvantages of using a private cloud is that we cannot reduce the cost of equipment, staffing, and other infrastructure involved in establishing and managing our own cloud. The most effective way to use a private cloud is through a well-designed multi-cloud or hybrid cloud setup.
In general, Cloud Computing offers a few business-facing benefits:  Cost  Speed  Security  Productivity  Performance  Scalability. Let's discuss multi-cloud and how it compares to hybrid cloud. Hybrid Cloud vs. Multi-Cloud: A hybrid cloud is a combination of private and public cloud computing services, and the primary difference is that the public and private cloud services in a hybrid setup communicate with each other. By contrast, in a multi-cloud setup, the different public and private clouds do not talk to one another; they are generally used for entirely different purposes and are kept separate within the business. Hybrid cloud solutions have advantages that may entice users to choose the hybrid approach: with a private and a public cloud that communicate with one another, we can reap the advantages of both, hosting less crucial elements in the public cloud and reserving the private cloud for important and sensitive information. Viewed as a whole, the hybrid cloud is more of an execution decision, taken to exploit the benefits of both private and public cloud services and their interconnection, whereas multi-cloud is more of a strategic choice than an execution decision. A multi-cloud is usually a multi-vendor configuration; it can use services from multiple vendors, for example a mix of AWS, Azure, and GCP. The primary factors distinguishing hybrid and multi-cloud are:  A multi-cloud is used to perform a range of tasks and typically consists of multiple cloud providers.  A hybrid cloud is typically the result of combining private and public cloud services that mix with one another. Multi-Cloud Strategy: A multi-cloud strategy involves the implementation of several cloud computing solutions simultaneously.
• 45. Multi-cloud refers to spreading our web assets, software, mobile apps, and other client-facing or internal assets across several cloud services or environments. There are numerous reasons to opt for a multi-cloud environment, including reducing dependence on a single cloud service provider and improving fault tolerance. Furthermore, businesses choose cloud service providers following a service-based approach, which has a major influence on why companies opt for a multi-cloud system; we'll come back to this shortly. A multi-cloud may be constructed in several ways:  It can be a mix of private cloud services combined to create a multi-cloud. o Setting up our own servers in various regions of the globe and creating an online cloud network to manage and distribute the services is a good illustration of an all-private multi-cloud configuration.  It can be a mixture of public cloud service providers. o A combination of several public cloud providers, like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, is an example of an all-public cloud setup.  It may comprise a combination of public and private cloud providers in one multi-cloud architecture. o Private clouds used in conjunction with AWS or Azure fall into this category; if it is optimized for the business, we can enjoy the benefits of both.  A typical multi-cloud setup is a mix of two or more public cloud providers together with one private cloud, to remove the dependence on a single cloud services provider. Why has the multi-cloud strategy become the norm? When cloud computing took off in a big way, businesses began to recognize a few issues.  Security: Relying on the security services of one cloud provider makes us more susceptible to DDoS and other cyber-attacks. If that cloud is attacked, the whole environment is compromised and the company could be crippled.  Reliability: If we rely on just one cloud service, reliability is at risk: a cyber-attack, natural catastrophe, or security breach could compromise our private information or result in data loss.  Loss of business: Software-driven businesses roll out regular UI improvements, bug fixes, and patches to their cloud infrastructure monthly or weekly. With a single-cloud strategy, the business can suffer downtime while its cloud services are inaccessible to customers, which can result in lost business and lost money.  Vendor lock-in: Vendor lock-in is the situation in which a customer of a particular product or service cannot easily switch to a competitor's offering. This usually happens when proprietary software is used in a service that isn't compatible with the new vendor's product, or when the switch is constrained by the contract or the law. It is why businesses stay committed to a certain cloud provider even when they're dissatisfied with the service. The reasons for switching providers can be numerous, from better capabilities and features offered by competitors to lower pricing. Additionally, moving data from one cloud provider to the next is a hassle, since it may have to be transferred to local datacentres before being moved to the new cloud provider.
Benefits of a Multi-Cloud Strategy: Let's discuss the benefits of a multi-cloud strategy, which inherently address the challenges posed by relying on a single cloud service. Many of the problems of a single-cloud environment are solved by taking a multi-cloud perspective.  Flexibility: One of the most important benefits of multi-cloud systems is flexibility. With no vendor lock-in, customers are able to test different cloud providers and experiment with their capabilities and features. Many companies tied to a single provider cannot adopt new technologies or innovate because they are bound to that provider's compatibility constraints; this is not a problem with a multi-cloud system. We can design a cloud setup that is in sync with our company's goals. Multi-cloud lets us select our cloud services: each cloud service has its distinct features, so we can choose the ones that meet our business requirements best and combine services from a variety of providers into the best solution for our business.  Security: A key aspect of multi-cloud is risk reduction. If we are hosted by multiple cloud providers, we reduce the chance of being hacked and losing data when one provider has a vulnerability, and we reduce the chance of damage caused by natural disasters or human error. In short, we should not put all our eggs in one basket.  Fault tolerance: One of the biggest issues with a single cloud provider is that it offers little fault tolerance. With a multi-cloud system, it is possible to keep backups and data redundancy in the right places, and we can strategically schedule downtime for deployment or maintenance of our software and applications without making our clients suffer.  Performance: Each cloud provider, such as AWS (64+ countries), Azure (140+ countries), or GCP (200+ countries), has established a presence throughout the world. Based on our location and our workload, we can choose the cloud provider that lowers latency and speeds up our operations.  IoT and ML/AI as emerging opportunities: With Machine Learning and Artificial Intelligence growing exponentially, there is a lot of potential in analysing our data in the cloud and using these capabilities for better decision-making and customer service. The top cloud providers offer their own distinct features: Google Cloud Platform (GCP) for AI, AWS for serverless computing, and IBM for AI/ML are just a few options worth considering.  Cost: Cost will always be an important factor in any purchase decision, and cloud pricing is evolving even as we write this. Competition is so fierce that cloud providers keep coming up with pricing options from which we can benefit. In a multi-cloud setting, depending on the service or feature we use from each provider, we can select the most appropriate option. AWS, Azure, and Google all offer pricing calculators that help manage costs and aid us in making the right choice.  Governance and compliance regulations: Big clients typically require compliance with specific local and cybersecurity regulations, for example GDPR compliance or ISO cybersecurity certification. Our business could be affected if a particular cloud service violated our security certifications or the provider were not certified.
We may then choose an alternative provider without losing our significant clientele. A few disadvantages of multi-cloud:  Discounts on high-volume purchases: Public cloud providers offer large discounts when we buy their services in bulk. With a multi-cloud, we are unlikely to get these discounts, because our purchase volume is split between various service providers.  Training existing employees or hiring new ones: We must prepare our existing staff, or recruit new employees, to be able to use multiple clouds in our company; this costs both money and training time.  Effective multi-cloud management: Multi-cloud requires efficient cloud management, which means knowing the workload and business requirements and then distributing the work among the cloud providers best suited for each task. For instance, a company might use AWS for compute services, Google or Azure for communication and email tools, and Salesforce for customer relationship management. It takes expertise in both the cloud and the business domain to understand these subtleties.
• 46. A Service Level Agreement (SLA) is the contract that defines the level of performance negotiated between a cloud service provider and a client. Earlier in cloud computing, all service level agreements were negotiated individually between a customer and a service provider. With the rise of large utility-like cloud computing providers, most service level agreements are standardized unless a customer becomes a large consumer of cloud services. Service level agreements are defined at different levels, which are mentioned below:  Customer-based SLA  Service-based SLA  Multilevel SLA. Some service level agreements are enforceable as contracts, but many are agreements that are closer to an operating level agreement (OLA) and may not be legally binding, so it is wise to have a lawyer review the documents before making any major commitment to a cloud service provider. Service level agreements usually specify certain parameters, which are mentioned below:  Availability of the service (uptime)  Latency or response time  Reliability of service components  Each party's accountability  Warranties. If a cloud service provider fails to meet the specified minimum targets, it has to pay a penalty to the cloud service consumer as per the agreement. In this sense, service level agreements are like insurance policies in which the provider has to pay compensation as per the agreement if an incident occurs. Microsoft publishes the service level agreements associated with the Windows Azure platform components, which demonstrates industry practice for cloud service vendors. Each component has its own service level agreement. The two major Service Level Agreements (SLAs) are described below:  Windows Azure SLA - Windows Azure has separate SLAs for compute and storage. For compute, it is guaranteed that when a client deploys two or more role instances in different fault and upgrade domains, the client's Internet-facing roles will have external connectivity at least 99.95% of the time. In addition, all of the client's role instances are monitored, and 99.9% of the time it is guaranteed to detect when a role instance's process is not running and to initiate corrective action.  SQL Azure SLA - SQL Azure clients will have connectivity between the SQL Azure database and the Internet gateway. SQL Azure will maintain a "monthly availability" of 99.9% within a month. The monthly availability ratio for a particular tenant database is the ratio of the time the database was available to customers to the total time in the month, measured in intervals of a few minutes in a 30-day monthly cycle. If the SQL Azure gateway rejects attempts to connect to the customer's database, that part of the time counts as unavailable. Availability is always calculated over a full month (a short worked example follows below).
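As a quick worked example of the "monthly availability" ratio described above, here is a small Python sketch that computes availability from minutes of downtime over a 30-day month and checks it against a 99.9% target. The downtime figure is an invented illustration.

```python
# Monthly availability = available time / total time in the month.
MONTH_MINUTES = 30 * 24 * 60        # 43,200 minutes in a 30-day cycle
SLA_TARGET = 99.9                   # percent, as in the SQL Azure example above

downtime_minutes = 50               # hypothetical downtime recorded this month
available_minutes = MONTH_MINUTES - downtime_minutes
availability = 100 * available_minutes / MONTH_MINUTES

print(f"Monthly availability: {availability:.3f}%")
print("SLA met" if availability >= SLA_TARGET else "SLA missed - service credits may apply")
# 99.9% of 43,200 minutes allows roughly 43 minutes of downtime per month,
# so 50 minutes of downtime in this example would miss the target.
```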
While the exact metrics of cloud SLAs vary by service provider, the areas covered are generally the same:  Volume and quality of work (including precision and accuracy);  Speed;  Responsiveness; and  Efficiency. The purpose of the SLA document is to establish a mutual understanding of the services, priority areas, responsibilities, guarantees, and warranties. It clearly outlines the metrics and responsibilities of the parties involved in the cloud arrangement, such as the specific response time for reporting or addressing system failures.  The importance of a cloud SLA Service-level agreements are fundamental as more organizations rely on external providers for critical systems, applications, and data. Cloud SLAs ensure that cloud providers meet certain enterprise-level requirements and provide customers with a clearly defined set of deliverables. They also describe financial penalties, such as service-time credits, if the provider fails to meet the guaranteed conditions. The role of a cloud SLA is essentially the same as that of any contract -- it is a blueprint that governs the relationship between a customer and a provider. These agreed terms form a reliable foundation upon which the customer commits to use the cloud provider's services. They also reflect the provider's commitments to quality of service (QoS) and the underlying infrastructure.  What to look for in a cloud SLA The cloud SLA should outline each party's responsibilities, acceptable performance parameters, a description of the applications and services covered under the agreement, procedures for monitoring service levels, and a program for remediation of outages. SLAs typically use technical definitions to measure service levels, such as mean time between failures (MTBF) or mean time to repair (MTTR), which specify targets or minimum values for service-level performance. The defined service levels must be specific and measurable so that they can be benchmarked and, if stipulated by the contract, trigger rewards or penalties accordingly; the sketch below shows how these two metrics might be derived from an incident log.
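This is a minimal, illustrative Python sketch; the incidents, timestamps, and 30-day observation window are invented solely to show the arithmetic.

```python
# Minimal sketch: deriving MTBF and MTTR from a hypothetical incident log.
from datetime import datetime, timedelta

# Each incident is a (failure_start, service_restored) pair.
incidents = [
    (datetime(2023, 1, 4, 2, 10), datetime(2023, 1, 4, 2, 40)),
    (datetime(2023, 1, 19, 14, 0), datetime(2023, 1, 19, 14, 20)),
    (datetime(2023, 1, 28, 23, 5), datetime(2023, 1, 29, 0, 5)),
]
observation_window = timedelta(days=30)

downtime = sum((restored - failed for failed, restored in incidents), timedelta())
uptime = observation_window - downtime

mtbf = uptime / len(incidents)      # mean time between failures
mttr = downtime / len(incidents)    # mean time to repair

print(f"MTBF: {mtbf}, MTTR: {mttr}")
print(f"Availability: {uptime / observation_window:.4%}")
```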
Depending on the cloud model you choose, you can control much of the management of IT assets and services yourself or let the cloud provider manage it for you. A typical cloud SLA expresses the exact levels of service and the recourse or compensation the user is entitled to if the provider fails to deliver the service. Another important area is service availability, which specifies the maximum time a read request can take, how many retries are allowed, and other factors. The cloud SLA should also define compensation for users if the specifications are not met. A cloud service provider typically offers a tiered service credit plan that gives users credit based on the discrepancy between the SLA specifications and the service levels actually delivered.  Selecting and monitoring cloud SLA metrics Most cloud providers publicly publish details of the service levels that users can expect, and these are likely to be the same for all users. However, an enterprise choosing a cloud service may be able to negotiate a more customized deal. For example, a cloud SLA for a cloud storage service may include unique specifications for retention policies, the number of copies to maintain, and storage space. Cloud service-level agreements can be more detailed to cover governance, security specifications, compliance, and performance and uptime statistics. They should address security and encryption practices for data security and data privacy, disaster recovery expectations, data location, and data access and portability.  Verifying cloud service levels Customers can monitor service metrics such as uptime, performance, and security through the cloud provider's native tooling or portal. Another option is to use third-party tools to track the performance baseline of cloud services, including how resources are allocated (for example, memory in a virtual machine, or VM) and security. A cloud SLA must use clear language to define its terms. Such language governs, for example, what counts as inaccessibility of the service and who is responsible -- slow or intermittent loading can be attributed to latency on the public Internet, which is beyond the cloud provider's control. Providers usually specify, and exclude from the calculation, any downtime due to scheduled maintenance periods, which are usually, but not always, regularly scheduled and recurring.  Negotiating a cloud SLA Most common cloud services are simple and universal, with some variations, such as infrastructure as a service (IaaS) options. Be prepared to negotiate for any customized services or applications delivered through the cloud. There may be more room to negotiate terms in custom areas such as data retention criteria or pricing and compensation/penalties. Negotiating power generally varies with the size of the client, but there may still be room for more favorable terms. When entering any cloud SLA negotiation, it is important to protect the business by making the required uptime clear. A good SLA protects both the customer and the supplier from missed expectations. For example, 99.9% uptime ("three nines") is a common benchmark that translates to roughly nine hours of downtime per year, while 99.999% ("five nines") means annual downtime of approximately five minutes. Some mission-critical workloads may require even higher levels of availability, amounting to only seconds of annual downtime. Consider deploying across several regions or availability zones to reduce the impact of major outages, and keep in mind that in some areas of cloud SLA negotiation extra availability amounts to insurance you do not need; the short calculation below shows how an availability percentage translates into permitted downtime.
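This is a minimal, illustrative Python sketch; the availability targets shown are common examples, and a 30-day month is assumed.

```python
# Minimal sketch: how much downtime a given availability target permits.
HOURS_PER_YEAR = 24 * 365
HOURS_PER_MONTH = 24 * 30  # 30-day month, as used in many SLAs

def allowed_downtime(availability_percent: float) -> tuple[float, float]:
    """Return (hours per year, minutes per 30-day month) of permitted downtime."""
    down_fraction = 1 - availability_percent / 100
    return down_fraction * HOURS_PER_YEAR, down_fraction * HOURS_PER_MONTH * 60

for target in (99.9, 99.95, 99.99, 99.999):
    per_year_h, per_month_min = allowed_downtime(target)
    print(f"{target}% uptime -> {per_year_h:.2f} h/year, {per_month_min:.1f} min/month")
```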
Certain use cases require the highest uptime guarantee, require additional engineering work and cost and may be better served with private on-premises infrastructure. Pay attention to where the data resides with a given cloud provider. Many compliance regulations such as HIPAA (Health Insurance Portability and Accountability Act) require data to be held in specific areas, along with certain privacy guidelines. The cloud customer owns and is responsible for this data, so make sure these requirements are built into the SLA and validated by auditing and reporting. Finally, a cloud SLA should include an exit strategy that outlines the provider's expectations to ensure a smooth transition.  Scaling a cloud SLA Most SLAs are negotiated to meet the current needs of the customer, but many businesses change dramatically in size over time. A solid cloud service-level agreement outlines the gaps where the contract is reviewed and potentially adjusted to meet the changing needs of the organization. Some vendors build in notification workflows that are triggered when a cloud service-level agreement is close to breach in order to initiate new negotiations based on changes in scale. This uptime can cover usage exceeding the availability level or norm and can warrant an upgrade to a new service level. "Anything as a service" (XaaS) describes a general category of cloud computing and remote access services. It recognizes the vast number of products, tools, and technologies now delivered to users as a service over the Internet. Essentially, any IT function can be a service for enterprise consumption. The service is paid for in a flexible consumption model rather than an advance purchase or license. What are the benefits of XaaS? XaaS has many benefits: improving spending models, speeding up new apps and business processes, and shifting IT resources to high-value projects.  Expenditure model improvements. With XaaS, businesses can cut costs by purchasing services from providers on a subscription basis. Before XaaS and cloud services, businesses had to buy separate products-software, hardware, servers, security, infrastructure-install them on-site, and then link everything together to form a network. With XaaS, businesses buy what they need and pay on the go. The previous capital expenditure now becomes an operating expense.  Speed up new apps and business processes. This model allows businesses to adopt new apps or solutions to changing market conditions. Using multi-tenant approaches, cloud services can provide much-needed flexibility. Resource pooling and rapid elasticity support mean that business leaders can add or subtract services. When users need innovative resources, a company can use new technologies, automatically scaling up the infrastructure.  Transferring IT resources to high-value projects. Increasingly, IT organizations are turning to a XaaS delivery model to streamline operations and free up resources for innovation. They are also harnessing the benefits of XaaS to transform digitally and become more agile. XaaS gives more users access to cutting-edge technology, democratizing innovation. In a recent survey by Deloitte, 71% of companies report that XaaS now constitutes more than half of their company's enterprise IT. What are the disadvantages of XaaS? There are potential drawbacks to XaaS: possible downtime, performance issues, and complexity.  Possible downtime. The Internet sometimes breaks down, and when this happens, your XaaS provider can be a problem too. 
With XaaS, there can be issues of Internet reliability, flexibility, provisioning, and management of infrastructure resources. If XaaS servers go down, users will not be able to use them. XaaS providers can guarantee services through SLAs.  Performance issues. As XaaS becomes more popular, bandwidth, latency, data storage, and recovery times can be affected. If too many clients use the same resources, the system may slow down. Apps running in virtualized environments can also be
  • 48. affected. Integration issues can occur in these complex environments, including the ongoing management and security of multiple cloud services.  Complexity effect. Advancing technology for XaaS can relieve IT, workers from day-to-day operational headaches; however, it can be difficult to troubleshoot if something goes wrong. Internal IT staff still needs to stay updated on new technology. The cost of maintaining a high-performance, a robust network can add up - although the overall cost savings of the XaaS model are usually enormous. Nonetheless, some companies want to maintain visibility into their XaaS service provider's environment and infrastructure. Furthermore, a XaaS provider that gets acquired shuts down a service or changes its roadmap can profoundly impact XaaS users. AD What are some examples of XaaS? Because XaaS stands for "anything as a service," the list of examples is endless. Many kinds of IT resources or services are now delivered this way. Broadly speaking, there are three categories of cloud computing models: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Outside these categories, there are other examples such as disaster recovery as a service (DRaaS), communications as a service (CaaS), network as a service (NaaS), database as a service (DBaaS), storage as a service (STaaS), desktop as a service (DaaS), and monitoring as a service (MaaS). Other emerging industry examples include marketing as a service and healthcare as a service. NetApp and XaaS NetApp provides several XaaS options, including IaaS, IT as a service (ITaaS), STaaS, and PaaS.  When you differentiate your hosted and managed infrastructure services, you can increase service and platform revenue, improve customer satisfaction, and turn IaaS into a profit center. You can also take advantage of new opportunities to differentiate and expand services and platform revenue, including delivering more performance and predictability from your IaaS services. Plus, NetApp® technology can enable you to offer a competitive advantage to your customers and reduce time to market for deploying IaaS solutions.  When your data center is in a private cloud, it takes advantage of cloud features to deliver ITaaS to internal business users. A private cloud offers characteristics similar to the public cloud but is designed for use by a single organization. These characteristics include:  Catalog-based, on-demand service delivery  Automated scalability and service elasticity  Multitenancy with shared resource pools  Metering with utility-style operating expense models  Software-defined, centrally managed infrastructure  Self-service lifecycle management of services STaaS. NetApp facilitates private storage as a service in a pay-as-you-go model by partnering with various vendors, including Arrow Electronics, HPE ASE, BriteSky, DARZ, DataLink, Faction, Forsythe, Node4, Proact, Solvinity, Synoptek, and 1901 Group. NetApp also seamlessly integrates with all major cloud service providers including AWS, Google Cloud, IBM Cloud, and Microsoft Azure. PaaS. NetApp PaaS solutions help simplify a customer's application development cycle. Our storage technologies support PaaS platforms to:  Reduce application development complexity.  Provide high-availability infrastructure.  Support native multitenancy.  Deliver webscale storage. 
PaaS services built on NetApp technology enable your enterprise to adopt hybrid hosting services and accelerate application-deployment time. The future market for XaaS The combination of cloud computing and ubiquitous, high-bandwidth, global Internet access provides a fertile environment for XaaS growth. Some organizations have been hesitant to adopt XaaS because of security, compliance, and business governance concerns. However, service providers increasingly address these concerns, allowing organizations to bring additional workloads into the cloud.  Storage Pooling: The next resource we can pool is storage. The big blue box in the diagram below represents a storage system containing many hard drives, and each of the smaller white squares represents one of those drives. With centralized storage, I can slice up the storage however I want and give each virtual machine its own small part of it, sized to however much space it requires. In the example below, I take a slice of the first disk and allocate it as the boot disk for 'Tenant 1, Server 1'. I take another slice of the storage and provision it as the boot disk for 'Tenant 2, Server 1'. Shared centralized storage makes allocation efficient - rather than giving whole disks to different servers, I can give them exactly as much storage as they require; a small sketch of this allocation idea follows below. Further savings can be made through storage efficiency techniques such as thin provisioning, deduplication, and compression. Check out my Introduction to SAN and NAS Storage course to learn more about centralized storage.
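To make the storage-slicing idea concrete, here is a minimal, illustrative Python sketch (not from the original course material); the pool size, volume names, and class are invented.

```python
# Minimal sketch: carving slices of a shared storage pool for different
# tenants' virtual machines. Names and sizes are invented for illustration.
class StoragePool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # volume name -> size in GB

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, volume: str, size_gb: int) -> None:
        """Give a tenant's VM exactly the slice it asks for."""
        if size_gb > self.free_gb():
            raise ValueError(f"Not enough free space for {volume}")
        self.allocations[volume] = size_gb

pool = StoragePool(capacity_gb=2000)          # one shared storage system
pool.allocate("tenant1-server1-boot", 100)    # slice for Tenant 1, Server 1
pool.allocate("tenant2-server1-boot", 50)     # slice for Tenant 2, Server 1
print(pool.free_gb())                         # 1850 GB still available in the pool
```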
  • 49.  Network Infrastructure Pooling: The next resource that can be pooled is network infrastructure. At the top of the diagram below is a physical firewall. All different tenants will have firewall rules that control what traffic is allowed into their virtual machines, such as RDP for management and HTTP traffic on port 80 if it is a web server. We don't need to give each customer their physical firewall; We can share the same physical firewall between different clients. Load balancers for incoming connections can also be virtualized and shared among multiple clients. In the main section on the left side of the diagram, you can see several switches and routers. Those switches and routers are shared, with traffic going through the same device to different clients.  Service pooling: The cloud provider also provides various services to the customers, as shown on the right side of the diagram. Windows Update and Red Hat Update Server for operating system patching, DNS, etc. Keeping DNS as a centralized service saves customers from having to provide their DNS solution.  Location Independence: As stated by NIST, the customer generally has no knowledge or control over the exact location of the resources provided. Nevertheless, they may be able to specify the location at a higher level of abstraction, such as the country, state, or data center level. For example, let's use AWS again; When I created a virtual machine, I did it in a Singapore data center because I am located in the Southeast Asia region. I would get the lowest network latency and best performance by having mine. With AWS, I know the data center where my virtual machine is located, but not the actual physical server it is running on. It could be anywhere in that particular data center. It can use any personal storage system in the data center and any personal firewall. Those specifics don't matter to the customer. How does resource pooling work? In this private cloud as a service, the user can choose the ideal resource segmentation based on his needs. The main thing to be considered in resource pooling is cost-effectiveness. It also ensures that the brand provides new delivery of services. It is commonly used in wireless technology such as radio communication. And here, single channels join together to form a strong connection. So, the connection can transmit without interference. And in the cloud, resource pooling is a multi-tenant process that depends on user demand. It is why it is known as SaaS or Software as a Service controlled in a centralized manner. Also, as more and more people start using such SaaS services as service providers. The charges for the services tend to be quite low. Therefore, owning such technology becomes more accessible at a certain point than it. In a private cloud, the pool is created, and cloud computing resources are transferred to the user's IP address. Therefore, by accessing the IP address, the resources continue to transfer the data to an ideal cloud service platform. Benefits of resource pooling  High Availability Rate: Resource pooling is a great way to make SaaS products more accessible. Nowadays, the use of such services has become common. And most of them are far more accessible and reliable than owning one. So, startups and entry-level businesses can get such technology.  Balanced load on the server: Load balancing is another benefit that a tenant of resource pooling-based services enjoys. In this, users do not have to face many challenges regarding server speed. 
 Provides a High-Quality Computing Experience: Multi-tenant technologies offer excellent performance to users. Users can easily and securely store data or avail themselves of such services with strong security benefits. Plus, many pre-built tools and technologies make cloud computing advanced and easy to use.  Data Stored Virtually and Physically: A major advantage of resource-pool-based services is that users can use the virtual space offered by the host, and their data is also held on the physical hosts provided by the service provider.  Flexibility for Businesses: Pool-based cloud services are flexible because they can be reshaped according to changing technology needs. Plus, users don't have to worry about capital outlay or huge investments.  Physical Host Works When a Virtual Host Goes Down: It is a common technical issue for a virtual host to become slow or go down. In that case, the physical host of the SaaS service provider takes over, so the user or tenant still gets a suitable computing environment without technical challenges. Disadvantages of resource pooling  Security: Most service providers offering resource-pooling-based services provide strong security features. Even so, the company's confidential data passes to a third party - the service provider - and a flaw on the provider's side could allow that data to be misused. It is therefore not a good idea to rely solely on a third-party service provider to protect sensitive data.  Non-scalability: This can be another disadvantage of resource pooling for organizations. If they choose the cheapest solution, they may face challenges when upgrading or growing the business in the future, and other constraints of the shared environment can hinder the process and limit the scale of the business.
  • 50.  Restricted Access: In private resource pooling, users have restricted access to the database. In this, only a user with user credentials can access the company's stored or cloud computing data. Since there may be confidential user details and other important documents. Therefore such a service provider can provide tenant port designation, domain membership, and protocol transition. They can also use another credential for the users of the allotted area in cloud computing.  Conclusion: Resource pooling in cloud computing represents the technical phrase. It is used to describe a service provider as providing IT service to multiple customers at a time. And these services are scalable and accessible to businesses as well. Plus, when brands use this kind of technology, they can save a large capitalization investment. Load balancing is the method that allows you to have a proper balance of the amount of work being done on different pieces of device or hardware equipment. Typically, what happens is that the load of the devices is balanced between different servers or between the CPU and hard drives in a single cloud server. Load balancing was introduced for various reasons. One of them is to improve the speed and performance of each single device, and the other is to protect individual devices from hitting their limits by reducing their performance. Cloud load balancing is defined as dividing workload and computing properties in cloud computing. It enables enterprises to manage workload demands or application demands by distributing resources among multiple computers, networks or servers. Cloud load balancing involves managing the movement of workload traffic and demands over the Internet. Traffic on the Internet is growing rapidly, accounting for almost 100% of the current traffic annually. Therefore, the workload on the servers is increasing so rapidly, leading to overloading of the servers, mainly for the popular web servers. There are two primary solutions to overcome the problem of overloading on the server-  First is a single-server solution in which the server is upgraded to a higher-performance server. However, the new server may also be overloaded soon, demanding another upgrade. Moreover, the upgrading process is arduous and expensive.  The second is a multiple-server solution in which a scalable service system on a cluster of servers is built. That's why it is more cost- effective and more scalable to build a server cluster system for network services. Cloud-based servers can achieve more precise scalability and availability by using farm server load balancing. Load balancing is beneficial with almost any type of service, such as HTTP, SMTP, DNS, FTP, and POP/IMAP. It also increases reliability through redundancy. A dedicated hardware device or program provides the balancing service. Different Types of Load Balancing Algorithms in Cloud Computing:  Static Algorithm: Static algorithms are built for systems with very little variation in load. The entire traffic is divided equally between the servers in the static algorithm. This algorithm requires in-depth knowledge of server resources for better performance of the processor, which is determined at the beginning of the implementation. However, the decision of load shifting does not depend on the current state of the system. One of the major drawbacks of static load balancing algorithm is that load balancing tasks work only after they have been created. It could not be implemented on other devices for load balancing. 
 Dynamic Algorithm: The dynamic algorithm first finds the lightest server in the entire network and gives it priority for load balancing. This requires real-time communication with the network which can help increase the system's traffic. Here, the current state of the system is used to control the load. The characteristic of dynamic algorithms is to make load transfer decisions in the current system state. In this system, processes can move from a highly used machine to an underutilized machine in real time.  Round Robin Algorithm: As the name suggests, round robin load balancing algorithm uses round-robin method to assign jobs. First, it randomly selects the first node and assigns tasks to other nodes in a round-robin manner. This is one of the easiest methods of load balancing. Processors assign each process circularly without defining any priority. It gives fast response in case of uniform workload distribution among the processes. All processes have different loading times. Therefore, some nodes may be heavily loaded, while others may remain under-utilised.  Weighted Round Robin Load Balancing Algorithm: Weighted Round Robin Load Balancing Algorithms have been developed to enhance the most challenging issues of Round Robin Algorithms. In this algorithm, there are a specified set of weights and functions, which are distributed according to the weight values. Processors that have a higher capacity are given a higher value. Therefore, the highest loaded servers will get more tasks. When the full load level is reached, the servers will receive stable traffic.  Opportunistic Load Balancing Algorithm: The opportunistic load balancing algorithm allows each node to be busy. It never considers the current workload of each system. Regardless of the current workload on each node, OLB distributes all unfinished tasks to these nodes. The processing task will be executed slowly as an OLB, and it does not count the implementation time of the node, which causes some bottlenecks even when some nodes are free.  Minimum To Minimum Load Balancing Algorithm: Under minimum to minimum load balancing algorithms, first of all, those tasks take minimum time to complete. Among them, the minimum value is selected among all the functions. According to that minimum time, the work on the machine is scheduled. Other tasks are updated on the machine, and the task is removed from that list. This process will continue till the final assignment is given. This algorithm works best where many small tasks outweigh large tasks. Load balancing solutions can be categorized into two types -  Software-based load balancers: Software-based load balancers run on standard hardware (desktop, PC) and standard operating systems.  Hardware-based load balancers: Hardware-based load balancers are dedicated boxes that contain application-specific integrated circuits (ASICs) optimized for a particular use. ASICs allow network traffic to be promoted at high speeds and are often used for transport-level load balancing because hardware-based load balancing is faster than a software solution. AD Major Examples of Load Balancers -  Direct Routing Request Despatch Technique: This method of request dispatch is similar to that implemented in IBM's NetDispatcher. A real server and load balancer share a virtual IP address. The load balancer takes an interface built with a virtual IP address that accepts request packets and routes the packets directly to the selected server. 
 Dispatcher-Based Load Balancing Cluster: A dispatcher performs smart load balancing using server availability, workload, capacity and other user-defined parameters to regulate where TCP/IP requests are sent. The dispatcher module of a load balancer can split HTTP requests among different nodes in a cluster. The dispatcher divides the load among multiple servers in a cluster, so
services from different nodes appear as a single virtual service on one IP address; consumers interact with it as if it were a single server, with no knowledge of the back-end infrastructure.  Linux Virtual Load Balancer: This is an open-source, enhanced load balancing solution used to build highly scalable and highly available network services such as HTTP, POP3, FTP, SMTP, media streaming and caching, and Voice over Internet Protocol (VoIP). It is a simple and powerful product designed for load balancing and fail-over. The load balancer itself is the primary entry point to the server cluster system. It can run Internet Protocol Virtual Server (IPVS), which implements transport-layer load balancing in the Linux kernel, also known as layer-4 switching. Types of Load Balancing You will need to understand the different types of load balancing for your network. Server load balancing is used for application and database servers, global server load balancing handles failover and distribution across different geographic locations, and DNS load balancing ensures domain name functionality. Load balancing can also be delivered through cloud-based balancers.  Network Load Balancing: Cloud load balancing takes advantage of network-layer information to decide where network traffic should be sent. This is accomplished through Layer 4 load balancing, which handles TCP/UDP traffic. It is the fastest load balancing solution, but it cannot take application-level information into account when distributing traffic across servers.  HTTP(S) Load Balancing: HTTP(S) load balancing is the oldest type of load balancing, and it relies on Layer 7, which means it operates at the application layer. It is the most flexible type of load balancing because it lets you make delivery decisions based on information in the HTTP request.  Internal Load Balancing: It is very similar to network load balancing but is used to balance traffic within the internal infrastructure. Load balancers can be further divided into hardware, software, and virtual load balancers.  Hardware Load Balancer: It relies on dedicated physical appliances to distribute network and application traffic. Such devices can handle large traffic volumes, but they come with a hefty price tag and have limited flexibility.  Software Load Balancer: It comes in open-source or commercial form and must be installed before it can be used. It is more economical than hardware solutions.  Virtual Load Balancer: It differs from a software load balancer in that it deploys the software of a hardware load-balancing device on a virtual machine. Why is a Load Balancer Important in Cloud Computing? Here are some of the reasons load balancing matters in cloud computing; a short sketch of the round-robin and weighted round-robin algorithms described earlier follows after this list.  Offers Better Performance: Load balancing technology is inexpensive and easy to implement. It allows companies to serve client applications much faster and deliver better results at a lower cost.  Helps Maintain Website Traffic: Cloud load balancing provides the scalability to control website traffic. With effective load balancers, it is possible to manage high-end traffic using the available network equipment and servers. E-commerce companies that must deal with many visitors every second use cloud load balancing to manage and distribute workloads.  Can Handle Sudden Bursts in Traffic: Load balancers can handle sudden traffic bursts, for example when exam results are published and a university website would otherwise go down under too many requests. With a load balancer, there is no need to worry about the traffic spike: whatever its size, the load balancer divides the entire load of the website across different servers and delivers maximum results in minimum response time.  Greater Flexibility: The main reason for using a load balancer is to protect the website from sudden crashes. When the workload is distributed among different network servers or units, a single node failure simply shifts its load to another node. This offers flexibility, scalability, and the ability to handle traffic better, which is why load balancers are so useful in cloud environments, where a heavy workload should never rest on a single server.
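To make the round-robin and weighted round-robin algorithms concrete, here is a minimal, illustrative Python sketch; the server names and weights are invented.

```python
# Minimal sketch: round-robin and weighted round-robin request dispatching.
# Server names and weights are invented for illustration.
from itertools import cycle

servers = ["server-a", "server-b", "server-c"]

# Plain round robin: each server gets requests in turn, regardless of capacity.
round_robin = cycle(servers)

# Weighted round robin: higher-capacity servers appear more often in the cycle.
weights = {"server-a": 3, "server-b": 2, "server-c": 1}
weighted_sequence = [name for name, w in weights.items() for _ in range(w)]
weighted_round_robin = cycle(weighted_sequence)

def dispatch(balancer, n_requests: int) -> list[str]:
    """Return which server each of n_requests would be sent to."""
    return [next(balancer) for _ in range(n_requests)]

print(dispatch(round_robin, 6))           # a, b, c, a, b, c
print(dispatch(weighted_round_robin, 6))  # a, a, a, b, b, c
```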
Desktop as a Service (DaaS) is a cloud computing offering in which a service provider delivers virtual desktops to end users over the Internet, licensed with a per-user subscription. The provider takes care of backend management for businesses that find their own virtual desktop infrastructure too expensive or resource-consuming. This management usually includes maintenance, backup, updates, and data storage. Cloud service providers can also handle security and applications for the desktop, or users can manage these aspects themselves. There are two types of desktops available in DaaS - persistent and non-persistent.  Persistent Desktop: Users can customize and save the desktop, so it looks the same each time a particular user logs on. Persistent desktops require more storage than non-persistent desktops, which makes them more expensive.  Non-persistent Desktop: The desktop is wiped whenever the user logs out - it is just a way to access shared cloud services. Cloud providers can let customers choose both, giving workers with specific needs a persistent desktop and giving temporary or occasional workers a non-persistent one. Benefits of Desktop as a Service (DaaS) Desktop as a Service offers some clear advantages over the traditional desktop model. With DaaS, it is faster and less expensive to deploy or deactivate end users.  Rapid deployment and decommissioning of active end users: The desktop is already configured; it only needs to be connected to a new device. DaaS can save a lot of time and money for seasonal businesses that experience frequent spikes and dips in demand or headcount.  Reduced downtime for IT support: Desktop as a Service allows companies to provide remote IT support to their employees, reducing downtime.  Cost savings: Because DaaS client devices require much less computing power than a traditional desktop machine or laptop, they are less expensive and use less power.  Increased device flexibility: DaaS runs on various operating systems and device types, supporting the tendency of users to bring their own devices into the office and shifting the burden of supporting those desktops to the cloud service provider.  Enhanced security: Security risks are significantly lower because the data is stored in the data center with DaaS. If a laptop or mobile device is stolen, it can simply be disconnected from the service; since no data remains on the stolen device, the risk of a thief accessing sensitive data is minimal. Security patches and updates are also easier to install in a DaaS environment because all desktops can be updated simultaneously from a remote location. How does Desktop as a Service (DaaS) work?
With Desktop as a Service (DaaS), the cloud service provider hosts the infrastructure, network resources, and storage in the cloud and streams the virtual desktop to the user's device. The user can access the desktop's data and applications through a web browser or other client software. Organizations can purchase as many virtual desktops as they want through the subscription model. Because desktop applications stream from a centralized server over the Internet, graphics-intensive applications were historically difficult to use with DaaS. New technology has changed this, and applications such as Computer-Aided Design (CAD) that need a lot of compute power to render quickly can now run easily on DaaS. When the workload on a server becomes too high, IT administrators can move a running virtual machine from one physical server to another in seconds, allowing graphics-accelerated or GPU-accelerated applications to run seamlessly. GPU-accelerated Desktop as a Service (GPU-DaaS) has implications for any industry that requires 3D modeling, high-end graphics, simulation, or video production; the engineering and design, broadcast, and architecture industries can all benefit from this technology. What are the use cases for DaaS? Organizations can leverage DaaS to address various use cases and scenarios, such as:  Users with multiple endpoints: A user can access multiple virtual desktops on a single PC instead of switching between multiple devices or multiple OSes. Some roles, such as software development, may require the user to work from multiple devices.  Contract or seasonal workers: DaaS can help you provision virtual desktops within minutes for seasonal or contract workers. You can also quickly retire such desktops when the worker leaves the organization.  Mobile and remote workers: DaaS provides secure access to corporate resources anywhere, anytime, on any device. Mobile and remote employees can take advantage of these features to increase productivity.  Mergers and acquisitions: DaaS simplifies the provisioning and deployment of desktops to new employees, allowing IT administrators to quickly integrate the entire organization's network following a merger or acquisition.  Educational institutions: IT administrators can provide each teacher or student with an individual virtual desktop with the necessary privileges. When such users leave the organization, their desktops can be deactivated with just a few clicks.  Healthcare professionals: Privacy is a major concern in many healthcare settings. DaaS gives each healthcare professional an individual virtual desktop with access only to the relevant patient information, and IT administrators can easily customize desktop permissions and rules per user. How is DaaS different from VDI? Both DaaS and VDI deliver a similar result: bringing virtual applications and desktops from a centralized data center to users' endpoints. However, the offerings differ in setup, architecture, control, cost impact, and agility, as summarized below:  Setup: With DaaS, the cloud provider hosts all of the organization's IT infrastructure, including compute, networking, and storage. The provider handles all hardware monitoring, availability, troubleshooting, and upgrade issues, and also manages the VMs that run the OS; some providers also offer technical support. With VDI, you manage all IT resources yourself, on-premises or in a colocation facility - servers, networking, storage, licenses, endpoints, and so on.  Architecture: Most DaaS offerings use a multi-tenant architecture. Under this model, a single instance of an application - hosted by a server or data center - serves multiple "tenants" or customers, and the DaaS provider separates each customer's services and delivers them dynamically. With a multi-tenant architecture, the resource consumption or a security compromise of other clients may affect you. Most VDI offerings are single-tenant solutions in which customers operate in a completely dedicated environment. The single-tenant architecture of VDI gives IT administrators complete control over resource distribution and configuration, and there is no risk of another organization over-consuming resources and causing service disruption.  Control: With DaaS, the cloud vendor controls all of the underlying IT infrastructure, including monitoring, configuration, and storage, and you may not have complete visibility into these aspects. Internet connectivity is required to access the DaaS control plane, making it more exposed to breaches and cyber attacks. With a VDI deployment, the organization has complete control over its IT resources, and since most VDI solutions use a single-tenant architecture, IT administrators can ensure that only permitted users access virtual desktops and applications.  Cost: There is almost no upfront cost with DaaS offerings because they are subscription-based. The pay-as-you-go pricing structure allows companies to scale their operations dynamically and pay only for the resources consumed, so DaaS can be cheaper for small and medium-sized businesses (SMBs) with fluctuating needs. VDI requires real capital expenditure (CapEx) to purchase or upgrade servers; it is better suited to enterprise-level organizations with projected growth and resource requirements.  Agility: DaaS deployments provide excellent flexibility. For example, you can provision virtual desktops and applications immediately to accommodate temporary or seasonal employees, and you can scale resources back down just as easily. With DaaS solutions, you can also adopt new technology such as the latest GPUs, CPUs, or software innovations. VDI requires considerable effort to set up, build, and maintain complex infrastructure; adding new features can take days or even weeks, and budget can limit the organization if it needs to buy new hardware to scale. How to Choose a DaaS Provider: There are multiple DaaS providers to choose from, including major vendors such as Azure and managed service providers (MSPs). Because of the many options, selecting the appropriate provider can be a challenge. An appropriate DaaS solution meets all the organization's users' requirements, including GPU-intensive applications. Here are some tips to help you choose the right vendor:  If you implement a DaaS solution for an organization with hundreds or thousands of users, make sure it is scalable. A scalable DaaS offering allows you to onboard and offboard users easily.  A good DaaS provider allows you to provision resources based on current workload demands. You don't want to overpay when workload demands vary by day or time of day.
 Data center location. Choosing a DaaS provider whose data centers are close to your employees results in an optimized network path with low latency. A poorly located data center, on the other hand, can lead to unstable connections and efficiency problems.
  • 53.  Security and compliance. If you are in an industry that must comply with prevailing laws and regulations, choose a DaaS provider that meets all security and compliance requirements.  An intuitive and easy-to-use DaaS solution allows employees to get work done. It also frees you from many IT administration responsibilities related to OS and application management.  Like all cloud-based services, DaaS migrates CapEx to an operating expense (OpEx) consumption model. However, not all DaaS providers are created equal when comparing services versus price. Therefore, you should compare the cost with the value of different DaaS providers to get the best service. Top Providers of DaaS in Cloud Computing: Working with DaaS providers is the best option for most organizations as it provides access to managed services and support. Below are the three largest DaaS providers currently available.  Amazon Workspace:Amazon Workspace is an AWS desktop as a service product that you can use to access a Linux or Windows desktop. When using this service, you can choose from various software and hardware configurations and multiple billing types. You can use workstations in multiple AWS regions. Workstations operate on a server-based model. You enumerate predefined OS, storage, and resource bundles when using the Services. The bundle you choose determines the maximum performance you expect and your costs. For example, in one of the standard bundles, you can use Windows 7 or 10, two CPUs, 4GB of memory, and 100GB of storage for $44 per month. The workspace also includes bringing in existing Windows licenses and applications. With this option, you can import your existing Windows VM images and play those images on dedicated hardware. The caveat to fetch your license is that it is only available for Windows 7 SP1 and select Windows 10 editions. Additionally, you will need to purchase at least 200 desktops. Learn more about the AWS DaaS offering in our guide.  VMware Horizon Cloud:VMware Horizon Cloud is a DaaS offering available as a server- or client-based option. These services are provided from a VMware-hosted control plane that enables you to manage and deploy your desktop and applications centrally. With Horizon Cloud, you can access fully managed desktops in three configurations:  Session desktops are ephemeral desktops in which multiple users share resources on a single server.  Dedicated Desktop-Continuous desktop resources are provided to a single user. This option uses a client-based model.  Floating Desktop-Non-persistent desktop associated with a single user. These desktops can provide users with a consistent experience through Horizon Cloud features, such as the User Environment Manager, enabling administrators to maintain settings and user data. This option uses a client-based model. Challenges of data as a service: While DaS offers many benefits, it also poses special challenges.  Unique security considerations: Because DaaS requires organizations to move data to cloud infrastructure and to transfer data over a network, it can pose a security risk that would not exist if the data was persisted on local, behind-the-firewall infrastructure. These challenges can be mitigated by using encryption for data in transit.  Additional compliance steps: For some organizations, compliance challenges can arise when sensitive data is moved to a cloud environment. 
It does not mean that data cannot be integrated or managed in the cloud, but companies subject to special data compliance requirements should meet those requirements with their DaaS solutions. For example, they may need to host their DaS on cloud servers in a specific country to remain compliant.  Potentially Limited Capabilities: In some cases, DaaS platforms may limit the number of devices available to work with the data. Users can only work with tools that are hosted on or compatible with their DaaS platform instead of being able to use any tool of their choice to set up their data-processing solution. Choosing a DaaS solution that offers maximum flexibility in device selection mitigates this challenge.  Data transfer timing: Due to network bandwidth limitations, it may take time to transfer large amounts of data to the DaaS platform. Depending on how often your organization needs to move data across the DaaS platform, this may or may not be a serious challenge. Data compression and edge computing strategies can help accelerate transfer speeds. Successful DaaS Adoption: DaaS solutions have been slow to catch on compared to SaaS and other traditional cloud-based services. However, as DaaS matures and the cloud becomes central to modern business operations, many organizations successfully leverage DaaS.  Pointbet uses DaaS to scale quickly while remaining compliant: Point bet uses cloud-based data solutions to manage its unique compliance and scaling requirements. The company can easily adjust its operations to meet the fluctuating demand for online gaming and ensure that it operates within local and international government regulations.  DMD Marketing accelerates data operations with DaaS: DMD Marketing Corp. has adopted a cloud-first approach to data management to give its users faster access to their data and, by extension, reduce data processing time. The company can refresh data faster thanks to cloud-based data management, giving them an edge over competitors. How to get started with Data as a Service Although getting started with DaaS may seem intimidating, as DaaS is still a relatively new solution, the process is simple. This is particularly simple because DaaS eliminates most of the setup and preparation work of building an on-premises data processing solution. And because of the simplicity of deploying a DaaS solution and the availability of technical support services from DaaS providers, your company does not need to have specialized personnel for this process. The main steps to get started with DaS include:  Choose a DaaS Solution: Factors to consider when selecting a DaaS offering include price, scalability, reliability, flexibility, and how easy it is to integrate DaaS with existing workflows and ingest data.  Migrate data to a DaaS solution. Depending on how much data you need to migrate and the network connection speed between your local infrastructure and your DaaS, data migration may or may not require a lot of time.  Start leveraging the DaaS platform to deliver faster, more reliable data integration and insights.
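As an illustration of the "start leveraging the platform" step, the sketch below shows how an application might pull records from a Data-as-a-Service API over HTTPS. The endpoint URL, authentication header, and response fields are hypothetical placeholders, not any specific provider's API.

```python
# Minimal, illustrative sketch of consuming a Data-as-a-Service API.
# The endpoint, API key, and field names are hypothetical.
import requests

API_URL = "https://daas.example.com/v1/datasets/customers"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                     # placeholder credential

def fetch_records(updated_since: str) -> list[dict]:
    """Pull records changed since the given ISO date from the DaaS platform."""
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"updated_since": updated_since, "format": "json"},
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP errors instead of failing silently
    return response.json()["records"]

records = fetch_records("2023-01-01")
print(f"Fetched {len(records)} records from the DaaS platform")
```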
There has been much discussion about whether cloud computing replaces data centers, costly computer hardware, and software upgrades. Some experts say that although cloud technology is changing how establishments run their IT processes, the cloud cannot be seen as a replacement for the data center. However, the industry agrees that cloud services have become central to both consumer and business applications. According to data provided by Cisco, cloud data center traffic was forecast to account for 95 percent of total data center traffic by 2021. This has resulted in large-scale data centers, which are essentially large public cloud data centers. Cloud computing is streamlining the operations of today's workplaces. Its three main components are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Cloud services provide the convenience of not having to worry about issues like increasing a device's storage capacity, and cloud computing also protects against data loss because it comes with backup and recovery features. Edge Computing vs. Cloud Computing: Is Edge Better? With the increasing demand for real-time applications, the adoption of edge computing has grown significantly. Today's technology expects low latency and high speeds to deliver a better customer experience. Although centralized cloud computing systems provide ease of collaboration and access, they are far from the data sources, so data must be transmitted to them, which delays processing because of network latency. Thus, cloud computing cannot be the answer for every need. Although the cloud has its benefits, edge computing has advantages in several areas:  Speed: With a pure cloud model, edge devices only collect raw information, send it to the cloud, and receive processed results back, and this exchange works only for applications that can tolerate some delay. Edge computing instead interprets the input data closer to its source, providing better speed, lower latency, and more scope for applications that require real-time services.  Lower connectivity cost and better security: Instead of filtering data at a central data center, edge computing lets organizations filter data at the source. Less of the company's sensitive information travels between devices and the cloud, which is better for the security of organizations and their customers, and minimizing data movement also cuts bandwidth and storage costs.  Better data management: According to industry estimates, connected devices were expected to reach about 20 billion by 2020. Edge computing handles systems with special, local needs, freeing cloud computing to serve as a general-purpose platform. For example, the best route to a destination via a car's GPS comes from analyzing the surrounding area rather than from the car manufacturer's data centers far away from the GPS. This reduces reliance on the cloud and helps applications perform better. The Internet of Things connects nearby smart devices to the network. These devices use sensors and actuators to communicate with each other: sensors sense surrounding activity, while actuators respond to what the sensors detect. The devices can be a smartphone, smart washing machine, smartwatch, smart TV, smart car, and so on. A small sketch of an edge device filtering sensor readings before forwarding them to the cloud follows below.
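The following minimal Python sketch is illustrative only; the readings, threshold, and summary fields are invented. It shows the edge idea described above: the device aggregates raw sensor readings locally and forwards only a small summary to the cloud.

```python
# Minimal sketch: an edge device aggregates raw sensor readings locally and
# sends only a compact summary to the cloud. Values and names are invented.
from statistics import mean

raw_readings_c = [21.4, 21.6, 35.2, 21.5, 21.7, 21.6]  # e.g., one minute of temperatures
ALERT_THRESHOLD_C = 30.0

def summarize(readings: list[float]) -> dict:
    """Reduce many raw readings to the few numbers the cloud actually needs."""
    return {
        "count": len(readings),
        "avg": round(mean(readings), 2),
        "max": max(readings),
        "alert": max(readings) > ALERT_THRESHOLD_C,
    }

summary = summarize(raw_readings_c)
# In a real deployment this summary would be sent to a cloud endpoint;
# here we just print it to show how little data leaves the edge device.
print(summary)  # {'count': 6, 'avg': 23.83, 'max': 35.2, 'alert': True}
```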
Assume a smart shoe that is connected to the Internet. It can collect data on the number of steps it can run. The smartphone can connect to the Internet and view this data. It analyzes the data and provides the user with the number of calories burned and other fitness advice. Another example is a smart traffic camera that can monitor congestion and accidents. It sends data to the gateway. This gateway receives data from that camera as well as other similar cameras. All these connected devices form an intelligent traffic management system. It shares, analyzes, and stores data on the cloud. When an accident occurs, the system analyzes the impact and sends instructions to guide drivers to avoid the accident. Overall, the Internet of Things is an emerging technology, and it will grow rapidly in the future. Similarly, there are many examples in healthcare, manufacturing, energy production, agriculture, etc. One drawback is that there can be security and privacy issues as the devices capture data throughout the day. Which is better IoT or cloud computing? Over the years, IoT and cloud computing have contributed to implementing many application scenarios such as smart transportation, cities and communities, homes, the environment, and healthcare. Both technologies work to increase efficiency in our everyday tasks. Cloud computing collects data from IoT sensors and calculates it accordingly. Although the two are very different paradigms, they are not contradictory technologies; They complement each other. Difference between the Internet of things and cloud computing: Meaning of Internet of things and cloud computing IoT is a network of interconnected devices, machines, vehicles, and other 'things' that can be embedded with sensors, electronics, and software that allows them to collect and interchange data. IoT is a system of interconnected things with unique identifiers and can exchange data over a network with little or no human interaction. Cloud computing allows individuals and businesses to access on-demand computing resources and applications. Internet of Things and Cloud Computing: The main objective of IoT is to create an ecosystem of interconnected things and give them the ability to sense, touch, control, and communicate with others. The idea is to connect everything and everyone and help us live and work better. IoT provides businesses with real-time insights into everything from everyday operations to the performance of machines and logistics and supply chains. On the other hand, cloud computing helps us make the most of all the data generated by IoT, allowing us to connect with our business from anywhere, whenever we want. Applications of Internet of Things and Cloud Computing
IoT's most important and common applications are smartwatches, fitness trackers, smartphones, smart home appliances, smart cities, automated transportation, smart surveillance, virtual assistants, driverless cars, thermostats, implants, lights, and more. Real-world examples of cloud computing include antivirus applications, online data storage, data analysis, email applications, digital video software, online meeting applications, etc. Internet of Things vs. Cloud Computing:  Definition: IoT is a network of interconnected devices that are capable of exchanging data over a network; cloud computing is the on-demand delivery of IT resources and applications via the Internet.  Purpose: The main purpose of IoT is to create an ecosystem of interconnected things and give them the ability to sense, touch, control, and communicate; the purpose of cloud computing is to allow virtual access to large amounts of computing power while offering a single system view.  Role: The role of IoT is to generate massive amounts of data; cloud computing provides a way to store that data and offers the tools to create IoT applications. Summary: While IoT and cloud computing are two different technologies that aim to make our daily lives easier, they are not contradictory; they complement each other and work together to increase the efficiency of our daily tasks. The basic concept of IoT is connectivity, in which physical objects or things are connected to the web - from fitness trackers to smart cars and smart home devices. The idea is to connect everything to the Internet and control it from the Internet, while cloud computing helps manage the IoT infrastructure. The Internet is the worldwide connectivity of hundreds of thousands of computers belonging to many different networks. A web service is a standardized method for propagating messages between client and server applications on the World Wide Web. A web service is a software module that aims to accomplish a specific set of tasks. In cloud computing, web services can be discovered and invoked over a network, and the web service provides its functionality to the client that invoked it. A web service is a set of open protocols and standards that allow data exchange between different applications or systems. Web services can be used by software programs written in different programming languages and running on different platforms to exchange data over computer networks such as the Internet; in the same way, they can be used for inter-process communication on a single computer. Any software, application, or cloud technology that uses a standardized web protocol (HTTP or HTTPS) to connect, interoperate, and exchange data messages - usually in XML (Extensible Markup Language) - over the Internet is considered a web service. Web services allow programs developed in different languages to communicate between a client and a server by exchanging data over the web service. A client invokes a web service by submitting an XML request, to which the service responds with an XML response.  Functions of web services:  Accessible via the Internet or an intranet.  Use a standardized XML messaging protocol.  Independent of operating system and programming language.  Self-describing through the XML standard, and discoverable through a simple location mechanism. Web Service Components XML and HTTP form the most fundamental web service platform.
All typical web services use the following components:
 SOAP (Simple Object Access Protocol): SOAP is a transport-independent messaging protocol built around sending XML data in the form of SOAP messages. An XML document is attached to each message, and only the structure of that document, not its content, follows a fixed pattern. The great thing about web services and SOAP is that everything is sent over HTTP, the standard web protocol. Every SOAP document requires a root element known as the Envelope element, which is the first element in the XML document. The envelope is divided into two parts: the header comes first, followed by the body. The header contains routing data, the information that directs the XML document to the client it should be sent to, while the body carries the actual message.
 UDDI (Universal Description, Discovery, and Integration): UDDI is a standard for describing, publishing, and discovering online service providers. It provides a specification that helps in hosting data about web services. UDDI provides a repository where WSDL files can be hosted so that a client application can search a WSDL file to learn about the various actions a web service provides. The client application thus has full access to UDDI, which acts as a database of all WSDL files. The UDDI registry keeps the information needed for online services, much like a telephone directory containing the name, address, and phone number of a person, so that client applications can find where a service is located.
 WSDL (Web Services Description Language): A client implementing a web service must know where the web service is located; if the service cannot be found, it cannot be used. The client application must also understand what the web service does in order to invoke the correct operation. WSDL, the Web Services Description Language, is used to accomplish this. A WSDL file is another XML-based file that describes what a web service does for a client application. Using the WSDL document, the client application learns where the web service is located and how to access it.
How does a web service work?
In simplified form, the client sends a sequence of web service requests to the server hosting the actual web service.
These requests are made using remote procedure calls: calls to the methods hosted by the web service are known as Remote Procedure Calls (RPC). Example: Flipkart provides a web service that displays the prices of items offered on Flipkart.com. The front end or presentation layer can be written in .NET or Java, and either language can communicate with the web service. The most important part of web service design is XML, the data format exchanged between the client and the server. XML (Extensible Markup Language) is a simple, intermediate language understood by various programming languages; it is a counterpart to HTML. As a result, when programs communicate with each other, they use XML, which forms a common platform for applications written in different programming languages to talk to one another. Web services employ SOAP (Simple Object Access Protocol) to transmit XML data between applications, and the data is sent over standard HTTP. A SOAP message is the data sent from a web service to an application, and an XML document is all that a SOAP message contains. Because the content is written in XML, the client application that calls the web service can be built in any programming language.
Features of Web Services
Web services have the following characteristics:
 XML-based: A web service's information representation and record transport layers employ XML, so no networking, operating system, or platform bindings are required. Web service-based applications are highly interoperable at their middle layers.
 Loosely coupled: The consumer of a web service is not tied directly to that service's implementation. The web service interface can change over time without affecting the client's ability to interact with the service. A tightly coupled system, by contrast, means that the client and server logic are inextricably linked, so if one interface changes, the other must be updated. A loosely coupled architecture makes software systems more manageable and easier to integrate across different systems.
 Ability to be synchronous or asynchronous: Synchronicity refers to how the client is bound to the execution of the service. In a synchronous invocation, the client is blocked and must wait for the service to complete its operation before continuing, so it gets its result as soon as the service finishes. In an asynchronous invocation, the client can initiate a task and continue with other work, retrieving the result later. Asynchronous capability is essential for enabling loosely coupled systems.
 Coarse-grained: Object-oriented technologies such as Java expose their services through individual methods, and an individual method is too fine-grained an operation to be useful at the enterprise level. Building a Java application from the ground up requires developing several fine-grained methods that are then composed into a coarse-grained service consumed by a client or by another service. Businesses, and the interfaces they expose, should be coarse-grained, and building web services is an easy way to define coarse-grained services that have access to substantial enterprise business logic.
 Supports remote procedure calls: Consumers can use XML-based protocols to call procedures, functions, and methods on remote objects exposed as web services. A web service must support the input and output framework of the remote system.
Enterprise-wide component development: Over the years, Enterprise JavaBeans (EJBs) and .NET components have become more prevalent in architectural and enterprise deployments, and several RPC techniques are used to distribute and access them. A web service can support RPC by providing services of its own, equivalent to those of a traditional component, or by translating incoming invocations into an invocation of an EJB or a .NET component.
 Supports document exchange: One of the most attractive features of XML is its generic way of representing not only data but also complex documents, and web services support the transparent exchange of such documents.
Container as a Service (CaaS)
A container is a unit of software that packages application code together with its libraries and dependencies so that it can run anywhere, whether on a desktop, in traditional IT, or in the cloud. To do this, containers take advantage of operating system (OS) virtualization, in which OS features (namespaces and control groups in the Linux kernel) are used to partition access to CPU, memory, and disk. Container as a Service (CaaS) is a cloud service model that allows users to upload, build, start, stop, scale, and otherwise manage containers, applications, and clusters. It enables these operations through container-based virtualization, an application programming interface (API), or a web portal interface. CaaS helps users build rich, secure, containerized applications in local or cloud data centers. With this model, containers and clusters are consumed as a service and deployed on-site, in the cloud, or in data centers. CaaS assists development teams in deploying and managing systems efficiently while providing more control over container orchestration than PaaS permits. In Containers-as-a-Service (CaaS), the service provider empowers customers to manage and deploy containerized applications and clusters. CaaS is sometimes regarded as a special form of the infrastructure-as-a-service (IaaS) model of cloud service delivery, but one where the main assets are containers rather than virtual machines and physical hardware.
Advantages of Container as a Service (CaaS):
 Containers and CaaS make it easy to deploy and design distributed applications or build microservices.
 A collection of containers can handle different responsibilities or different coding environments during development.
 Network protocol relationships between containers can be defined and enforced.
 CaaS promises that these defined and dedicated container structures can be quickly deployed to cloud clusters.
 For example, consider a software system designed with a microservices architecture, in which the services are organized by business domain. The service domains might be payment, authentication, and a shopping cart.
 Using CaaS, these application containers can be deployed to a live system instantly.
 Posting the installed application to the CaaS platform enables observation of its performance through log aggregation and monitoring tools.
 CaaS also includes built-in automated scaling and orchestration management.
 It enables teams to quickly build distributed systems with high visibility and high availability.
 Furthermore, CaaS increases team development velocity by enabling rapid deployment.
 Containers standardize deployment, and CaaS can reduce operational engineering costs by reducing the DevOps resources required to manage deployments.
Disadvantages of Container as a Service (CaaS): Extracting business data from the cloud carries risk, and depending on the provider, there are limits to the technology available.
Security issues:
 Containers are often considered safer than their virtual machine counterparts, but they still carry some risks.
 Although they are platform agnostic, containers share the same kernel as the host operating system.
 This puts all containers at risk if that kernel is targeted.
 As containers are deployed in the cloud via CaaS, the risk increases accordingly.
Performance limits:
 Containers are virtualized environments and do not run directly on bare metal.
 The extra layer between the application containers and the bare metal introduces some overhead.
 Combined with the network overhead of connecting containers to their hosts, the result can be a significant performance loss.
 Businesses therefore face some loss of container performance even when high-quality hardware is available.
 For this reason, running directly on bare metal is sometimes recommended to test an application's full potential.
How does CaaS work?
Container as a Service is computing accessed from the cloud and used to upload, build, manage, and deploy container-based applications on cloud platforms. Connections to the cloud-based environment can be made through a graphical user interface (GUI) or API calls. The essence of the entire CaaS platform is an orchestration tool that enables the management of complex container structures; orchestration tools coordinate the running containers and enable automated operations. The orchestrator built into a CaaS framework directly shapes the services provided to the service's users.
What is a Container in CaaS?
Virtualization has been one of the most important paradigms in computing and software development over the past decade, leading to increased resource utilization and reduced time-to-value for development teams while reducing the duplication required to deliver services. The ability to deploy applications in virtualized environments means that development teams can more easily replicate the conditions of a production environment and operate more narrowly targeted applications at a lower cost, which reduces the amount of work required.
Virtualization meant that a user could divide processing power among multiple virtual environments running on the same machine. Still, each environment consumed a substantial amount of memory, because each virtual environment had to run its own operating system; running five or six operating system instances on the same hardware can be extremely resource-intensive. Containers emerged as a mechanism to gain finer control over virtualization. Instead of virtualizing an entire machine, including the operating system and hardware, containers create a separate context in which an application and its important dependencies, such as binaries and configuration files, form a discrete package. Both containers and virtual machines allow applications to be deployed in virtual environments. The main difference is that the container environment contains only those files that the application needs to run, whereas virtual machines contain many additional files and services, resulting in increased resource usage without providing additional functions. As a result, a computer that may be capable of running 5 or 6 virtual machines can run tens or even hundreds of containers.
What are containers used for?
One of the major advantages of containers is that they take significantly less time to start than virtual machines: because containers share the host's Linux kernel, they do not have to boot an operating system, whereas each virtual machine must boot its own at start-up. The fast spin-up times of containers make them ideal for large distributed applications with many separate services that must be started, run, and terminated in a relatively short time frame. This process takes less time with containers than with virtual machines and uses fewer CPU resources, making it significantly more efficient. Containers fit well with applications built on a microservices architecture, in which small services communicate with one another, rather than the traditional monolithic application architecture. Whereas traditional monolithic applications tie every part of the application together, most applications today are developed in the microservices model: the application consists of separate microservices or features deployed in containers and exposed through an API. The use of containers makes it easy for developers to check the health and security of individual services within applications, turn services on and off in production environments, and ensure that individual services meet performance and CPU usage goals.
CaaS vs. PaaS, IaaS, and FaaS:
 CaaS vs. PaaS: Platform as a Service (PaaS) consists of a third party providing a combined platform, including hardware and software. The PaaS model allows end-users to develop, manage, and run their applications while the platform provider manages the infrastructure. In addition to storage and other computing resources, providers typically offer tools for application development, testing, and deployment. CaaS differs from PaaS in that it is a lower-level service that provides only a specific infrastructure component: a container. CaaS services can still provide development services and tools such as CI/CD release management, which brings them closer to a PaaS model.
 CaaS vs. IaaS: Infrastructure as a Service (IaaS) provides raw computing resources such as servers, storage, and networks in the public cloud. It allows organizations to scale resources without upfront costs and with less risk and overhead.
CaaS differs from IaaS in that it provides an abstraction layer on top of raw hardware resources. IaaS services such as Amazon EC2 provide compute instances, essentially computers with operating systems running in the public cloud. CaaS services run and manage containers on top of these virtual machines, or in the case of services such as Azure Container Instances, allowing users to run containers directly on bare metal resources.
 CaaS vs. FaaS: Function as a Service (FaaS), also known as serverless computing, is suitable for users who need to run a specific function or component of an application without managing servers. With FaaS, the service provider automatically manages the physical hardware, virtual machines, and other infrastructure, while the user provides the code and pays per execution time or number of executions. CaaS differs from FaaS because it provides direct access to the infrastructure: users can configure and manage the containers themselves. However, some CaaS services, such as Amazon Fargate, use a serverless deployment model to provide container services while abstracting servers away from users, making them more similar to the FaaS model.
What is a Container Cluster in CaaS?
A container cluster is a dynamically managed system of containers, grouped into pods and running on nodes. It also manages all the interconnections and communication channels that tie the containers together within the system. A container cluster consists of three major components:
 Dynamic container placement: Container clusters rely on cluster scheduling, whereby workloads packaged in a container image can be intelligently allocated among virtual and physical machines based on their capacity, CPU, and hardware requirements. The cluster scheduler enables flexible management of container-based workloads by automatically rescheduling tasks when a failure occurs, growing or shrinking the cluster when appropriate, and spreading workloads across machines to reduce or eliminate the risk of correlated failures. Dynamic container placement is all about automating the execution of workloads by sending each container to the right place for execution.
 Thinking in sets of containers: For companies using CaaS that require large numbers of containers, it is useful to start thinking about sets of containers rather than individual ones. CaaS providers enable their customers to configure pods, collections of co-scheduled containers, in any way they like. Instead of scheduling containers one by one, users can group containers into pods to ensure that certain sets of containers are executed together on the same host.
 Connecting within a cluster: Many newly developed applications are built from microservices that are networked to communicate with each other. Each of these microservices is deployed in a container that runs on a node, and the nodes must be able to communicate with each other effectively. Each node holds information such as its hostname and IP address, the status of all running nodes, the node's currently available capacity for scheduling additional pods, and other software information. Communication between nodes is necessary to maintain a failover system, so that if an individual node fails, its workload can be sent to an alternate or backup node for execution.
Why are containers important?
With the help of containers, application code can be packaged so that we can run it anywhere.
 They promote portability between multiple platforms.
 They help products release faster.
 They provide increased efficiency for developing and deploying innovative solutions and designing distributed systems.
Why is CaaS important?
 It helps developers build fully scaled containers and streamlines application deployment.
 It helps simplify container management.
 It automates key IT tasks through tools such as Kubernetes and Docker.
 It helps increase the velocity of team development, resulting in faster development and deployment.
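As a small illustration of the kind of programmatic container management that a CaaS platform or orchestrator builds on, here is a sketch using the Docker SDK for Python. It assumes Docker and the `docker` Python package are installed locally; the image name, container name, and port mapping are arbitrary examples.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()  # connect to the local Docker daemon

# Start a container from a public image, detached, with a port mapping.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},  # host port 8080 -> container port 80
    name="caas-demo",
)

container.reload()                       # refresh cached state from the daemon
print("status:", container.status)       # e.g. "running"
print(container.logs(tail=5).decode())   # last few log lines

# Stop and remove the container when done.
container.stop()
container.remove()
```

A CaaS offering performs these same operations, but across a fleet of hosts and behind an API or web portal, with an orchestrator deciding where each container runs.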
Conclusion: There is a reason so many practitioners swear by containers. Ease of operation, resource friendliness, elegance, and portability make them a clear favorite in the development community, and the benefits offered by containers far outweigh the disadvantages.
Fault Tolerance in Cloud Computing
Fault tolerance in cloud computing means creating a blueprint for keeping work going whenever some parts are down or unavailable. It helps enterprises evaluate their infrastructure needs and requirements and keeps services available when a given device becomes unavailable for some reason. It does not mean that the alternative system provides 100% of the full service; the concept is to keep the system usable and, most importantly, operating at a reasonable level. This matters if enterprises are to keep growing continuously and increase their productivity levels.
Main Concepts behind Fault Tolerance in Cloud Computing
 Replication: Fault-tolerant systems work by running multiple replicas of each service. Thus, if one part of the system goes wrong, other instances can be used to keep it running instead. For example, take a database cluster that has three servers with the same information on each: all actions such as data entry, update, and deletion are written to each of them. Redundant servers remain idle until the fault-tolerance system demands their availability.
 Redundancy: When a part of the system fails or goes down, it is important to have a backup in place. The server works with standby databases and other redundant services. For example, a web application with MS SQL as its database may fail midway due to a hardware fault; the redundancy concept then brings a standby database online while the original is offline.
Techniques for Fault Tolerance in Cloud Computing
 Priorities should be assigned to all services while designing a fault-tolerance system. Special preference should be given to the database, as it powers many other entities.
 After setting the priorities, the enterprise has to run mock tests. For example, suppose an enterprise has a forum website that enables users to log in and post comments. If the authentication service fails, users will not be able to log in, the forum becomes read-only, and it no longer serves its purpose. With a fault-tolerant system, healing is ensured and users can still search for information with minimal impact.
Major Attributes of Fault Tolerance in Cloud Computing
 No single point of failure: The concepts of redundancy and replication mean that faults can occur but only with minor effects. If there is a single point of failure, the system is not fault-tolerant.
 Fault isolation: A fault is handled separately from the rest of the system, which helps isolate the enterprise from an existing system failure.
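To make the replication and redundancy ideas above concrete, here is a minimal failover sketch in Python: the client tries the primary endpoint first and falls back to the replicas if it fails. The host names and the health-check path are illustrative assumptions, not a specific cloud API.

```python
import urllib.request
from urllib.error import URLError

# Primary endpoint plus redundant replicas (hypothetical hosts).
ENDPOINTS = [
    "https://db-primary.example.invalid/health",
    "https://db-replica-1.example.invalid/health",
    "https://db-replica-2.example.invalid/health",
]

def fetch(url: str, timeout: float = 2.0) -> bytes:
    """Read from one endpoint; raises URLError on failure."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

def fetch_with_failover(endpoints: list) -> bytes:
    """Try the primary first, then each replica, until one responds."""
    last_error = None
    for url in endpoints:
        try:
            return fetch(url)
        except URLError as err:
            last_error = err  # remember the failure and move on to the next replica
    raise RuntimeError("all replicas failed") from last_error

if __name__ == "__main__":
    print(fetch_with_failover(ENDPOINTS))
```

Real fault-tolerant systems add health checks, write replication, and automatic promotion of a replica to primary, but the basic pattern of redundant resources standing by is the same.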
Existence of Fault Tolerance in Cloud Computing
 System failure: This can be either a software or a hardware issue. A software failure results in a system crash or hang, which may be due to a stack overflow or other reasons. Improper maintenance of physical hardware machines results in hardware system failure.
 Incidents of security breach: There are many security-related reasons why fault tolerance may be needed. The hacking of a server harms the server and can result in a data breach. Other security-related reasons for requiring fault tolerance include ransomware, phishing, and virus attacks.
Take-Home Points
Fault tolerance in cloud computing is a crucial concept that must be understood in advance. Enterprises are caught unaware when a data leak or a system or network failure results in complete chaos and a lack of preparedness. It is advised that all enterprises actively pursue the matter of fault tolerance. If an enterprise is to keep growing even when some failure occurs, a fault-tolerant system design is necessary. Constraints should not affect the growth of the enterprise, especially when using a cloud platform.
Principles of Cloud Computing
Studying the principles of cloud computing will help you understand the adoption and use of cloud computing. These principles reveal opportunities for cloud customers to move their computing to the cloud and for cloud vendors to deploy a successful cloud environment. The National Institute of Standards and Technology (NIST) describes cloud computing as providing worldwide, on-demand access to computing resources that can be configured based on customer demand. NIST has also introduced the 5-4-3 principle of cloud computing, which comprises five essential characteristics of cloud computing, four deployment models, and three service models.
Five Essential Characteristics
The essential characteristics of cloud computing define the important features of a successful cloud offering; if any of these defining features is missing, it is not cloud computing. Let us now discuss what these essential features are:
 On-demand self-service: Customers can provision computing resources such as server time, storage, network, and applications as per their demands without human interaction with the cloud service provider.
 Broad network access: Computing resources are available over the network and can be accessed using heterogeneous client platforms such as mobiles, laptops, desktops, PDAs, etc.
 Resource pooling: Computing resources such as storage, processing, and network are pooled to serve multiple clients. For this, cloud computing adopts a multitenant model in which the service provider's computing resources are dynamically assigned to customers on demand. The customer is generally not aware of the physical location of these resources, although at a higher level of abstraction the location may be specified.
 Rapid elasticity: Computing resources often appear limitless to a cloud customer because they can be rapidly and elastically provisioned, and just as rapidly released, to match customer demand. Computing resources can be purchased at any time and in any quantity, depending on the customer's demand.
 Measured service: Monitoring and control of the computing resources used by clients can be done by implementing metering at some level of abstraction appropriate to the type of service.
The resources used can be reported with metering capability, thereby providing transparency between the provider and the customer. Principles to Scale Up Cloud Computing This section will discuss the principles that leverage the Internet to scale up cloud computing services.  Federation: Cloud resources are always unlimited for customers, but each cloud has a limited capacity. If customer demand continues to grow, the cloud will have to exceed its potential, for which the form federation of service providers enables collaboration and resource sharing. A federated cloud must allow virtual applications to be deployed on federated sites. Virtual applications should not be location-dependent and should be able to migrate easily between sites. Union members should be independent, making it easier for competing service providers to form unions.  Freedom: Cloud computing services should provide end-users complete freedom that allows the user to use cloud services without depending on a specific cloud provider. Even the cloud provider should be able to manage and control the computing service without sharing internal details with customers or partners.  Isolation: We are all aware that a cloud service provider provides its computing resources to multiple end-users. The end-user must be assured before moving his computing cloud that his data or information will be isolated in the cloud and cannot be accessed by other members sharing the cloud.  Elasticity: Cloud computing resources should be elastic, which means that the user should be free to attach and release computing resources on their demand.  Business Orientation: Companies must ensure the quality of service providers offer before moving mission-critical applications to the cloud. The cloud service provider should develop a mechanism to understand the exact business requirement of the customer and customize the service parameters as per the customer's requirement.  Trust: Trust is the most important factor that drives any customer to move their computing to the cloud. For the cloud to be successful, trust must be maintained to create a federation between the cloud customer, the cloud vendor, and the various cloud providers. So, these are the principles of cloud computing that take advantage of the Internet to enhance cloud computing. A cloud provider considers these principles before deploying cloud services to end-users.
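As a toy illustration of the measured-service and pay-per-use ideas discussed above, the sketch below meters resource usage and computes a bill. The rate card and usage figures are invented for the example and do not reflect any particular provider's pricing.

```python
# Hypothetical pay-per-use rates (illustrative only).
RATES = {
    "vm_hours": 0.045,         # dollars per VM-hour
    "storage_gb_month": 0.02,  # dollars per GB stored per month
    "egress_gb": 0.09,         # dollars per GB of outbound traffic
}

def monthly_bill(usage: dict) -> float:
    """Multiply each metered quantity by its rate and sum the charges."""
    return sum(RATES[item] * quantity for item, quantity in usage.items())

usage = {"vm_hours": 720, "storage_gb_month": 500, "egress_gb": 40}
print(f"Total: ${monthly_bill(usage):.2f}")  # 720*0.045 + 500*0.02 + 40*0.09 = 46.00
```

This is the transparency that metering provides: both the provider and the customer can see exactly which resources were consumed and what they cost.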
We trace the roots of cloud computing by focusing on the advancement of technologies in hardware (multi-core chips, virtualization), Internet technologies (Web 2.0, web services, service-oriented architecture), distributed computing (grids and clusters), and system management (data center automation, autonomic computing). Some of these technologies were still immature in the early stages of their development; a process of specification and maturation followed, leading to widespread adoption. The emergence of cloud computing is closely linked to these technologies, and we take a closer look at the technologies that form the basis of cloud computing and give a canvas of the cloud ecosystem. Cloud computing has several roots in Internet technologies, which help computers increase their capability and become more powerful. In cloud computing, there are three main types of services: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). There are also four types of cloud deployment: private, public, hybrid, and community.
What is Cloud Computing? "Cloud computing consists of many servers that host web services and data storage. The technology allows companies to eliminate the requirement for costly, powerful in-house systems." Company data is stored on low-cost servers, and employees can easily access the data over an ordinary network connection. In the traditional data system, the company maintains the physical hardware, which costs a lot, while cloud computing supplies a virtual platform. On that virtual platform, the servers host the applications and the data is handled by a separate provider, whom we pay for the service. Cloud computing has developed tremendously with the advancement of Internet technologies, and it is an especially attractive concept for firms with low capitalization. Most companies are switching to cloud computing to provide flexibility, accuracy, speed, and low cost to their customers. Cloud computing has many applications, such as infrastructure management, application execution, and data access management.
There are four roots of cloud computing, which are given below:
Root 1: Internet Technologies: The first root is Internet technologies, which include service-oriented architecture, Web 2.0, and web services. Internet technologies are commonly accessible by the public: people access content and run applications that depend on network connections, while cloud computing relies on centralized storage, networks, and bandwidth. The Internet, however, is not a single network under centralized management; it is a highly multiplexed network of networks, which is why anyone can host any number of websites anywhere in the world, and network servers make it possible to create a great many of them. Service-Oriented Architecture packages business functions as self-contained modules; it is used for services such as authentication, business management, and event logging, and it saves a lot of paperwork and time. Web services use common mechanisms such as XML and HTTP to deliver services over the web, making the web service a universal, globally understood concept. Web 2.0 services are convenient for users, who do not need to know much about programming or coding concepts to use them. IT companies provide platforms on which people can access these services; predefined templates and building blocks make them easy to work with, and users can work together via a centralized cloud computing system.
Examples of Web 2.0 services are hosted services such as Google Maps, micro blogging sites such as Twitter, and social sites such as Facebook. Root 2: Distributed Computing The second root of cloud computing is distributed computing, that includes the grid, utility computing, and cluster. To understand it more easily, here's an example, computer is a storage area, and save documents in the form of files or pictures. Each document stored in a computer has some specific location, on a hard disk or stored on the Internet. When someone visits the website on the Internet, that person browses by downloading the files. Users can access files at a location after processing; it can send the file back to the server. So, it is known as the distributed computing of the cloud. People can access it from anywhere in overseas. All resources in memory space, processor speed and hard disk space are used with the help of the route. The company using the technology never faces any problem and will always be in competition with other companies too. Root 3: Hardware: The third one is the hardware by the roots of cloud computing, that includes multi-core chips and virtualization. When we talk about the hardware, it is virtual cloud computing and people do not need it more. Computers require hardware like Random access memory, CPU, , Read Only Memory and motherboard to store, process, analyze and manage the data and information. There are no hardware devices because in cloud computing all the apps are managed by the internet. If you are using huge amount of data, it becomes so difficult for your computer to manage the continuous increase in data. The cloud stores the data on its own computer slightly than the computer that holds the data. Virtualization allows the people to access the resources from virtual machines in cloud computing. It makes it cheaper for customers to use the cloud services. Furthermore, in the Service Level Agreement based cloud computing model, each customer gets their virtual machine called a Virtual Private Cloud (VPC). The single cloud computing platform which distribute the hardware, software and operating systems. Root 4: System Management: The fourth root of cloud computing contains autonomous cloud and data center automation here. System management handles operations to improve productivity and efficiency of the root system. To achieve it, the system management ensures that all the employees have an easy access to the necessary data and information. Employees can change the configuration, receive/retransmit information and perform other related tasks from any location. It makes for the system administrator to respond to any user demand. In addition, the administrator can restrict or deny access for different users. In the autonomous system, the administrator task becomes easier as the system is autonomous or self-managing. Additionally, data analysis is controlled by sensors. System responses perform many functions such as optimization, configuration, and protection based on the data. Therefore, human involvement is low here, but here the computing system handles most of the work. Difference between roots of cloud computing The most fundamental differences between utilities and clouds are in storage, bandwidth, and power availability. In a utility system, all these utilities are provided through the company, whereas in a cloud environment, it is provided through the provider you work with. 
You might use a file-sharing service to upload pictures, documents, and files to a server that runs remotely. Holding that data requires many physical storage devices with access to electricity and the Internet. The physical components behind the file-sharing service are provided by the third-party service provider's data center, which you reach over the Internet. Many different Internet technologies can make up the infrastructure of a cloud.
For example, even if an organization has only a slower Internet connection, it can still move its data to the cloud without investing in better hardware infrastructure.
Conclusion: The cloud is a combination of these four roots running on remote servers. Many organizations are moving toward the technology because it manages huge amounts of memory, hardware, and other resources for them. The potential of the technology is enormous, as it increases the overall efficiency, security, reliability, and flexibility of businesses.
What is a Data Center?
A data center (also written as datacenter) is a facility made up of networked computers, storage systems, and computing infrastructure that businesses and other organizations use to organize, process, store, and disseminate large amounts of data. A business typically relies heavily on the applications, services, and data within a data center, making it a focal point and critical asset for everyday operations. Enterprise data centers increasingly incorporate cloud computing resources and facilities to secure and protect in-house, onsite resources. As enterprises increasingly turn to cloud computing, the boundaries between cloud providers' data centers and enterprise data centers become less clear.
How do Data Centers work?
A data center facility enables an organization to assemble its resources and infrastructure for data processing, storage, and communication, including:
 systems for storing, sharing, accessing, and processing data across the organization;
 physical infrastructure to support data processing and data communication; and
 utilities such as cooling, electricity, network access, and uninterruptible power supplies (UPS).
Gathering all these resources in one data center enables the organization to:
 protect proprietary systems and data;
 centralize IT and data processing employees, contractors, and vendors;
 enforce information security controls on proprietary systems and data; and
 realize economies of scale by consolidating sensitive systems in one place.
Why are data centers important?
Data centers support almost all enterprise computing, storage, and business applications. To the extent that the business of a modern enterprise runs on computers, the data center is the business. Data centers enable organizations to concentrate their processing power, which in turn enables the organization to focus its attention on:
 IT and data processing personnel;
 computing and network connectivity infrastructure; and
 computing facility security.
What are the main components of Data Centers?
Elements of a data center are generally divided into three categories:
 compute
 enterprise data storage
 networking
A modern data center concentrates an organization's data systems in a well-protected physical infrastructure, which includes:
 servers;
 storage subsystems;
 networking switches, routers, and firewalls;
 cabling; and
 physical racks for organizing and interconnecting IT equipment.
Datacenter resources typically include:
 power distribution and supplementary power subsystems;
 electrical switching;
 UPS;
 backup generators;
 ventilation and data center cooling systems, such as in-row cooling configurations and computer room air conditioners; and
 adequate provision for network carrier (telecom) connectivity.
All of this demands a physical facility with physical security access controls and sufficient square footage to hold the entire collection of infrastructure and equipment.
How are Datacenters managed?
Datacenter management is required to administer many different topics related to the data center, including:  Facilities Management. Management of a physical data center facility may include duties related to the facility's real estate, utilities, access control, and personnel.  Datacenter inventory or asset management. Datacenter features include hardware assets and software licensing, and release management.  Datacenter Infrastructure Management. DCIM lies at the intersection of IT and facility management and is typically accomplished by monitoring data center performance to optimize energy, equipment, and floor use.  Technical support. The data center provides technical services to the organization, and as such, it should also provide technical support to the end-users of the enterprise.  Datacenter management includes the day-to-day processes and services provided by the data center.
(Image: an IT professional installing and maintaining a high-capacity rack-mounted system in a data center.)
Datacenter Infrastructure Management and Monitoring: Modern data centers make extensive use of monitoring and management software. Such software, including DCIM tools, allows remote IT administrators to monitor the facility and equipment, measure performance, detect failures, and implement a wide range of corrective actions without ever physically entering the data center room.
The development of virtualization has added another important dimension to data center infrastructure management. Virtualization now supports the abstraction of servers, networks, and storage, allowing each computing resource to be organized into pools regardless of physical location. Network, storage, and server virtualization can all be implemented through software, which is giving software-defined data centers traction. Administrators can then provision workloads, storage instances, and even network configurations from those common resource pools. When administrators no longer need those resources, they can return them to the pool for reuse.
Energy Consumption and Efficiency
Datacenter designs also recognize the importance of energy efficiency. A simple data center may require only a few kilowatts of energy, but enterprise data centers may require more than 100 megawatts. Today, green data centers that minimize environmental impact through low-emission building materials, catalytic converters, and alternative energy technologies are growing in popularity. Data centers can also maximize efficiency through physical layouts known as hot aisle and cold aisle layouts. Server racks are lined up in alternating rows, with cold-air intakes facing one way and hot-air exhausts facing the other. The result is alternating hot and cold aisles, with the exhausts forming a hot aisle and the intakes forming a cold aisle. The exhausts point toward the air conditioning equipment, which is often placed between the server cabinets in the row or aisle and pushes the cold air back into the cold aisle; this configuration of air conditioning equipment is known as in-row cooling.
Organizations often measure data center energy efficiency through power usage effectiveness (PUE), which represents the ratio of the total power entering the data center divided by the power used by IT equipment. The rise of virtualization has allowed for more productive use of IT equipment, resulting in much higher efficiency, lower energy usage, and reduced energy costs, so metrics such as PUE are no longer the sole focus of energy efficiency goals. However, organizations can still assess PUE and use comprehensive power and cooling analysis to better understand and manage energy efficiency.
Datacenter Levels:
Data centers are not defined by their physical size or style. Small businesses can operate successfully with multiple servers and storage arrays networked within a closet or small room, while major computing organizations such as Facebook, Amazon, or Google can fill a vast warehouse space with data center equipment and infrastructure. In other cases, data centers can be assembled into mobile installations, such as shipping containers, also known as data centers in a box, that can be moved and deployed as needed. However, data centers can be defined by different levels of reliability or resiliency, sometimes referred to as data center tiers.
In 2005, the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published the standard ANSI/TIA-942, "Telecommunications Infrastructure Standards for Data Centers", which defined four levels of data center design and implementation guidelines. Each subsequent level aims to provide greater flexibility, security, and reliability than the previous level. For example, a Tier I data center is little more than a server room, while a Tier IV data center provides redundant subsystems and higher security. Levels can be differentiated by available resources, data center capabilities, or uptime guarantees. The Uptime Institute defines data center levels as:  Tier I. These are the most basic types of data centers, including UPS. Tier I data centers do not provide redundant systems but must guarantee at least 99.671% uptime.  Tier II.These data centers include system, power and cooling redundancy and guarantee at least 99.741% uptime.  Tier III. These data centers offer partial fault tolerance, 72-hour outage protection, full redundancy, and a 99.982% uptime guarantee.  Tier IV. These data centers guarantee 99.995% uptime - or no more than 26.3 minutes of downtime per year - as well as full fault tolerance, system redundancy, and 96 hours of outage protection. Most data center outages can be attributed to these four general categories.
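The uptime guarantees in the tier list above translate directly into allowable downtime per year, and PUE is a simple ratio; the short sketch below works both out. The facility and IT power figures used for the PUE line are made-up illustrative values.

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960 minutes

# Uptime guarantees by tier, taken from the list above.
tiers = {"Tier I": 0.99671, "Tier II": 0.99741, "Tier III": 0.99982, "Tier IV": 0.99995}

for tier, uptime in tiers.items():
    downtime_minutes = (1 - uptime) * MINUTES_PER_YEAR
    print(f"{tier}: at most {downtime_minutes:.1f} minutes of downtime per year")
# Tier IV works out to roughly 26.3 minutes, matching the figure quoted above.

# PUE (power usage effectiveness) = total facility power / power used by IT equipment.
total_facility_kw, it_equipment_kw = 1500.0, 1000.0  # illustrative numbers
print(f"PUE = {total_facility_kw / it_equipment_kw:.2f}")  # 1.50
```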
Datacenter Architecture and Design
Although almost any suitable location can serve as a data center, a data center's deliberate design and implementation require careful consideration. Beyond the basic issues of cost and taxes, sites are selected based on several criteria: geographic location, seismic and meteorological stability, access to roads and airports, availability of energy and telecommunications, and even the prevailing political environment. Once the site is secured, the data center architecture can be designed, with attention to the structure and layout of the mechanical and electrical infrastructure and the IT equipment. These issues are guided by the availability and efficiency goals of the desired data center tier.
Datacenter Security
Datacenter designs must also implement sound safety and security practices. For example, safety is often reflected in the layout of doors and access corridors, which must accommodate the movement of large, cumbersome IT equipment and allow employees to access and repair infrastructure. Fire suppression is another major safety area, and the widespread use of sensitive, high-energy electrical and electronic equipment precludes common sprinklers. Instead, data centers often use environmentally friendly chemical fire suppression systems, which effectively starve fires of oxygen while minimizing collateral damage to equipment. Comprehensive security measures and access controls are needed because the data center is also a core business asset. These may include:
 badge access;
 biometric access controls; and
 video surveillance.
These security measures can help detect and prevent employee, contractor, and intruder misconduct.
What is Data Center Consolidation?
A business is not limited to a single data center. Modern businesses can use two or more data center installations in multiple locations for greater resilience and better application performance, reducing latency by locating workloads closer to users. Conversely, a business with multiple data centers may choose to consolidate them into fewer locations to reduce the cost of IT operations. Consolidation typically occurs during mergers and acquisitions, when the combined business no longer needs the data centers owned by the acquired business.
What is Data Center Colocation?
Organizations can also pay a fee to rent server space in a colocation facility. Colocation is an attractive option for organizations that want to avoid the large capital expenditures associated with building and maintaining their own data centers. Today, colocation providers are expanding their offerings to include managed services such as interconnectivity, allowing customers to connect to the public cloud. Because many service providers today offer managed services alongside their colocation facilities, the definition of managed services becomes hazy, as vendors market the term slightly differently. The important distinction to make is:
 Colocation: The organization pays a vendor to place its hardware in a facility; the customer is paying for the location alone.
 Managed services: The organization pays the vendor to actively maintain or monitor the hardware through performance reports, interconnectivity, technical support, or disaster recovery.
What is the difference between Data Center vs. Cloud?
Cloud computing vendors offer features similar to those of enterprise data centers. The biggest difference between a cloud data center and a typical enterprise data center is scale.
Because cloud data centers serve many different organizations, they can become very large. And cloud computing vendors offer these services through their data centers. Large enterprises such as Google may require very large data centers, such as the Google data center in Douglas County, Ga. Because enterprise data centers increasingly implement private cloud software, they increasingly see end-users, like the services provided by commercial cloud providers. Private cloud software builds on virtualization to connect cloud-like services, including:  system automation;  user self-service; And  Billing/Charge Refund to Data Center Administration. The goal is to allow individual users to provide on-demand workloads and other computing resources without IT administrative intervention. Further blurring the lines between the enterprise data center and cloud computing is the development of hybrid cloud environments. As enterprises increasingly rely on public cloud providers, they must incorporate connectivity between their data centers and cloud providers. For example, platforms such as Microsoft Azure emphasize hybrid use of local data centers with Azure or other public cloud resources. The result is not the elimination of data centers but the creation of a dynamic environment that allows organizations to run workloads locally or in the cloud or move those instances to or from the cloud as desired. Evolution of Data Centers The origins of the first data centers can be traced back to the 1940s and the existence of early computer systems such as the Electronic Numerical Integrator and Computer (ENIAC). These early machines were complicated to maintain and operate and had cables connecting all the necessary components. They were also in use by the military - meaning special computer rooms with racks, cable trays, cooling mechanisms, and access restrictions were necessary to accommodate all equipment and implement appropriate safety measures. However, it was not until the 1990s, when IT operations began to gain complexity and cheap networking equipment became available, that the term data center first came into use. It became possible to store all the necessary servers in one room within the company. These specialized computer rooms gained traction, dubbed data centers within organizations. At the time of the dot-com bubble in the late 1990s, the need for Internet speed and a constant Internet presence for companies required large amounts of networking equipment required large facilities. At this point, data centers became popular and began to look similar to those described above. In the history of computing, as computers get smaller and networks get bigger, the data center has evolved and shifted to accommodate the necessary technology of the day.
A data center can be described as a facility or location of networked computers and associated components (such as telecommunications and storage) that helps businesses and organizations handle large amounts of data. These data centers allow data to be organized, processed, stored, and transmitted across the applications used by businesses.
Types of Data Center: Businesses use different types of data centers, including:
 Telecom data center: A data center operated by a telecommunications or service provider. It requires high-speed connectivity to function.
 Enterprise data center: A data center built and owned by a company, which may or may not be onsite.
 Colocation data center: A data center in which a single facility owner rents space, power, and cooling to multiple enterprise and hyperscale customers.
 Hyperscale data center: A data center owned and operated by the company it serves.
Difference between Cloud and Data Center:
1. The cloud is a virtual resource that helps businesses store, organize, and operate data efficiently; a data center is a physical resource that serves the same purpose.
2. Scaling the cloud requires comparatively little investment; scaling a data center requires a huge investment compared to the cloud.
3. Cloud maintenance costs are lower because the service provider performs the maintenance; data center maintenance costs are high because the organization's own developers perform it.
4. With the cloud, the organization must rely on third parties to store its data; with a data center, the organization's own developers are trusted with the stored data.
5. Cloud performance is high relative to the investment; data center performance is lower relative to the investment.
6. The cloud requires a plan for optimizing usage; a data center is easily customizable without extensive planning.
7. The cloud requires a stable internet connection to provide its functions; a data center may or may not require an internet connection.
8. The cloud is easy to operate and is considered a viable option; data centers require experienced developers to operate and are not always considered a viable option.
Resilience in Cloud Computing
Resilient computing is a form of computing that distributes redundant IT resources for operational purposes. In this approach, IT resources are pre-configured so that, when they are needed at processing time, they can be used without interruption. The characteristic of resiliency in cloud computing can refer to redundant IT resources within a single cloud or across multiple clouds. By taking advantage of the resiliency of cloud-based IT services, cloud consumers can improve both the efficiency and availability of their applications: the system detects a fault, fixes it, and continues operating. Cloud resilience is a term used to describe the ability of servers, storage systems, data servers, or entire networks to remain connected without interrupting their functions or losing their operational capabilities. For a cloud system to remain resilient, it needs clustered servers, redundant workloads, and often multiple physical servers; high-quality products and services accomplish this.
The three basic strategies used to improve a cloud system's resilience are:
 Testing and Monitoring: Independent checks ensure that equipment meets minimum behavioural requirements. This is important for detecting system failures and reconfiguring resources.
 Checkpoint and Restart: Based on such conditions, the state of the whole system is saved. System failures represent a phase of restoration to the most recent corrected checkpoint and recovery of the system.  Replication: The essential components of a device are replicated, using additional resources (hardware and software), ensuring that they are usable at any given time. With this strategy, the additional difficulty is the state synchronization task between replicas and the main device. Security with Cloud Technology Cloud technology, used correctly, provides superior security to customers anywhere. High-quality cloud products can protect against DDoS (Distributed Denial of Service) attacks, where a cyberattack affects the system's bandwidth and makes the computer unavailable to the user. Cloud protection can also use redundant security mechanisms to protect someone's data from being hacked or leaked. In addition, cloud security allows one to maintain regulatory compliance and control advanced networks while improving the security of sensitive personal and financial data. Finally, having access to high-quality customer service and IT support is critical to fully taking advantage of these cloud security benefits. Advantages of Cloud Resilience The permanence of the cloud is considered a way of responding to the "crisis". It refers to data and technology. The infrastructure, consisting of virtual servers, is built to handle sufficient computing power and data volume variability while allowing ubiquitous use of various devices, such as laptops, smartphones, PCs, etc. All data can be recovered if the computer machine is damaged or destroyed and guarantees the stability of the infrastructure and data. Issues or Critical aspects of Resiliency A major problem is how cloud application resilience can be tested, evaluated and defined before going live, so that system availability is protected against business objectives. Traditional research methods do not effectively reveal cloud application durability problems for many factors. Heterogeneous and multi-layer architectures are vulnerable to failure due to the sophistication of the interactions of different software entities. Failures are often asymptomatic and remain hidden as internal equipment errors unless their visibility is due to special circumstances. Poor scheduling of production usage patterns and the architecture of cloud applications result in unexpected 'accidental' behaviour, especially hybrid and multi-cloud. Cloud layers can have different stakeholders managed by different administrators, resulting in unexpected configuration changes during application design that cause interfaces to break.
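A minimal sketch of the checkpoint-and-restart strategy described above, using only the Python standard library: the program saves its state to disk after every unit of work and, after a crash or restart, resumes from the most recent checkpoint. The file name and the "work" being done are illustrative.

```python
import json
import os

CHECKPOINT_FILE = "checkpoint.json"  # illustrative checkpoint location

def load_checkpoint() -> dict:
    """Resume from the most recent saved state, or start fresh."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {"next_item": 0, "results": []}

def save_checkpoint(state: dict) -> None:
    """Persist the current state so a restart can pick up from here."""
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(state, f)

def process(item: int) -> int:
    return item * item  # stand-in for real work

state = load_checkpoint()
for item in range(state["next_item"], 10):
    state["results"].append(process(item))
    state["next_item"] = item + 1
    save_checkpoint(state)  # checkpoint after every unit of work

print("completed with results:", state["results"])
```

In a cloud setting the checkpoint would normally be written to durable shared storage rather than a local file, so that a replacement instance on another host can restart from it.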
• 65. Security in cloud computing is a major concern. Proxy and brokerage services should be employed to prevent a client from accessing shared data directly, and data in the cloud should be stored in encrypted form.
Security Planning
Before deploying a particular resource to the cloud, one needs to analyze several aspects of the resource, such as:
 Select the resource that needs to move to the cloud and analyze its sensitivity to risk.
 Consider cloud service models such as IaaS, PaaS, and SaaS. These models require the customer to be responsible for security at different service levels.
 Consider the cloud type, such as public, private, community, or hybrid.
 Understand the cloud service provider's system regarding data storage and its transfer into and out of the cloud.
 The risk in a cloud deployment mainly depends upon the service model and the cloud type.
Understanding Security of Cloud
Security Boundaries
The Cloud Security Alliance (CSA) stack model defines the boundaries between each service model and shows how the different functional units relate to each other. A particular service model defines the boundary between the responsibilities of the service provider and those of the customer. The following diagram shows the CSA stack model:
Key Points of the CSA Model
 IaaS is the most basic level of service, with PaaS and SaaS the next two levels above it.
 Moving upwards, each service inherits the capabilities and security concerns of the model beneath it.
 IaaS provides the infrastructure, PaaS provides the platform development environment, and SaaS provides the operating environment.
 IaaS has the lowest level of integrated functionality and security, while SaaS has the highest.
 The model describes the security boundaries at which the cloud service provider's responsibilities end and the customer's responsibilities begin.
 Any security mechanism below the security boundary must be built into the system and maintained by the customer.
 Although each service model has its own security mechanisms, the security requirements also depend on where these services are located: in a private, public, hybrid, or community cloud.
Understanding Data Security
Since all data is transferred using the Internet, data security in the cloud is a major concern. The key mechanisms for protecting data are:
o access control
o audit trails
o authentication
o authorization
The service model should include security mechanisms working in all of the above areas.
Isolated Access to Data
Since data stored in the cloud can be accessed from anywhere, we need a mechanism to isolate the data and protect it from direct client access. Brokered cloud storage access is one way of isolating storage in the cloud. In this approach, two services are created:
o A broker, which has full access to the storage but no access to the client.
o A proxy, which has no access to the storage but has access to both the client and the broker.
Working of a Brokered Cloud Storage Access System
When the client issues a request to access data:
o The client's data request goes to the external service interface of the proxy.
o The proxy forwards the request to the broker.
o The broker requests the data from the cloud storage system.
o The cloud storage system returns the data to the broker.
o The broker returns the data to the proxy.
o Finally, the proxy sends the data to the client.
All the above steps are shown in the following diagram:
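The proxy/broker flow just described can be sketched in a few lines of Python. The class and method names below (CloudStorage, Broker, Proxy, handle_request) are illustrative only and not part of any real cloud SDK; the point is simply that the client never holds a reference to the storage.

class CloudStorage:
    # Stands in for the actual cloud storage system.
    def __init__(self):
        self._objects = {"report.txt": b"quarterly figures"}

    def read(self, key):
        return self._objects[key]

class Broker:
    # Has full access to the storage but is never exposed to the client.
    def __init__(self, storage):
        self._storage = storage

    def fetch(self, key):
        return self._storage.read(key)

class Proxy:
    # External service interface; talks only to the client and the broker.
    def __init__(self, broker):
        self._broker = broker

    def handle_request(self, key):
        return self._broker.fetch(key)

storage = CloudStorage()
proxy = Proxy(Broker(storage))
print(proxy.handle_request("report.txt"))   # the client only ever talks to the proxy

Because the proxy holds no storage credentials and the broker is unreachable from outside, compromising the client-facing interface alone does not expose the stored data, which is the motivation for this pattern.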
• 66. Encryption
Encryption helps to protect data from being hacked. It protects both the data being transferred and the data stored in the cloud. Although encryption helps protect data from unauthorized access, it does not prevent data loss.
Why is cloud security architecture important?
The difference between "cloud security" and "cloud security architecture" is that the former is built from problem-specific measures while the latter is built from threats. A cloud security architecture can reduce or eliminate the gaps in security that point solutions are almost certain to leave behind. It does this by building downwards: defining threats starting with the users, moving to the cloud environment and service provider, and then to the applications. Cloud security architectures can also reduce redundant security measures that contribute little to threat mitigation while increasing both capital and operating costs. The architecture also organizes security measures, making them more consistent and easier to implement, particularly during cloud deployments and redeployments. Security is often undermined because it is illogical or overly complex, and these flaws can be identified with a proper cloud security architecture.
Elements of cloud security architecture
The best way to approach cloud security architecture is to start with a description of the goals. The architecture has to address three things: an attack surface represented by the external access interfaces, a protected asset set that represents the information being protected, and the vectors that can be used to attack the system directly or indirectly, including from within the cloud itself.
The goals of the cloud security architecture are accomplished through a series of functional elements. These elements are often considered separately rather than as part of a coordinated architectural plan. They include access security or access control, network security, application security, contractual security, and monitoring, sometimes called service security. Finally, there is data protection, which consists of measures implemented at the protected-asset level. A complete cloud security architecture addresses these goals by unifying the functional elements.
Cloud security architecture and the shared responsibility model
Cloud security and cloud security architecture are not single-player exercises. Most enterprises keep a large portion of their IT workflow within their own data centers, local networks, and VPNs. The cloud adds additional players, so the cloud security architecture should be part of a broader shared responsibility model.
A shared responsibility model is both an architecture diagram and a form of contract. It exists formally between a cloud user and each cloud provider, and each network service provider if they are contracted separately. Each divides the components of a cloud application into layers, with the top layers being the responsibility of the customer and the lower layers being the responsibility of the cloud provider. Each separate function or component of the application is mapped to the appropriate layer depending on who provides it. The contract form then describes how each party meets its responsibilities.
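As a concrete illustration of encrypting data before it is stored in the cloud, here is a minimal Python sketch. It assumes the third-party cryptography package is installed; the fake_bucket dictionary and upload_to_cloud function are stand-ins for whatever storage API is actually used, and in practice the key would be kept in a key-management service rather than alongside the data.

from cryptography.fernet import Fernet

# Generate a symmetric key; in a real deployment this lives in a key-management service.
key = Fernet.generate_key()
cipher = Fernet(key)

fake_bucket = {}   # stand-in for a cloud storage bucket

def upload_to_cloud(name, blob):
    # Stand-in for a real cloud storage call; here it just keeps the blob in memory.
    fake_bucket[name] = blob

# Encrypt before upload, so the provider only ever sees ciphertext.
plaintext = b"sensitive customer record"
upload_to_cloud("record-1", cipher.encrypt(plaintext))

# Download and decrypt with the same key.
restored = cipher.decrypt(fake_bucket["record-1"])
assert restored == plaintext

As the section above notes, encryption protects confidentiality but not against data loss: if the key or the ciphertext is lost, the data is unrecoverable, so backups and key management still matter.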