Do you think the cloud is just someone else's computer, running somewhere else?

Well, I'd love to break it down for you; it is the infrastructure, ladies and gents.

If that infrastructure isn't in the cloud, it can be running in a data center somewhere, on-site or off-site, in a closet, on a laptop, or on a desktop :)


Before breaking down the second part of the IAAS acronym, the "AAS," I'd like to spend a moment on the "I" part: the infrastructure.

"AAS" stands for "as a service," the billing method; it is just another term for how you're going to consume that infrastructure.

There are other kinds of services available in the market, such as SAAS, PAAS, and XAAS, covering the different models that SMBs can consume. But what I would like to discuss specifically today is the infrastructure itself, which essentially falls into three main categories:


  • Starting with compute as the first layer: this is where the processors sit, and where the actual lifting and computing get done.
  • The second piece of the puzzle is the GPU: specialized processors designed to handle graphics and image processing. Storage, meanwhile, falls into three main categories (plus lots of smaller ones), as there are many different types of storage.

The most commonly known is Object Storage, which is on the lower end of performance yet relatively inexpensive, suited to general-purpose storage: photos, documents, and the like.

We have Block and File Storage, which I will discuss in further articles in the upcoming weeks.

  • The third piece is HPC, "High-Performance Computing." There is also a binding piece I didn't count as part of the triangle, tying all of the above components together: the network, the essential means of communication, which is how one compute node talks to another.
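To make the object-storage idea concrete, here is a minimal toy sketch of how an object store behaves: a flat namespace of keys mapped to whole, opaque blobs. The class and keys below are purely illustrative, not a real service API.

```python
# Toy in-memory object store: a sketch of object-storage semantics only.
# Real object stores (e.g. Amazon S3) are conceptually similar: flat keys
# mapped to immutable blobs of bytes, written and read as whole objects.

class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # key -> bytes

    def put(self, key, data):
        """Store a whole object under a key; no partial, in-place updates."""
        self._objects[key] = data

    def get(self, key):
        """Retrieve the whole object back by its key."""
        return self._objects[key]

store = ToyObjectStore()
store.put("photos/cat.jpg", b"\xff\xd8...")    # general-purpose blobs:
store.put("docs/report.pdf", b"%PDF-1.7...")   # photos, documents, etc.
print(store.get("docs/report.pdf")[:4])        # b'%PDF'
```

Block and file storage differ exactly here: block storage exposes raw, updatable blocks for databases and scratch data, while file storage exposes a hierarchy of files and folders.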

Piece of cake, isn't it?

I guess not :)


I might have to dig deeper and break it down into smaller pieces, as I myself wasn't able at a certain point to wrap my head around this complex environment.

I am constantly researching how businesses are planning their next steps before the digital train takes off and leaves everyone miles behind.

Going back to compute: this is your typical web server or application server; it can really serve whatever general-purpose computing needs you have.

GPU is where it gets more specific and critical; this is the graphics processor, a very high-speed processor used in conjunction with a traditional CPU for specific types of workloads.

This is going to be your MACHINE LEARNING & AI.


Now let's revisit HPC once again to elaborate, because why not?

When you have workloads whose very specific requirements call for HPC, it goes hand in hand with the GPU, providing increased processing power and faster performance.

It all goes back to the NETWORK; imagine it as a pipe that can start small, sized to your current load.

That same pipe grows as you need to push more data through it: the larger the data, the larger the bandwidth you need. Network traffic is measured over a set period to assess this, anything from data per second or per minute up to data per week or per month.
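Sizing that pipe is simple arithmetic: take the volume you need to move and the window you have to move it in. A minimal sketch, with purely illustrative numbers:

```python
# Back-of-the-envelope bandwidth sizing: how big a pipe do you need to
# move a given volume of data within a given time window?

def required_bandwidth_gbps(data_gb, window_seconds):
    """Gigabits per second needed to move `data_gb` in `window_seconds`."""
    gigabits = data_gb * 8          # 1 gigabyte = 8 gigabits
    return gigabits / window_seconds

# Example: move 1 TB (1000 GB) overnight, in an 8-hour window.
print(round(required_bandwidth_gbps(1000, 8 * 3600), 3))  # 0.278 Gbps
```

Shrink the window from 8 hours to 8 minutes and the same data suddenly needs a pipe sixty times bigger; that is why traffic is assessed over a set period.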


To tie all of this together, I would like to give you an example that requires some specialty components.

I'm going to talk a little bit about an AI workload that is set to do automatic visual recognition of pictures.

Let's say you have a billion pictures in object storage that you are going to use to train your model.

This model essentially runs on GPU servers, so you take those billion pictures (a huge number, I mean, a billion?).

But how do you push all these pictures?

Simply through a really big pipe, your network pipe, up into the GPU server. But this GPU server doesn't have storage inherent to it.

So that GPU server is actually going to write that data into block storage.

Remember the types of storage: Object, Block, File. This data will be read and written back and forth, back and forth, until the model is done.


Once the model is trained, all the data that was pushed up to the GPU, along with the results, gets written back down into object storage.

Why, though?

Simply because it is cost-efficient: object storage is a good archiving solution. You push tons of data through those network pipes only while the GPU servers are turned on, and once you're done, you can just get rid of them.
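The whole round trip above can be sketched with plain Python structures standing in for the real services: dicts for object and block storage, and a function for the GPU training step. Every name here is illustrative, not a real API.

```python
# Sketch of the training data flow: object storage -> network pipe ->
# GPU server (with block-storage scratch space) -> back to object storage.

object_storage = {f"img_{i}": f"pixels_{i}" for i in range(5)}  # training set
block_storage = {}                                              # GPU-attached scratch

def train_on_gpu(images):
    """Stand-in for the GPU step: stage data on block storage, produce a model."""
    block_storage.update(images)     # GPU server writes working data to block storage
    return {"weights": len(images)}  # toy "model" derived from the data

model = train_on_gpu(object_storage)        # 1. pull dataset over the pipe, 2. train
object_storage["model/weights"] = model     # 3. archive results back to object storage
print(model)  # {'weights': 5}
```

At this point the GPU servers and their block-storage scratch can be released; the cheap object-storage copy is what you keep.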

How lovely!

Oh, I almost forgot the AAS, the second part of the IAAS acronym. Beyond "as a service," there is one word it commonly boils down to: "shared."

Offerings consumed as services are, generally speaking, shared, and by shared I mean multi-tenant: many people use the same offering. Providers carve it up and make it available to multiple different customers simultaneously.

For now, I think I should leave some room for a more in-depth overview of the remaining parts of the AAS, as it is truly a wide scope that deserves to be covered in multiple articles in the upcoming weeks.

Till then!

#infrastructure #cloud #cloudsecurity #cloudstorage #compute #computing #cloudcomputing #smbs #digitaltransformation #saas #paas #iaas #xaas #tekrevol #network #ai #artificialintelligence #machinelearning #modeltraining #languageprocessing #dataprocessing #uae #dubai #abudhabi #techindustry #customsoftwaredevelopment #softwaredevelopment

More articles by Rami Khairallah
