Fawg Computing.
I recently had a call with an industry colleague who mentioned he was involved in helping a business move their enterprise and field applications to ‘fogging.’ My first response was, “What’s taffaughin?” My friend, who shares a similar Texas twang, said, “No, to fahgen!” For those of you not from ‘round here, he is saying “TO FOGGING.” It’s a cloud thing. A very low-to-the-ground cloud thing. Although I’m sure somewhere in Western Europe there’s a great band named “Tofahgen.” Or maybe a chocolate bar.
Fog Computing, or “fogging,” is a distributed infrastructure in which certain application processes or services are managed at the edge of the network by a smart device, while others are still managed in the cloud. Cloud Computing is defined as a group of computers and servers connected together over the Internet to form a network. As more and more organizations adopt the Internet of Things, deploy NFV and SDN solutions, and add geographically dispersed data centers and applications, the need to access large amounts of data more quickly, and locally, keeps growing. Fog computing is essentially a middle layer between the cloud and the hardware that enables more efficient data processing, analysis, and storage by reducing the amount of data that has to be transported to the cloud.
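To make that “middle layer” idea a little more concrete, here’s a minimal sketch (in Python, with made-up names and numbers, not anyone’s actual product) of what “reducing the amount of data transported to the cloud” can look like: the fog layer boils a window of raw sensor readings down to a small summary, and only the summary makes the trip upstream.

```python
# Hypothetical sketch: summarize raw readings at the fog layer so only a
# small digest travels to the cloud. Names and numbers are illustrative.
from statistics import mean

def summarize_window(readings):
    """Reduce a window of raw sensor readings to a compact summary."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": round(mean(readings), 2),
    }

# Pretend an IoT sensor produced 1,000 temperature readings this minute.
raw_readings = [20.0 + (i % 7) * 0.3 for i in range(1000)]

summary = summarize_window(raw_readings)
print(summary)  # only this little dict would be sent upstream
print(f"sent 1 summary instead of {len(raw_readings)} raw readings")
```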
One of the largest drivers towards fog computing is the Internet of Things. In 2016, Business Insider reported that nearly $6 trillion would be spent on IoT solutions over the following five years, and that by 2020 some 34 billion devices would be connected to the Internet, 24 billion of them IoT devices. IoT works best when strategic data, and the decisions based on that data, are processed at the edge of the network rather than adding latency by hauling everything to a central location.
IoT devices require of their host networks increased CPU power and memory at the edge, storage of frequently used and accessed data at the edge, and the ability to send the aggregated and composite data results up to the cloud. Fog computing pushes the routine and mundane data and data processing (think “Data Decision Making”) out closer to the edge (think “User,” such as an IoT device or a grouping of devices) so that the larger cloud (think “mothership”) doesn’t have to be bothered (think “using bandwidth”) with the mundane.
How is this done? With a fog machine! Okay, not really, but there are “fog nodes” that sit near all the IoT devices and then “fog aggregation nodes” that gather the results and send them up to the cloud.
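For the curious, here’s a rough, hand-wavy sketch (Python again, every name hypothetical) of that two-tier arrangement: fog nodes make the routine “data decisions” right next to their devices, and a fog aggregation node gathers their results and ships one composite payload up to the mothership.

```python
# Hypothetical two-tier fog sketch: fog nodes decide locally, an aggregation
# node batches their results and forwards one payload to the cloud.
import json

class FogNode:
    """Sits next to a group of IoT devices and handles routine decisions."""
    def __init__(self, name, alert_threshold):
        self.name = name
        self.alert_threshold = alert_threshold

    def process(self, readings):
        # Routine decision made at the edge: flag anomalies, summarize the rest.
        alerts = [r for r in readings if r > self.alert_threshold]
        return {
            "node": self.name,
            "alerts": alerts,
            "normal_count": len(readings) - len(alerts),
        }

class FogAggregationNode:
    """Gathers results from several fog nodes and forwards one payload."""
    def __init__(self):
        self.results = []

    def collect(self, result):
        self.results.append(result)

    def send_to_cloud(self):
        payload = json.dumps({"batch": self.results})
        print("uplink to cloud:", payload)  # stand-in for the real uplink
        self.results = []

aggregator = FogAggregationNode()
for node in (FogNode("parking-lot-7", 85.0), FogNode("loading-dock-2", 85.0)):
    aggregator.collect(node.process([72.1, 90.4, 73.0]))
aggregator.send_to_cloud()
```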
Wow – flashback!! I’m fresh out of college, talking in terms like “3270 terminal emulation” and “DEC VAX” emulation, with concepts like client and host. Well, in some ways, fog computing is a little like those good ol’ days (without any punch cards, however, or freakin’ missing semicolons causing infinite loops). The difference, however, is the intelligence, flexibility, and scalability of the ‘fog nodes’ on the edge, which can be optimized for performance, application, real-time analytics, and the like – and all in a matter of milliseconds.
Also, because they use the underlying concepts of cloud computing, the edge ‘fog nodes’ still have access to today’s advanced and robust data analytics, vertical and horizontal elasticity, and rapid application introduction. The result is real-time awesomeness without bandwidth traffic jams.
All told, we are getting closer and closer to The Matrix. In reality, early ‘grid’ designs from 20 years ago have come to fruition in many ways with fog computing, and the result is speed and efficiency for us all. So yes, it’s good to ‘be in a fog,’ or, as we say ‘round here, “a fawg.” Fawgin' Awesome!