The web is always evolving, growing ever more complex and powerful as the industry adds new technologies and solutions. Consumer data processing and storage demands are higher than ever before, and web architectures must pivot and innovate to match these demands.
One such innovation is the concept of fog computing. Often conflated with edge computing, fog computing is arguably the next frontier for accelerating IoT connections, offering increased speed, performance, and a bevy of other benefits.
Fog computing is set to bring about a revolution in how data is transferred between IoT devices: greater flexibility in node placement, bandwidth allocation, and co-processing means that items once considered in the “futurist” camp are now theoretically possible. Pushing VR or AR functionality, for instance, quickly becomes far more feasible when more processing power and data bandwidth are available to devices in closer proximity, with lower latency. This is, of course, just a single possibility for this new technology, which is why there is so much talk about fog implementation in the API infosphere.
Key to implementing fog computing, though, is understanding it. Today, we discuss just that. We’ll define fog computing, highlight both its pros and cons, and discuss the difference between fog computing and the intimately related concept, edge computing.
Tracking the Computing Evolution to the Fog
First and foremost, what is fog computing? In the simplest terms, fog computing is a hybrid of classical architecture design concepts and modern cloud architecture.
Classical: Centralized Servers, Simple Client-Server Communication
The classical architecture paradigm is this: centralized servers receive requests from remote clients, perform calculations and other functions, and then pass the results back to the clients. Interpretation falls to the device, while transmission is handled by third parties.
This is how computing worked in a server environment for a long time, and for good reason. Centralizing computational tasks adds a level of security, leaving data transmission as the only weak point. The system is exceedingly simple to set up and manage, and thus produces low overhead.
There are some issues inherent in this, though. Depending entirely on centralized servers breeds inefficiency. Data must be stored either on the device or on the server, and all functionality is relegated to one or the other, with little oversight and control.
Cloud Computing: Shared, Virtual Instances
Cloud computing introduced a new paradigm — instead of centralizing these resources and having a strictly delineated relationship, cloud computing establishes virtualized instances where remote clients can tie into their own custom environments for processing. One way to consider cloud computing is to think of it as “using someone else’s computer.” Data can be stored “in the cloud” on remote storage, and accessed when needed for hybrid computations on both the device and the server itself.
This sounds great, but there are some obvious issues. The system is far more complex, and thus introduces greater overhead than any classically designed system. Instanced environments mean the centralized servers must be larger, with access to ever-increasing amounts of storage. While this means greater scalability, it introduces a feedback loop of ever-greater dependence on data center solutions.
Edge Computing: Closer Devices, Shared Processing
Edge computing is a concept in which all devices on the network share, in some way, the computational load of the network as a whole. The idea is that by bringing resources closer to the consumer and spreading computational effort across all related devices, you push computation out to the “edge” of the network, where the data actually originates.
Fog Computing: Niche, Local Storage and Processing
Fog computing, then, is a hybrid of the cloud and classical approaches. Cloud functionality, in the form of virtualized instances, is added to the classical architecture, bringing the actual processing and manipulation to closer nodes on the network. Fog computing is concerned mostly with proximity: ensuring data is processed, stored, and manipulated closer to the actual requester.
What fog computing does is essentially run a relay race: process data closer to the requester, leverage resources where available, and optimize this functionality so that instances and processes are spun up based on the location of the data request, not the location of the centralized server.
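The proximity logic described above can be sketched as a simple nearest-node lookup. Everything here is an illustrative assumption rather than a real fog deployment: the node names, their coordinates, and the routing rule are invented for the example.

```python
import math

# Hypothetical fog nodes with (latitude, longitude) positions.
FOG_NODES = {
    "node-east": (40.71, -74.00),
    "node-central": (41.88, -87.63),
    "node-west": (37.77, -122.42),
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(h))

def nearest_node(client_position):
    """Route a request to the fog node closest to the requester."""
    return min(FOG_NODES,
               key=lambda name: haversine_km(FOG_NODES[name], client_position))

# A client near San Jose is served by the west-coast node.
print(nearest_node((37.33, -121.89)))  # node-west
```

A real fog scheduler would weigh load and link quality alongside raw distance, but the core idea, picking a node by proximity to the request rather than by a fixed server address, is the same.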
Fog computing differs from edge computing in a subtle but important way, and it has everything to do with where the computational power is located. In fog computing, computational power is centered on fog “nodes” and IoT gateways; in edge computing, the entire network itself functions as a computational powerhouse, distributing the load among all devices and the resultant automation controllers.
Fog Computing Pros
Fog computing has some pretty obvious benefits. As Cisco’s research paper argues, moving data processing and delivery closer to the requesting devices, expanding the geographic network footprint, and operating in a node-based fog architecture rather than a cloud or traditional system can dramatically improve response times. Proximity is a huge part of network efficiency, and the effect of moving data towards the consumer along a network path through such fog nodes simply can’t be overstated. Increased network efficiency through proximity marries many of the benefits of the classic architecture while avoiding its drawbacks.
As part of this movement towards network nodes, congestion on the network is likewise drastically reduced. In the classical architectural paradigm, all processing is demanded of a single centralized system. This means that clients making requests often have to wait in line for their turn, depending on resources that are constantly in high demand. In all but a few specific use cases, classic architectures are glorified bottlenecks, with data trapped in a holding pattern. Fog computing gets around this by distributing that functionality across a much, much wider space.
Cloud congestion can be mitigated to a degree, but even with multiplexing and other load balancing solutions, network response as a whole still suffers. With fog computing, you largely avoid this by spreading the responsibility across multiple nodes. You still have bottlenecks, yes, but you’re taking the concept of load balancing to its extreme, resulting in quicker data delivery and better network efficiency.
Fog computing also boasts some impressive scaling abilities. Working with nodes means that, as more capacity is needed, dormant nodes can be spun up and utilized based on proximity to the data request. Part and parcel of this design is a boost in security: data is encoded as it moves towards the edge of the network, and shifting node relationships mean your attack surface is constantly changing. The result is a secure encoding process with no single, clear avenue of attack.
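The spin-up behavior described above might be sketched like this. The load threshold, the node list, and the activation rule are all invented for illustration; a real orchestrator would also factor in proximity and health checks.

```python
ACTIVE_CAPACITY = 0.8  # activate a dormant node once active nodes pass 80% load

nodes = [
    {"name": "fog-a", "active": True,  "load": 0.85},
    {"name": "fog-b", "active": True,  "load": 0.90},
    {"name": "fog-c", "active": False, "load": 0.0},   # dormant
]

def assign_request(nodes):
    """Return the node that should serve the next request, spinning up a
    dormant node if every active node is over capacity."""
    active = [n for n in nodes if n["active"]]
    least_loaded = min(active, key=lambda n: n["load"])
    if least_loaded["load"] < ACTIVE_CAPACITY:
        return least_loaded
    for n in nodes:
        if not n["active"]:
            n["active"] = True  # spin up the dormant node
            return n
    return least_loaded  # no dormant capacity left; fall back

print(assign_request(nodes)["name"])  # fog-c
```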
Fog Computing Cons
Not everything about fog computing is perfect, though. The biggest drawback is that fog computing adds significant complexity to a network, and with it real operational overhead in business terms. This effectively means that, unless you actually need fog computing, implementing it can leave you worse off.
Additionally, implementing fog computing introduces an ever-increasing number of points of failure. While the constant change benefits security in one sense, it does nothing for maintenance and predictability. In traditional solutions, you have a single point of failure, with centralized effort to repair, maintain, and identify potential issues. By spreading out the computational load, you spread out that effort and responsibility as well, which can strain the entire process.
This ultimately means a significant potential for loss of privacy is baked into fog computing by its very nature. While there are certainly ways to handle this, it’s something that must be kept in mind during adoption.
That’s not to say fog computing isn’t a great solution; it’s simply to say that this is very much a case of “if you think you don’t need this, you probably don’t”.
The Role of APIs in Fog Computing
What, then, is the role of the API in this grand scheme? APIs have always been the go-between connecting consumers to centralized systems, or to nodes on a decentralized system, and this is just as true in fog computing. Perhaps more than in any other architecture, APIs are needed to manage and optimize fog computing nodes, and they play a vital role in developing the backbone that allows such a system to function.
Each node is effectively an endpoint accessible by the client, all managed by a greater system. In this regard, APIs can almost be considered the same as the physical elements of a network, acting as gateways and switches to manage the flow of data and assign proper node handling.
Consider fog computing like a giant city. Each neighborhood has a variety of stores, but where you live will determine the store you visit. APIs are the roads on which you are driving, creating pathways for those who live in the city and establishing traffic flows for optimal transport. They’re the stop lights, managing traffic and ensuring congestion is kept to a minimum. And, most importantly, they’re the rules by which you drive, ensuring proper methodologies and practices are adhered to by those using the system.
APIs don’t just play a mechanical role, of course – they’re also responsible for the optimization of traffic. Proper API structuring with sensible endpoints, load balancing, location verification, and basic authentication/authorization will result in drastic increases in efficiency when it comes to node navigation, and as the fog network grows more complex, this role will only increase.
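As a toy illustration of the gateway role described above, here is a minimal dispatch sketch. The endpoint name, token set, and handler are all hypothetical; a production fog gateway would sit behind HTTPS with real authentication and a registry of live nodes.

```python
# Tokens the gateway accepts; purely illustrative.
VALID_TOKENS = {"demo-token"}

def telemetry_handler(payload):
    """Stand-in for a fog node that stores incoming sensor data."""
    return {"status": "stored", "bytes": len(payload)}

# Route table: the "roads" mapping endpoints to node handlers.
ROUTES = {"/telemetry": telemetry_handler}

def handle_request(path, token, payload):
    if token not in VALID_TOKENS:
        return {"error": "unauthorized"}        # basic authentication check
    handler = ROUTES.get(path)
    if handler is None:
        return {"error": "no such endpoint"}    # no road leads there
    return handler(payload)                     # dispatch to the node

print(handle_request("/telemetry", "demo-token", b"sensor-data"))
# {'status': 'stored', 'bytes': 11}
```

Even in this stripped-down form, the gateway performs the three duties the article assigns to APIs: verifying who may drive, choosing the road, and delivering the traffic to the right node.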
The cloud is predicted to maintain its market share for some time yet, so the movement to the fog will occur gradually. As systems become more complex and demand more than the cloud can provide, however, fog computing and the APIs in that space may well replace the cloud as the go-to solution.
Potential Use Cases
Fog computing is very powerful, but as said earlier, utilizing fog computing when there’s no real use case can be a burden rather than a positive. To that end, there are a few situations in which fog computing absolutely makes sense.
Principally speaking, fog computing’s major use case, and arguably the entire purpose of its creation, is increased data processing and delivery efficacy. By moving these to fog nodes, you improve the network dramatically, and these are qualities absolutely required for many traffic-heavy applications. Swarm computing, AI, multimedia rendering and encoding: all of these are computation- and transit-heavy, and thus can benefit greatly from fog computing.
With an increase in efficacy also comes an extension in functionality, specifically in the Internet of Things space. Because the IoT often involves hundreds of devices in various locations being interacted with for various purposes on a wide network, tying them together in more localized clusters can provide more efficient data transfer and massively improved speed. This type of communication makes high-data communication between mobile phones, wearable health monitoring devices, Bluetooth vehicle dashes, and augmented reality devices not only feasible, but more powerful. Eliminating the latency, high overhead, and device-dependent processing of traditional architectures makes all of this theoretically possible.
Some have compared fog computing to a type of neural network – nodes upon nodes, tying into a powerful decentralized computational system, sharing data in the most logical method possible. The implications are massive, and this is part of why fog computing is going to play such a massive role in these burgeoning fields.
As a side effect of being intimately tied to work performed by IT giant Cisco, there are some rather huge players in the industry backing this new standard. Cisco has been instrumental in establishing the fundamental concepts of edge computing, and by extension fog computing, and these efforts have resulted in some movement towards an open standards framework.
This open standards framework has also been heavily pushed by the OpenFog Consortium, a so-called “public-private ecosystem” whose stated goal is to “accelerate the adoption of fog.” This organization has some serious pedigree behind it, as well: the OpenFog Consortium was founded in November of 2015 by ARM, Cisco, Dell, Intel, Microsoft, and the Princeton University Edge Computing Laboratory.
Fog computing isn’t just an Internet thing, either; the concept has serious real-world implications as far reaching as the United States Navy. SPAWAR, a US Navy division, is using the concepts described here to create a scalable system they term a Disruption Tolerant Mesh Network. What this means, in layman’s terms, is a network that can resist intrusion and attack by distributing access to mobile, static, and strategic resources using a node system. Think of it as a hydra: cut off one head and more grow back; take down a node and others will respond.
This is essentially how many experts think smart drone swarms will behave in the future, as well. Distributing processing amongst nodes and allowing each node to take on proximity-based authority enables small nano-drones to swarm, with potential uses including pollination of threatened flowering plants, mosquito and other insect control, and even missile defense systems with high-altitude micro swarms capable of acting as a sort of mobile chaff.
Part of the problem with fog computing, though, is that it’s still a very nascent technology. Just like cloud computing before it, it has a long way to go before it can fully reach its potential in terms of both functionality and implementation. It’s novel and innovative, and it has the potential to be truly disruptive, but more planning, research, and real-world testing is needed before full commitment can be advocated.