
Bench Talk for Design Engineers | The Official Blog of Mouser Electronics

Distributed Analytics Beyond the Cloud

By Charles Byers

Distributed Analytics theme image (Source: everything possible/

Analytics is a very general term for correlating and digesting raw data to produce more useful results. Analytics algorithms can be as simple as data reduction or averaging on a stream of sensor readings, or as complex as the most sophisticated artificial intelligence and machine learning (AI/ML) systems. Today, analytics is commonly performed in the cloud because the cloud is the most scalable and cost-effective place to run it. In the future, however, analytics will be increasingly distributed across the cloud, edge computing, and endpoint devices to take advantage of the latency, network bandwidth, security, and reliability benefits those tiers offer. Here, we'll discuss some of the architectures and tradeoffs associated with distributing analytics beyond the boundaries of the traditional cloud.

How Distributed Analytics Adds Value

Simple analytics involve data reduction, correlation, and averaging, producing an output data stream much smaller than the input data. Consider the system that supplies fresh water to a large building. It might be valuable to know the pressures and flows at various points in the system to optimize the pumps and monitor consumption. This could involve an array of pressure and flow sensors spread around the distribution piping. Software periodically interrogates the sensors, adjusts the pump settings, and creates a consumption report for the building managers.

But the raw readings from the sensors could be misleading, for example, because of a momentary pressure drop when a fixture is flushed. Analytics algorithms can average the readings from a given sensor over time, then combine and correlate the readings from multiple sensors to create a more accurate and more useful picture of the conditions in the pipes. All of these readings could be sent to analytics based in the cloud, but it would be a much more efficient architecture if the sensors did some of the averaging themselves and local edge computers did the correlation and reporting. That's distributed analytics, and it can improve the efficiency, accuracy, and cost of many analytics systems.
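The water-system example above can be sketched in a few lines of Python. This is a minimal illustration, not a real building-management system: the sensor names, window size, and report fields are all hypothetical. The key point is the division of labor, with averaging on the sensors and correlation at the edge, so each sensor ships one smoothed value instead of its full raw stream.

```python
from statistics import mean

def sensor_average(raw_readings, window=5):
    """On-sensor analytics: smooth out momentary spikes (e.g., a fixture
    flushing) by averaging the last `window` raw readings."""
    return mean(raw_readings[-window:])

def edge_correlate(sensor_averages):
    """Edge analytics: combine the pre-averaged readings from many sensors
    into one summary for the building managers' report."""
    values = list(sensor_averages.values())
    return {
        "min_pressure": min(values),
        "mean_pressure": mean(values),
        "sensors": len(values),
    }

# Hypothetical raw streams; 30.1 is a momentary dip from a flushed fixture.
raw = {
    "riser_1": [52.0, 51.8, 30.1, 51.9, 52.2],
    "riser_2": [49.5, 49.7, 49.6, 49.4, 49.8],
}
averaged = {name: sensor_average(r) for name, r in raw.items()}
report = edge_correlate(averaged)
```

Note that the averaging already dampens the transient dip in `riser_1`, so the edge-level report is built from steadier numbers than the raw stream would provide.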

Analytics becomes more complicated when AI/ML techniques are employed. AI/ML usually operates in two phases:

  • A model-building phase, where large amounts of training data are distilled to produce a model for the AI/ML system
  • An inference phase, where that model is applied to data flowing through a system to generate the desired results, often in real time

In today's systems, the models are almost always built in large server farms or the cloud, often as an offline process. The resulting AI/ML models are then packaged and shipped to the systems that run the inference phase on live data, generating the desired results. The inference phase can run in the cloud, but it has recently been moving toward the edge to improve latency, network bandwidth, reliability, and security. Tradeoffs are worth considering when deciding which level of compute resource to use for each phase.
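The build-then-ship workflow can be sketched as follows. This is a toy stand-in, assuming a "model" that is nothing more than a learned threshold serialized as JSON; a real pipeline would use a proper training framework and model format, but the separation of roles is the same: the cloud distills training data into a compact model, and the edge loads that packaged model and applies it to live data.

```python
import json

def build_model(training_readings):
    """'Cloud' side: distill a batch of training data into a tiny model
    (here, just a learned threshold)."""
    threshold = sum(training_readings) / len(training_readings)
    return {"version": 1, "threshold": threshold}

def package_model(model):
    """Serialize the model so it can be shipped to edge devices."""
    return json.dumps(model)

def infer(packed_model, live_reading):
    """'Edge' side: unpack the shipped model and run inference on one
    live reading."""
    model = json.loads(packed_model)
    return "anomaly" if live_reading > model["threshold"] else "normal"

packed = package_model(build_model([10.0, 12.0, 11.0, 9.0]))
result = infer(packed, 25.0)  # a live value well above the learned threshold
```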

Inference Phase of AI/ML

The inference phase of AI/ML is relatively easy to distribute, either across multiple peer-level processors or up and down a hierarchy of processing layers. Because the models are pre-computed, the data the AI/ML algorithms operate on can be split across multiple processors and handled in parallel. Splitting the workload between peer-level processors provides capacity, performance, and scale advantages, because more compute resources can be brought to bear as the workload increases. It can also improve system reliability: if one processor fails, adjacent processors are still available to complete the work. Inference can also be split between multiple levels of a hierarchy, with different parts of the algorithm running at different levels of the processing hierarchy. This allows the AI/ML algorithm to be partitioned in logical ways, with each level performing the subset of the algorithm it can run most efficiently. For example, in a video analytics AI/ML system, the intelligence in the camera could perform adaptive contrast enhancement, hand off the result to edge computers for feature extraction, send that to neighborhood data centers for object recognition, and finally let the cloud perform high-level functions such as threat detection or heat-map generation. This can be a highly efficient partitioning.
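The peer-level case can be sketched directly: because the model is pre-computed and read-only during inference, a batch of inputs can simply be fanned out across workers. The sketch below, which assumes a trivial "model" that is just a weight vector scored by dot product, uses Python's `concurrent.futures` to stand in for multiple peer processors; in a real deployment each worker would be a separate device or node.

```python
from concurrent.futures import ThreadPoolExecutor

def run_inference(model, frame):
    """Stand-in for one inference call: score one 'frame' against a
    pre-computed model (here, a plain dot product)."""
    return sum(w * x for w, x in zip(model, frame))

def parallel_inference(model, frames, workers=4):
    """Split a batch of frames across peer-level workers; each runs the
    same pre-computed model on its share of the data."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: run_inference(model, f), frames))

model = [0.5, 1.5]                       # hypothetical pre-computed weights
frames = [[1, 1], [2, 0], [0, 2]]        # batch of inputs to score
scores = parallel_inference(model, frames)
```

If one worker drops out, the remaining workers can absorb its share of the batch, which is the reliability advantage the text describes.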

Learning Phase of AI/ML Algorithms

The learning phase of AI/ML algorithms is harder to distribute. The problem is context size. To prepare a model, the AI/ML system takes large batches of training data and digests them with various complex learning-phase algorithms to generate a model that is comparatively easy to execute in the inference phase. If only a portion of the training data is available on a given compute node, the algorithms will have trouble generalizing the model. That is why training is most often done in the cloud, where memory and storage are virtually unlimited. However, certain scenarios require the training algorithms to be distributed across multiple peer-level compute nodes or up and down the cloud-to-edge hierarchy. In particular, learning at the edge lets the learning process collect large volumes of training data from nearby sensors and act on it without cloud involvement, which improves latency, reliability, security, and network bandwidth. Advanced distributed-learning algorithms are under development to address these challenges.
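One family of distributed-learning approaches sidesteps the context-size problem by keeping raw data local and exchanging only model updates, in the style of federated averaging. The sketch below is a deliberately simplified illustration of that idea, assuming each node's "model" is just the mean of its local data: nodes fit locally, then a coordinator combines the local models weighted by how much data each node saw.

```python
def local_update(local_data):
    """Each edge node fits a tiny local 'model' (here, the mean of its own
    data) without shipping the raw data anywhere."""
    return sum(local_data) / len(local_data), len(local_data)

def aggregate(local_results):
    """Federated-averaging-style step: combine local models, weighted by
    each node's sample count, to recover the centralized result."""
    total = sum(n for _, n in local_results)
    return sum(m * n for m, n in local_results) / total

node_a = [1.0, 2.0, 3.0]   # data that never leaves node A
node_b = [10.0]            # data that never leaves node B
global_model = aggregate([local_update(node_a), local_update(node_b)])
```

For this simple statistic the weighted combination exactly matches the centralized answer; for real AI/ML models the aggregated result is an approximation, which is one reason distributed-learning algorithms remain an active area of development.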


AI/ML is an important future capability of nearly all electronic systems. Understanding the options for how the inference and training capabilities of these systems can be partitioned across a hierarchy of compute resources is key to our future success.


CHARLES C. BYERS is Associate Chief Technology Officer of the Industrial Internet Consortium, now incorporating OpenFog. He works on the architecture and implementation of edge-fog computing systems, common platforms, media processing systems, and the Internet of Things. Previously, he was a Principal Engineer and Platform Architect with Cisco, and a Bell Labs Fellow at Alcatel-Lucent. During his three decades in the telecommunications networking industry, he has made significant contributions in areas including voice switching, broadband access, converged networks, VoIP, multimedia, video, modular platforms, edge-fog computing and IoT. He has also been a leader in several standards bodies, including serving as CTO for the Industrial Internet Consortium and OpenFog Consortium, and was a founding member of PICMG's AdvancedTCA, AdvancedMC, and MicroTCA subcommittees.

Mr. Byers received his B.S. in Electrical and Computer Engineering and an M.S. in Electrical Engineering from the University of Wisconsin, Madison. In his spare time, he likes travel, cooking, bicycling, and tinkering in his workshop. He holds over 80 US patents.
