
Bench Talk for Design Engineers | The Official Blog of Mouser Electronics

Edge Computing and the Internet of Things

Mike Parks

In the first two blogs of this series, Wi-Fi Mesh and the Internet of Things and RESTful APIs and Moving Data Across the Internet of Things, we looked at the challenges and solutions involved in communication among the devices that make up the Internet of Things (IoT). Even with forthcoming 5G networks and the capabilities they offer (faster speeds, a focus on mobility, scalability, and a software-defined standard that allows incremental upgrades), the IoT is poised to swamp these networks with unprecedented volumes of data. Part of the solution to a functional IoT, therefore, is to be judicious about what data is transmitted to the network. If devices can perform more processing on the raw data locally at the endpoint, then less raw data needs to be transmitted. This concept is referred to as edge computing, or, less frequently, fog computing.
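To make the data-reduction idea concrete, here is a minimal sketch of edge preprocessing. The function name `summarize_window` and the four-field payload are illustrative choices, not part of any standard; the point is that an endpoint can collapse a window of raw samples into a few summary statistics locally and transmit only those.

```python
import statistics

def summarize_window(samples):
    """Reduce a window of raw sensor readings to a handful of
    summary statistics so that only these values, rather than
    every individual sample, need to cross the network."""
    return {
        "mean": statistics.fmean(samples),
        "min": min(samples),
        "max": max(samples),
        "count": len(samples),
    }

# 1,000 raw temperature readings collapse into a 4-field payload
# before anything is sent upstream.
raw = [20.0 + 0.01 * i for i in range(1000)]
payload = summarize_window(raw)
```

A real deployment would choose window sizes and statistics to match the application, but the transmit-less-by-computing-locally pattern is the same.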

At its core, edge computing simply means processing raw sensor data as close as possible to the endpoint that generated it, rather than going to the cloud to use the heavy computing capability of high-end servers. Artificial Intelligence (AI) algorithms, an evolving family of software in which Machine Learning (ML) and neural-network-powered Deep Learning (DL) are the leading approaches, will enable much of the innovation needed to do this Herculean data processing locally. These algorithms are computationally hungry, which presents a challenge for embedded systems. Embedded systems have historically prioritized low cost, low power, and small footprint over the memory and processing horsepower these next-generation algorithms must have in order to be effective.
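A rough sense of why these algorithms are computationally hungry: every output of a fully connected neural-network layer is a multiply-accumulate chain over every input. The sketch below (a toy, not any particular framework's API) shows one such layer and counts the operations; even small layers require thousands of multiply-accumulates per inference, and real networks require millions.

```python
def dense_layer(x, weights, biases):
    """One fully connected layer: each output neuron is a dot
    product (a chain of multiply-accumulates) over every input."""
    return [
        sum(w * xi for w, xi in zip(row, x)) + b
        for row, b in zip(weights, biases)
    ]

# A toy 64-input, 32-output layer already needs
# 64 * 32 = 2,048 multiply-accumulates per inference.
inputs, outputs = 64, 32
macs = inputs * outputs
```

That multiply-accumulate count is the workload an embedded processor must absorb, which is exactly where the constraints on cost, power, and footprint start to bite.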

Enter the humble Field Programmable Gate Array (FPGA). FPGAs are not new. In fact, they were invented in the 1980s. The founders of Xilinx brought the first commercially viable FPGA, the XC2064, to market in 1985.

What is changing is that price and performance are reaching levels that make FPGAs attractive even for the low-cost devices powering the endpoints of the IoT. Lower-density FPGAs (i.e., FPGAs with fewer configurable logic blocks) make an attractive option for the IoT for a variety of reasons:


  • FPGAs make parallel processing possible. This capability is ideal for achieving high performance from a neural network.
  • FPGAs offer flexibility because they are reconfigurable. This is crucial because AI/ML/DL algorithms are still very much works in progress and are constantly being improved. Being able to update means even fielded hardware can take advantage of future improvements.
  • Even lower-density FPGAs pack more and more logic and I/O blocks into smaller and smaller footprints, and these improvements come with commensurate cost drops. This is crucial for consumer-oriented IoT devices that must contend with tight profit margins.
  • The energy efficiency of low-density FPGAs is ideal for IoT use cases, which is crucial for battery-powered and other low-power applications. Consider that the latest Android or iOS device in your back pocket can do facial recognition and augmented reality (AR) thanks to these algorithms.
  • FPGAs are getting better design tools, making them easier to use in embedded systems. Empowering engineers to improve the development cycle means a faster time to market. Even the maker-oriented Arduino platform has gotten in on the FPGA market with their recently released MKR Vidor 4000 and a soon-to-be-released cloud-based, block programming interface.
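The first bullet, parallel processing, can be sketched in software. Each output neuron of a layer depends only on the inputs, never on a sibling neuron, so all of them can be evaluated at once; an FPGA does this literally, with one block of logic per neuron running concurrently. The Python below only approximates that with a thread pool (names like `layer_parallel` are illustrative), but it shows the independence that hardware parallelism exploits.

```python
from concurrent.futures import ThreadPoolExecutor

def neuron(row_and_bias, x):
    """Compute a single output neuron: one dot product plus a bias."""
    row, b = row_and_bias
    return sum(w * xi for w, xi in zip(row, x)) + b

def layer_parallel(x, weights, biases):
    """Evaluate every neuron concurrently. On an FPGA each neuron
    would map to its own logic, all firing in the same clock cycles;
    a thread pool is only a software stand-in for that concurrency."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda rb: neuron(rb, x),
                             zip(weights, biases)))

result = layer_parallel([1.0, 2.0], [[3.0, 4.0], [5.0, 6.0]], [0.0, 1.0])
```

A CPU ultimately time-slices these computations; dedicated parallel hardware is what lets an FPGA sustain high neural-network throughput at low clock speeds and low power.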


It should be noted that getting data back to cloud services, which can accumulate and crunch the collective data, still has benefits. Teaching and improving ML and AI algorithms requires access to copious amounts of data, and data is something the IoT is more than happy to deliver. Training AI algorithms also requires the kind of processing horsepower that big iron can still deliver best. But once the improved AI algorithms are implemented as an upgrade to IoT endpoints, the embedded devices become smarter, allowing them to make better decisions when exposed to new scenarios in the real world, just as humans continuously learn and improve their mental skill sets.


Michael Parks, P.E. is the owner of Green Shoe Garage, a custom electronics design studio and technology consultancy located in Southern Maryland. He produces the S.T.E.A.M. Power podcast to help raise public awareness of technical and scientific matters. Michael is also a licensed Professional Engineer in the state of Maryland and holds a Master’s degree in systems engineering from Johns Hopkins University.
