AI Moves to the Network’s Edge

Harnessing increased compute power to address AI applications requires storage, cloud and infrastructure planning.

As edge computing technology evolves, the number of functions performed on the periphery of the network also grows. Paralleling—and enabling—this trend, edge system providers are starting to incorporate an increasing variety of artificial intelligence (AI) technologies into their offerings. These developments represent a departure from early implementations, where the role of edge computing was mostly to ingest, store, filter and send data to cloud systems. 

ARM’s machine learning processor architecture aims to provide three key features: efficiency of convolutional computations, efficient data movement and programmability. Image courtesy of ARM.

Today, edge computing systems enabled by AI technologies perform a growing number of advanced analytic tasks, reducing the need to forward data to the cloud for analysis. This latest incarnation of edge computing reduces latency, power consumption and cost while improving reliability and data security. The infusion of AI into edge devices promises a new class of applications across a broad spectrum of industries, redefining the role of the cloud in the process.

Forces Driving Change

A number of factors have come together to bring about these computing changes. Chief among these are the emergence of the internet of things (IoT) and the ever-growing volume of data that its constituent devices generate. Together, these factors have increased demands for local intelligence that can perform the complex analysis required to extract full value from the IoT.

At the same time, technological advances, accompanied by more affordable pricing, have added momentum to this transition. As a result, edge system designers are incorporating more and more compute, store and analytic capabilities into devices residing close to the data sources.

Running AI on Edge Computing Systems

Users can see the advances in compute technology by looking at the specialized hardware introduced by leading chipmakers. These include processors ranging from ARM’s Mali-G76 graphics processing unit (GPU) and NXP Semiconductors’ i.MX RT1050 crossover processor to the more recent CEVA NeuPro family of AI processors. Armed with improvements in compute capability, form factor and energy efficiency, these specialized processing units can now support AI-enabled edge-based applications, such as natural language processing, intelligent sensor data interpretation and secure device provisioning.

In gauging the extent of chipmakers’ ability to meet compute resource demands for supporting edge AI, it is important to remember the diversity required to serve this environment. No single architecture or processor type can support every AI application or workload. As a result, current efforts to deploy AI applications on the edge sometimes must tap the strength of both local and cloud-based compute resources.

“Some applications utilize vast amounts of data, and their sheer scale means that moving them to the edge is not currently a logical step,” says Dennis Laudick, vice president of marketing for the machine learning group at ARM. “Big data use cases typically require big compute power, so for some tasks, it doesn’t make sense to run them at the edge.”

Processor Juggling Act

The challenges facing design engineers implementing AI technologies at the network’s edge touch on numerous issues. A key factor is the selection of processing resources to meet the application requirements.

Here, the designer’s job is complicated by the fact that AI and machine learning (ML) processing requirements vary significantly according to the network and workload. As a result, engineers must juggle an assortment of factors when selecting the processing unit.

The designer first must identify the use cases served by the processing unit; determine which neural network works best in those situations; choose the most appropriate hardware; and define the desired performance level.

Designers must also come to terms with space constraints and power consumption issues. Because space is often in short supply and edge devices tend to be battery-powered, anything that consumes too much of either is a non-starter. These limitations, however, do not preclude AI and ML functions.

“It’s more than possible to deploy neural networks on low-power processors found in, say, always-on systems, such as those based on microcontrollers,” says Laudick. “This gives you access to a number of AI/ML workloads—from speech recognition to image classification—within limited memory, compute resources and power budgets.”
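
As a rough illustration of what such a deployment can look like in practice, the sketch below runs a pre-trained, quantized model on a Linux-class edge device using the lightweight tflite_runtime interpreter. (True microcontroller-class targets typically use TensorFlow Lite for Microcontrollers in C/C++ instead.) The model file name is a placeholder, and the example assumes the model was trained and quantized elsewhere.

```python
# Minimal sketch: on-device inference with the interpreter-only tflite_runtime
# package. "model_int8.tflite" is a placeholder for a pre-trained, quantized model.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Fabricate one input frame with the shape and dtype the model expects.
frame = np.zeros(input_details["shape"], dtype=input_details["dtype"])

interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])
print("Top class:", int(np.argmax(scores)))
```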

As complex as all these issues are, the decision-making process is further confounded by the variety of hardware options available to the designer. “Selecting the right solution for your application can be a series of tradeoffs—from microcontroller units for cost- and power-constrained embedded IoT systems and CPUs for moderate performance with general-purpose programmability, to GPUs for faster performance with graphics-intensive applications and neural processing units for intensive ML processing,” says Laudick. “Selecting a solution that provides optimal performance—while matching power and space constraints—can appear complex.”

To meet these challenges, some designers turn to heterogeneous platforms. These systems combine more than one kind of processor or core—such as CPUs, GPUs, digital signal processors, field-programmable gate arrays (FPGAs) and neural processing units. This approach aims to benefit from the best characteristics of each component, grouping dissimilar coprocessors that have specialized processing capabilities to handle particular tasks. Equally important, it enables the system to support a variety of AI workloads, delivering optimal performance and reduced power consumption for each task.
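
In software, this heterogeneity often surfaces as hardware “delegates” that offload parts of a neural network to a GPU or NPU when one is present and fall back to the CPU otherwise. The sketch below illustrates the pattern with TensorFlow Lite; the delegate library name is platform-specific and shown here only as a hypothetical example.

```python
# Illustrative sketch: use an accelerator delegate if it loads, otherwise run
# on the CPU. "libnpu_delegate.so" is a hypothetical, platform-specific name.
import tflite_runtime.interpreter as tflite

def make_interpreter(model_path, delegate_path=None):
    if delegate_path:
        try:
            delegate = tflite.load_delegate(delegate_path)
            return tflite.Interpreter(model_path=model_path,
                                      experimental_delegates=[delegate])
        except (OSError, ValueError):
            pass  # Accelerator library not present; fall back to CPU execution.
    return tflite.Interpreter(model_path=model_path)

interpreter = make_interpreter("model_int8.tflite", "libnpu_delegate.so")
interpreter.allocate_tensors()
```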

Leveraging New Tools and Models

In addition to seeing the birth of new breeds of processors, the industry has witnessed the introduction of tools and model libraries from open source communities that aim to ease the pain of developing AI-enabled edge systems. Major semiconductor vendors have also added features to their products that support the incorporation of AI capabilities.

NXP’s i.MX 8M Mini heterogeneous applications processor combines CPU and GPU processing, drawing on the characteristics of different processing units to support a greater variety of artificial intelligence workloads. Image courtesy of NXP Semiconductors.

“Leading edge computing systems support importing models developed through various modeling frameworks and tools, such as Spark ML and R Studio, through the use of Predictive Model Markup Language (PMML),” says Keith Higgins, vice president of marketing at FogHorn Systems. “PMML leverages any existing data science development investment already spent by the customer.”
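
For readers unfamiliar with PMML, the sketch below shows one common way a data science team might export a scikit-learn model to a .pmml file that a PMML-aware edge runtime could then import. It uses the open source sklearn2pmml package, which relies on a local Java installation; the synthetic data, model choice and file name are illustrative assumptions only.

```python
# Illustrative sketch: export a scikit-learn model to PMML with sklearn2pmml.
# Requires the sklearn2pmml package and a local Java runtime (JPMML converter).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # e.g., four sensor readings per sample
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # e.g., a pass/fail label

pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=4))])
pipeline.fit(X, y)
sklearn2pmml(pipeline, "edge_model.pmml")  # file a PMML-capable edge runtime can load
```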

Although these developments further the cause of AI technologies on edge systems, the new tools and models take the designer only so far. “The challenge becomes training the models,” says Markus Levy, director of AI and machine learning technologies at NXP Semiconductors. “While models and tools exist on the market today—open source and commercial—leveraging new tools and models and training a device to work within parameters to do what you want it to do will be the challenge. In other words, system developers must do the final stages of training based on the data that is totally relevant to their own application.”

Another challenge encountered today is determining the compute resources required to create and train the ML model. To meet these challenges, designers sometimes must rely on the power of the cloud.

“The cloud plays a critical role in ML model creation and training, especially for deep learning models because significant compute resources are required,” says Higgins. “Once the model is trained in the cloud, it can be ‘edgified’ and pushed to the edge. This enables an iterative, closed-loop, edge-to-cloud ML approach, where edge inferences are continually being sent to the cloud to tune the model further, and updated models are pushed back to the edge.”
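
As a concrete, if simplified, illustration of that “edgify” step, the sketch below trains a small Keras model as a stand-in for the cloud-side work, then converts it to a quantized TensorFlow Lite file of the sort that could be pushed down to edge devices. The tiny architecture and random training data are placeholder assumptions.

```python
# Illustrative sketch of the cloud-to-edge flow: train a model, then convert it
# to a compact, quantized TensorFlow Lite artifact suitable for edge deployment.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 16).astype("float32")  # stand-in training data
y = (X.sum(axis=1) > 8.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, verbose=0)            # "cloud" training step

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("edge_model.tflite", "wb") as f:      # artifact pushed to edge devices
    f.write(tflite_model)
```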

Fast Storage for Fast Processors

Discussions of deploying AI on edge systems almost immediately gravitate to the demands on, and capabilities of, processing systems. Unfortunately, these discussions all too often overshadow the importance of storage in AI applications. Remember that unless storage systems are optimized to keep pace with processing units, the exchange of data between the two systems becomes a bottleneck. This drives home the importance of selecting just the right combination of compute, storage and memory technologies.

In the past, AI technologies like machine learning systems tended to rely on traditional compute-storage architectures. Current systems, however—empowered by GPUs, FPGAs and neural processing units (NPUs)—can process data much faster. At the same time, the data sets used for ML training have grown larger as AI technology has evolved.

To meet these new challenges, designers have increasingly relied on flash storage. Flash offers low latency and high throughput, and many industry watchers believe that the technology holds great promise for AI storage.

Specifically, flash boasts latency measured in microseconds, as opposed to disk array latencies, which often fall in the millisecond range. Flash chips also take up much less space because they can be packed together much more densely than rotating drives. Flash also consumes less power, which can make a big difference in cost at large scales.

Designers can do one more thing to align compute and storage operating speeds: implement Non-Volatile Memory Express, or NVMe. This open logical device interface specification accelerates the transfer of data between compute resources and flash-based solid-state drives over a computer’s high-speed Peripheral Component Interconnect Express (PCIe) bus. NVMe promises higher I/O operations per second, delivers performance across multiple cores for quick access to critical data and leaves headroom to scale as performance demands grow.
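
One quick way to see whether storage is keeping pace with compute is to measure random-read latency directly on the drive in question. The sketch below is a rough, Unix-style microbenchmark; it does not bypass the operating system cache, so treat the result only as a first approximation. The file path, block size and read count are placeholder assumptions.

```python
# Rough sketch: estimate average random-read latency by reading 4 KiB blocks at
# random offsets from a large file on the drive under test (Unix-style APIs).
# "testfile.bin" is a placeholder path; OS caching is not bypassed here.
import os
import random
import time

PATH = "testfile.bin"
BLOCK = 4096
READS = 1000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
start = time.perf_counter()
for _ in range(READS):
    offset = random.randrange(0, max(1, size - BLOCK))
    os.pread(fd, BLOCK, offset)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"avg latency: {elapsed / READS * 1e6:.1f} microseconds per 4 KiB read")
```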

“NVMe isn’t ubiquitous yet, but it’s on its way,” says Bruce Kornfeld, chief marketing officer and senior vice president of product management at StorMagic. “Edge sites with solid-state disks using NVMe will dramatically increase the scale of AI, machine learning and big data applications able to run at edge sites.”

Edge AI Casts a Long Shadow

There are countless examples of AI applications on the network’s edge, and companies come up with new ways of applying the technologies every day. The technologies’ influence is most readily recognized in automotive, mobile, healthcare and smart home systems. In these cases, edge AI enables the control and interface functions that have captured people’s imagination, such as facial recognition, voice activation, natural language processing and gesture recognition.

The impact of edge AI, however, is even broader. One area likely to reap significant benefits from this technology is the manufacturing sector. 

“AI can address many challenges in manufacturing, particularly in production lines, with identifying defects,” says Tom Bradicich, vice president and general manager of servers and IoT systems at Hewlett Packard Enterprise. “Throughout a production line, there could be video cameras, sensors and beacons—all collecting data and monitoring the action. Adding AI to this situation would enable that production line to identify a defective product at the earliest point, and to communicate with other components of the machine and with the people operating the machinery to address the issue. As more and more products are turned out, technologies like machine learning can be applied to get smarter and recognize even more minute errors in products to improve overall quality and consistency.”

One example: AI in edge computing systems can help predict anomalies on the manufacturing floor. These anomalies can include imminent failure, security breaches or simply suboptimal performance. For instance, vibration data could be monitored and analyzed to determine when a rotating turbine needs to be serviced.
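
A highly simplified sketch of that kind of vibration monitoring appears below: it computes a rolling RMS level over an accelerometer signal and flags windows that drift well above a healthy baseline. Real deployments would use richer features (spectral bands, trained models) and calibrated thresholds; the synthetic signal, sample rate and 3x-baseline threshold here are invented for illustration.

```python
# Minimal sketch: flag windows of a vibration signal whose RMS level rises well
# above a baseline, as a crude stand-in for predictive-maintenance analytics.
import numpy as np

fs = 1000                          # sample rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)       # one minute of data
signal = 0.1 * np.random.randn(t.size)                          # baseline vibration
signal[40_000:] += 0.5 * np.sin(2 * np.pi * 120 * t[40_000:])   # simulated fault onset

window = fs                        # 1-second analysis windows
n_windows = t.size // window
rms = np.sqrt(np.mean(signal[: n_windows * window]
                      .reshape(n_windows, window) ** 2, axis=1))

baseline = np.median(rms[:10])               # assume the first 10 s are healthy
alerts = np.where(rms > 3 * baseline)[0]     # flag windows well above baseline
print("service alert in windows:", alerts.tolist())
```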

Edge AI also opens the door for better security and privacy protection. “Localizing processing, functions and capabilities also protects private credentials, mutual device authentication and secure device management that spans applications into smart governance, infrastructure and beyond,” says NXP’s Levy.

But the migration of AI to the network’s edge is just getting started. “Who knows what the future holds,” says ARM’s Laudick. “We’re witnessing a dramatic shift in the capabilities of compute processing power and a huge amount of activity in adapting AI algorithms and technology to more power-efficient devices, so more tasks than ever before can be deployed at the edge of the network.”

About the Author

Tom Kevan
Tom Kevan is a freelance writer/editor specializing in engineering and communications technology.

