This paper will discuss how deploying artificial intelligence at the edge can improve the efficiency and cost-effectiveness of IoT implementations.
Artificial intelligence (AI) might feel far away, but many of us experience it every day in applications like speech-to-text, virtual assistants, or fingerprint recognition on smartphones. In IoT applications, AI capabilities help identify patterns and detect variations in data from edge devices carrying sensors for environmental parameters like temperature or pressure.
Traditionally, simple embedded edge devices collect this data from sensors in the application environment and stream it to AI systems built on cloud infrastructure to perform analytics and draw inferences. Yet as the need for real-time decision making in IoT implementations grows, so do connectivity and processing needs, and it is not always possible to stream all data to the cloud for AI processing.
Exploring AI in an IoT solution
AI technology comprises several variants, such as machine learning, predictive analytics and neural networks. Data collected from the edge devices is labeled, and data engineers with specialized skills in building software around big data prepare pipelines that feed the data models. Data scientists with skills across mathematics, statistics and programming languages like C and C++ create AI models using machine learning algorithms fine-tuned for various known applications. These models can ultimately be expressed in different ways, such as neural networks, decision trees or inference rule sets.
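To make the last point concrete, here is a minimal sketch of a trained model expressed as an inference rule set, one of the forms mentioned above. The thresholds and sensor names are invented for illustration; a real rule set would be distilled from training data.

```python
# Hypothetical inference rule set for an environmental sensor node.
# The "model" is just a handful of rules, cheap enough to evaluate
# on a constrained edge device.

def classify_reading(temperature_c: float, pressure_kpa: float) -> str:
    """Classify one sensor reading using hand-written decision rules."""
    if temperature_c > 85.0:                          # invented over-temperature rule
        return "alarm"
    if pressure_kpa < 80.0 or pressure_kpa > 120.0:   # invented safe pressure band
        return "warning"
    return "normal"
```

Because the rules compile down to a few comparisons, this form of model fits comfortably on microcontroller-class hardware.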
Machine learning is either supervised or unsupervised. While unsupervised learning (based on inputs without output variables) can help developers learn more about the data, supervised learning is the basis for most applied machine learning. In the training phase of supervised machine learning, huge data streams are mined for meaningful patterns or inferences, using many computations to arrive at a prediction.
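The training phase can be sketched in miniature: given labeled sensor readings, search for the decision boundary that best separates the classes. The toy data, labels, and single-threshold "algorithm" below are invented for illustration; real training would use a proper framework and far more data.

```python
# Toy supervised training: fit a single temperature threshold that best
# separates "normal" (0) from "fault" (1) readings in a labeled data set.

def fit_threshold(samples, labels):
    """Return the candidate threshold with the fewest misclassifications."""
    best_t, best_err = None, float("inf")
    for t in samples:  # candidate thresholds: the observed values themselves
        err = sum((x > t) != bool(y) for x, y in zip(samples, labels))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

readings = [20.1, 22.4, 23.0, 71.5, 75.2, 80.3]   # invented training data
labels   = [0,    0,    0,    1,    1,    1]       # 1 = fault
threshold = fit_threshold(readings, labels)        # prediction: reading > threshold
```

The expensive part, the search over candidates against all the data, is exactly what grows with data volume, which is why training normally happens centrally while the cheap learned rule runs at the edge.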
At the AI application stage, data collected from the edge devices is fed to models selected from the available data models using standard library frameworks like TensorFlow. The modeling step requires considerable processing power, usually available in a central location such as a cloud site or a large data center.
In the deploy phase, things get interesting. For instance, edge devices can pull the software packages, along with the dependencies for the chosen models, from a shared repository without relying as heavily on the cloud. In areas like health monitoring, wearable devices that need unsupervised machine learning adapted to the user can benefit immensely from edge computing. And applications that must draw inferences on the spot, without prior learning, often need very high processing power, a need well suited to AI at the edge.
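The deploy step described above can be sketched as a cache-then-fetch loader: the device serves the model package from local storage and contacts the shared repository only when the package is missing. The path, package layout, and `fetch` stand-in below are all hypothetical.

```python
# Sketch of deploying a model package to an edge device: prefer the local
# cache, fall back to a repository download (stubbed out here) if absent.
import json
import os

def load_model_package(name,
                       cache_dir="/var/lib/edge-models",          # hypothetical path
                       fetch=lambda n: {"model": n, "rules": []}):  # stand-in for a repo download
    """Load a cached model package, fetching from the repository if needed."""
    path = os.path.join(cache_dir, name + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)       # cache hit: no network required
    return fetch(name)                # cache miss: one-time repository fetch
```

After the first fetch, the device can keep running the model through connectivity outages, which is the point of doing deployment this way.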
In most cases, technical or energy constraints make it impossible to stream all the data to the cloud where the AI resides. In use cases like audio or video recognition, patterns and inferences have to be recognized instantaneously, and the communication latency is prohibitive. In other instances, the deployment does not provide stable connectivity. Therefore, there needs to be a scalable hybrid architecture in which the required models are built in the cloud and the inference task is performed at the edge. This approach sends less data to the central location, making it bandwidth-efficient while improving latency and responsiveness.
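The hybrid pattern above reduces to a simple loop on the device: run inference locally on every sample, and upload only what the model flags. A minimal sketch, with an invented sensor stream and a threshold rule standing in for the cloud-trained model:

```python
# Edge-side filtering: local inference decides which readings are worth
# sending to the central location, instead of streaming every sample.

def edge_filter(readings, is_anomaly):
    """Return only the readings the local model flags for upload."""
    return [r for r in readings if is_anomaly(r)]

stream = [21.0, 21.5, 98.7, 22.0, 22.3, 101.2]     # invented sensor stream
flagged = edge_filter(stream, lambda t: t > 85.0)  # stand-in for the trained model
# Only the flagged readings leave the device.
```

Here only two of six samples would cross the network, which is where the bandwidth and latency savings come from.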
How to deploy edge AI
The basic components of a typical edge AI model include the hardware and software for capturing sensor data, software for training the model for application scenarios, and the application software that runs the AI model on the IoT device. A microservice running on the edge device initiates the AI package residing on the device upon request by the user. Within the edge device, the feature selections and transformations defined during the training phase are reused. The models are customized to the appropriate feature set, which can be extended to include aggregations and engineered features.
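Reusing the training-phase transformations at the edge, as described above, means the device must apply exactly the preprocessing the model saw during training. A minimal sketch, where the standardization constants are assumed to have been exported alongside the model (the names and values are invented):

```python
# Apply the training-phase feature transformations on the edge device.
# These constants would normally ship with the deployed model package;
# the values here are invented for illustration.
TRAINED_MEAN = {"temperature": 24.0, "pressure": 101.3}
TRAINED_STD  = {"temperature": 3.5,  "pressure": 4.2}

def make_features(raw: dict) -> list:
    """Standardize raw sensor values exactly as during training."""
    return [(raw[k] - TRAINED_MEAN[k]) / TRAINED_STD[k]
            for k in ("temperature", "pressure")]

features = make_features({"temperature": 27.5, "pressure": 101.3})
```

If the edge preprocessing drifts from what training used, the model receives inputs from a different distribution than it was fitted on, so keeping the two in lockstep is essential.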
Intelligent edge devices are deployed in battery-operated applications in areas with low bandwidth and intermittent network connectivity. Edge device manufacturers are building sensors with integrated processing and memory, supporting widely used low-speed communication protocols like BLE, LoRa and NB-IoT, in tiny footprints with low power consumption.
The benefits of AI at the edge
While the complexity of such designs may make the edge expensive, the benefits far outweigh the related costs.
Apart from being highly responsive in real time, edge-based AI has significant advantages such as greater security built into the edge devices and less data flowing up and down the network. It is highly flexible, as customized solutions are built for each application. And since the inferences are pre-built into the edge devices, they require fewer specialized skills to operate and maintain.
Edge computing also allows developers to distribute computing across the network by offloading some sophisticated activities to edge processors in the local network, such as routers, gateways and servers. These deployments provide very good operational reliability: data is stored and intelligence is derived locally, which supports areas with intermittent connectivity or no network connection at all.
Ordinarily, building a machine learning model to solve a challenge is complex. Developers have to manage vast amounts of data for model training, choose the best algorithm to implement, and manage the cloud services used to train the model. Application developers then deploy the model into a production environment using programming languages like Python. A smart edge device manufacturer will find it extremely difficult to invest the resources to execute an AI implementation at the edge from scratch.
However, devices like Avnet’s SmartEdge Agile come with various types of sensors attached and built-in AI software stacks. The associated software platforms and development studios, like Brainium and Microsoft’s Azure Sphere, support supervised and unsupervised machine learning with a library of ready-made AI algorithms to choose from, and can deploy models to the device without writing a single line of code. Users can also create widgets to view sensor values in real time and save the data for future use.
It’s true that artificial intelligence adds complexity to the already complex space of the Internet of Things, and doubly so when you add edge AI. However, with the right platforms and partners, developers can navigate this complexity and deliver innovation that leaves speech-to-text and fingerprint recognition in the dust.
Dig deeper into what AI at the edge has to offer developers—and see if the technology is right for your deployment.