2020 And The Dawn Of AI Learning At The Edge

Forbes Technology Council
By Max Versace


With countless predictions about what’s in store for artificial intelligence in 2020, I’m eager to see what will come true and what will fall by the wayside. I think that one of the more paradigm-changing predictions will be moving AI’s learning ability to the edge.

Under AI’s generic name hides a variety of approaches, spanning from huge models that crunch data on distributed cloud infrastructure to tiny, edge-friendly AI that analyzes and mines data on small processors.

From my academic research at Boston University to cofounding Neurala, I have always been keenly aware of the difference between these two types of AI: call them "heavy" and "light" AI. Heavy AI requires hefty compute substrates to run, while light AI can do what heavy AI does, but on far less compute power.

The introduction of commodity processors such as GPUs, and later their portable variants, has made it technically and economically viable to bring AI algorithms, deep neural networks (DNNs) in particular, to the edge in a multitude of industries.

Bandwidth, latency, cost and plain logic all dictate the era of edge AI and will help make our next technology jump a reality. But before we can get there, it is important to understand the technology’s nuances, because making AI algorithms run on small edge compute has a few. In fact, there are at least two processes at play: inference, the "predictions" generated by an edge device (e.g., I see a normal frame vs. one with a possible defect), and edge learning, namely using the acquired information to change, improve, correct and refine the edge AI itself. This is a small, often overlooked distinction with huge implications.
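To make the distinction concrete, here is a minimal sketch in plain Python. Everything in it (the EdgeModel class, the single-threshold "model") is an invented toy, not any particular product’s API: predict() is inference, which leaves the model untouched, while update() is edge learning, which folds new information back into the model on the device itself.

```python
class EdgeModel:
    """Toy stand-in for an AI model running on a small edge processor."""

    def __init__(self, threshold: float = 0.5):
        # A single decision threshold stands in for the model's parameters.
        self.threshold = threshold

    def predict(self, score: float) -> str:
        # Inference: generate a prediction from the current model.
        # The model itself does not change.
        return "possible defect" if score > self.threshold else "normal"

    def update(self, score: float, was_defect: bool) -> None:
        # Edge learning: use the acquired information (a corrected label)
        # to refine the model on the device, with no cloud round trip.
        missed = was_defect and score <= self.threshold
        false_alarm = (not was_defect) and score > self.threshold
        if missed or false_alarm:
            # Nudge the threshold toward the misclassified score.
            self.threshold = 0.9 * self.threshold + 0.1 * score
```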

Living At The Edge

I first realized this difference between inference/predictions and edge learning while working with NASA back in 2010. My colleagues and I implemented a small brain emulation to control a Mars Rover-like device whose AI needed to be capable of running and learning at the edge.

For NASA, it was important that a robot be capable of learning "new things" completely independently of any compute power available on Earth. Data bottlenecks, latency and a plethora of other issues meant they needed to explore different breeds of AI than what had been developed at that time. They needed algorithms with the ability to digest new information and learn, namely to adapt the AI’s behavior to the available data, without requiring huge amounts of compute power, data and time.

Unfortunately, traditional deep neural network (DNN) models were just not up to par, so we went on to build our own AI that would meet these requirements. Dubbed "lifelong deep neural network" (Lifelong-DNN), this new approach to DNNs had the ability to learn throughout its lifetime (versus traditional DNNs that can only learn once, before deployment).
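To give a flavor of what learning throughout a lifetime can look like, here is a deliberately simplified sketch of the general idea, not our actual Lifelong-DNN implementation: pair a fixed, pre-trained feature extractor with lightweight per-class prototypes that are updated one example at a time, so the model keeps learning after deployment without ever retraining from scratch.

```python
import numpy as np

class PrototypeLearner:
    """Simplified continual learner: a running-mean prototype per class."""

    def __init__(self):
        self.prototypes = {}  # class label -> running mean feature vector
        self.counts = {}      # class label -> number of examples seen

    def learn(self, features: np.ndarray, label: str) -> None:
        # Incremental, one-shot update: no stored dataset, no retraining
        # from scratch, just a running mean per class.
        n = self.counts.get(label, 0)
        proto = self.prototypes.get(label, np.zeros_like(features))
        self.prototypes[label] = (proto * n + features) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, features: np.ndarray) -> str:
        # Classify by nearest prototype (Euclidean distance).
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(features - self.prototypes[c]))
```

Because each update is a cheap running average rather than a full backpropagation pass, this style of learner fits comfortably on small edge processors.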

Learn At The Edge Or Die

One of the biggest challenges in implementing AI today is its inflexibility and lack of adaptability. AI algorithms can be trained on huge amounts of data, when available, and can be fairly robust if all the data needed for training is captured beforehand. But unfortunately, that is not how the world works.

We humans are so adaptable because our brains have figured out that lifelong learning (learning every day) is key, and we can’t rely solely on the data we are born with. That’s why we do not stop learning after our first birthday: We continuously adapt to the changing environments and scenarios we encounter throughout our lives and learn from them. As humans, we do not discard data; we use it constantly to fine-tune our own AI.

Humans are a prime example of edge-learning-enabled machines. In fact, if human brains acted the same way as a DNN, our knowledge would be restricted to our college years. We would go about our 9-to-5s and daily routines only to wake up the next morning without having learned anything new.

The Learning-Enabled, AI-Powered Edge

Traditional DNNs are the dominant paradigm in today’s AI, with fixed models that need to be trained before deployment. But novel approaches such as Lifelong-DNN would enable AI-powered compute edges not only to understand the data coming to them but also to adapt and learn from it.

So, if you too would like to harness the power of the edge, here is my advice. First, abandon the mindset (and restriction) that AI can only be trained before deployment. From there, a new need arises: a way for users to interact with the edge and add knowledge. This implies the ability to visualize newly collected data and to let the user select which samples to add, a selection that can be made either manually or automatically.

For instance, in a manufacturing scenario, a quality control specialist may reject a product coming out of a machine and, by doing so, give the AI a new clue that the product (or the part of it just built) should be considered faulty. Updating your AI training protocols to integrate continual training workflows, where the AI is updated based on such new clues, is a must for organizations and individuals looking to leverage this new breed of AI.
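As a hypothetical sketch of such a workflow, reusing the PrototypeLearner toy above: every accept/reject decision becomes a labeled example that immediately refines the edge model. Here camera_frames, extract_features and operator_review are invented placeholders, not real APIs.

```python
learner = PrototypeLearner()

for frame in camera_frames():                   # inspection camera stream (placeholder)
    feats = extract_features(frame)             # fixed, pre-trained extractor (placeholder)
    verdict = learner.predict(feats) if learner.prototypes else "unknown"
    decision = operator_review(frame, verdict)  # specialist accepts or rejects (placeholder)
    # The human decision is the "new clue": fold it back into the model
    # right away, rather than waiting for an offline retraining cycle.
    learner.learn(feats, "faulty" if decision == "reject" else "good")
```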

AI that learns at the edge is a paradigm-shifting technology that will finally empower AI to truly serve its purpose: shifting intelligence to the compute edge where it is needed, at speeds, latencies and costs that make it affordable for every device.

Going forward, learning-enabled edges will survive natural selection in an increasingly competitive AI ecosystem. May the fittest AI survive! 

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.