Time to think outside the box on Edge AI

Mark Rankilor, Head of Product at Kynisys

You don’t have to be a philosopher to know that life’s opportunities rarely come in a gift-wrapped box. It’s an age-old lesson, but one that the technology industry wants us to forget.

These days, everything seems to be fully automated and intuitive, plug-and-play and out of the box: a containerised, end-to-end solution that requires minimal training to use.

And more often than not, these labour-saving systems do exactly what they promise. With some technologies, however, this oven-ready approach simply doesn’t work. Artificial intelligence provides a prime example, particularly when deployed at the edge. Since Edge AI use cases, requirements, data, devices and underlying infrastructure are unique to each business, you can’t simply buy a black box, plug it in and expect it to deliver its full potential.

That doesn’t mean that Edge AI requires businesses to pour millions into R&D, revamping network architecture and hiring new technical talent. All it takes is the power of community — and a little bit of old-fashioned thinking outside the box.

Putting AI where it’s needed

To understand why Edge AI requires us to ditch the black box mentality, first we must look at how edge deployments differ so drastically from centralised AI systems.

Many AI applications take a stupendous amount of processing power to deliver the required insight and capabilities. With centralised AI, an organisation will spend weeks collating data with which to create the neural network architecture at the heart of the solution, then testing and training this “brain” in an ongoing cycle of iterative improvement. But most, if not all, of these neural networks are designed to run in high-specification cloud environments where they can harness the gigantic processing power of multiple industrial-grade CPUs or GPUs.

There are a multitude of AI use cases where this centralised model delivers incredible results, but there are just as many, from driverless cars to smart security systems, where the intelligence has to live right at the edge of the network rather than at the core. Separating the “brain” from the devices that gather the data and act on the resulting intelligence poses huge challenges: the bandwidth required to ferry, say, 4K video from a smart security camera back to the data centre; data protection and GDPR issues; and the ultra-low latency that application performance demands.
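To put rough numbers on the bandwidth point, here is a back-of-the-envelope calculation. The resolution, frame rate, compression ratio and fleet size below are illustrative assumptions, not figures from any particular deployment:

```python
# Back-of-the-envelope: bandwidth needed to stream 4K security video
# from edge cameras back to a central data centre.
# All figures are illustrative assumptions, not measurements.
width, height = 3840, 2160   # 4K UHD resolution
fps = 30                     # frames per second
bits_per_pixel = 0.1         # rough figure for modern compressed video
cameras = 50                 # a modest smart-security deployment

bits_per_second = width * height * fps * bits_per_pixel
mbps_per_camera = bits_per_second / 1e6
total_mbps = mbps_per_camera * cameras

print(f"~{mbps_per_camera:.0f} Mbps per camera")            # ~25 Mbps
print(f"~{total_mbps:.0f} Mbps for {cameras} cameras")      # ~1244 Mbps
```

Even with these conservative assumptions, a mid-sized camera fleet saturates a gigabit uplink, which is one reason the inference often has to happen on the device itself.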

Building bespoke intelligence

These are just some of the reasons why so many AI applications can only realistically be deployed at the edge. The problem for organisations is that standard AI frameworks are designed to run in well-resourced, centralised cloud networks. Shoehorning that capability into an edge device such as a tablet, camera or sensor is clearly a massive challenge, and not just because applications must be designed to run effectively on much more limited processors.

Ideally, an enterprise will build a bespoke architecture around the most capable components within each device in the network. With the right software running on the most suitable component, a GPU for example, businesses can achieve speeds and power efficiencies an order of magnitude higher than running the same workload on a device’s CPU. The complexity of orchestrating so many different hardware components, however, together with the lack of standardisation across devices, means that neural network architects and machine learning developers often default to the lowest common denominator, the CPU, severely limiting the capability of the whole system. And that is on top of the backbreaking work of data capture, testing, creating dashboards and training the whole system.
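The idea of preferring the most capable available component, rather than defaulting to the CPU, can be sketched in a few lines. The `Accelerator` class, device names and speed-up factors here are hypothetical placeholders, not a real framework API:

```python
# Hypothetical sketch: pick the most capable compute unit present on an
# edge device instead of defaulting to the CPU. The class, names and
# speed-up figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    available: bool          # is this unit actually present and usable?
    relative_speedup: float  # rough throughput vs. the CPU baseline

def pick_backend(accelerators):
    """Return the fastest available accelerator, falling back to the CPU."""
    cpu = Accelerator("cpu", available=True, relative_speedup=1.0)
    candidates = [a for a in accelerators if a.available] + [cpu]
    return max(candidates, key=lambda a: a.relative_speedup)

device = pick_backend([
    Accelerator("gpu", available=True, relative_speedup=10.0),   # order-of-magnitude gain
    Accelerator("npu", available=False, relative_speedup=25.0),  # not on this device
])
print(device.name)  # -> gpu
```

The hard part in practice is not this selection logic but knowing, per device, which accelerators exist and whether the model’s operations are supported on them, which is exactly where the lack of standardisation bites.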

How can organisations, limited by budget and struggling to acquire the required skills, surmount these difficulties to build bespoke AI architectures that will deliver maximum value? The answer is as simple as a single word: community.

Standing on the shoulders of giants

What’s frustrating about this DIY approach to edge AI is that many organisations may be simultaneously solving many of the same problems. Teams around the world could be tackling similar use cases or puzzling out the same integration challenges, and this duplication of effort results in a colossal waste of resources.

There is a better way. If we can’t deploy edge AI applications with a standard black box, why don’t we make it easy for anyone to build their own?

With so many experts working hard to overcome the individual challenges described above, the most elegant solution is a community where people can choose components, architectures, models and even data sets that have been successfully tested “in the wild”. Enterprises can then assemble bespoke edge AI applications, piecing the various components and systems together like Lego, and run simulations before going live to ensure the whole system performs to its utmost capability.

In tech, we talk a great deal about synergies. But the exigencies of edge AI demand a more collaborative, cross-industry approach. By creating a community and pooling people’s hard work into a fully integrated set of tools and devices, businesses will no longer have to reinvent the wheel. Standing on the shoulders of giants, they can save countless hours, weeks and months of trial and error, and simply begin deploying edge AI applications that work.

Just another example of how good things happen when you start to think outside the box.

Discover more here: https://www.kynisys.ai/