As a student pursuing a doctorate in systems design engineering at the University of Waterloo, Alexander Wong didn’t have enough money for the hardware he needed to run his experiments in computer vision. So he invented a technique to make neural network models smaller and faster.

“He was giving a presentation, and somebody said, ‘Hey, your doctorate work is cool, but you know the real secret sauce is the stuff that you created to do your doctorate work, right?’” recalls Sheldon Fernandez.

Employees of DarwinAI, an artificial intelligence software startup based in Waterloo, Ontario, gather with company CEO Sheldon Fernandez (seated, center, in the jacket). Credit: DarwinAI

Fernandez is the CEO of DarwinAI, the Waterloo, Ontario-based startup now commercializing that secret sauce. Wong is the company’s chief scientist. And Intel is helping the company multiply the performance of its remarkable software, from the data center to edge applications.

“We use other forms of artificial intelligence to probe and understand a neural network in a fundamental way,” says Fernandez, describing DarwinAI’s playbook. “We build up a very sophisticated understanding of it, and then we use AI a second time to generate a new family of neural networks that’s as good as the original, a lot smaller and can be explained.”

That last part is critical: A big challenge with AI, says Fernandez, is that “it’s a black box to its designers.” Without knowing how an AI application functions and makes decisions, developers struggle to improve performance or diagnose problems.

An automotive customer of DarwinAI, for instance, was troubleshooting an automated vehicle with a strange tendency to turn left when the sky was a particular shade of purple. DarwinAI’s solution — which it calls Generative Synthesis — helped the team recognize how the vehicle’s behavior was affected by training for certain turning scenarios that had been conducted in the Nevada desert, coincidentally when the sky was that purple hue (read DarwinAI’s recent deep dive on explainability).

Another way to think about Generative Synthesis, Fernandez explains, is to imagine an AI application that looked at a house designed by a human being, noted the architectural contours, and then designed a completely new one that was stronger and more reliable. “Because it’s AI, it sees efficiencies that would just never occur to a human mind,” Fernandez says. “That’s what we are doing with neural networks.” (A neural network is an approach that breaks a sophisticated task down into a large number of simple computations.)
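To make that parenthetical concrete, here is a minimal sketch of a neural network in plain Python. The weights and inputs are made-up illustrative values, not from any real or trained model; the point is only that each "neuron" performs a simple multiply-add followed by a simple nonlinearity, and the network's power comes from composing many of these trivial computations:

```python
def relu(values):
    # A simple nonlinearity: negative values become zero.
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    # Each output neuron computes a weighted sum of the inputs plus a bias --
    # nothing more than repeated multiply-and-add.
    return [sum(x * w for x, w in zip(inputs, neuron_weights)) + b
            for neuron_weights, b in zip(weights, biases)]

# Hypothetical hand-picked weights for a tiny 2-layer network.
w1 = [[0.5, 0.3], [-0.2, 0.8]]   # two hidden neurons, two inputs each
b1 = [0.1, 0.0]
w2 = [[1.0, -0.5]]               # one output neuron
b2 = [0.2]

x = [1.0, 2.0]
hidden = relu(layer(x, w1, b1))  # first stack of simple computations
output = layer(hidden, w2, b2)   # second stack produces the final answer
```

Techniques like DarwinAI's operate at a different scale, of course — real networks have millions of such weights — which is exactly why automated tools can find structural efficiencies that a human inspecting the weights never would.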

Intel is in the business of making AI not only accessible to everyone, but also faster and easier to use. Through the Intel AI Builders program, Intel has worked with DarwinAI to pair Generative Synthesis with the Intel® Distribution of OpenVINO™ toolkit and other Intel AI software components to achieve order-of-magnitude gains in performance.

In a recent case study, neural networks built using the Generative Synthesis platform coupled with Intel® Optimizations for TensorFlow delivered up to 16.3x and 9.6x performance increases on two popular image recognition workloads (ResNet50 and NASNet, respectively) over baseline measurements on an Intel® Xeon® Platinum 8153 processor.

“Intel and DarwinAI frequently work together to optimize and accelerate artificial intelligence performance on a variety of Intel hardware,” says Wei Li, vice president and general manager of Machine Learning Performance at Intel.

The two companies’ tools are “very complementary,” Fernandez says. “You use our tool and get a really optimized neural network and then you use OpenVINO and the Intel tool sets to actually get it onto a device.”

This combination can deliver AI solutions that are simultaneously compact, accurate and tuned for the device where they are deployed, which is becoming critical with the rise of edge computing.

“AI at the edge is something we’re increasingly seeing,” says Fernandez. “We see the edge being one of the themes that is going to dominate the discussion in the next two, three years.”
