I have also played with specialized machine learning (ML) acceleration hardware for computer vision on the client side with projects like Google's AIY Vision Kit and Intel's Movidius. Edge ML is powerful, but it has performance limits compared to scalable server-side approaches where you can add more and bigger processors. In a previous project using WebRTC and the TensorFlow Object Detection API, I looked at server-side computer vision (CV) using generic CPU resources. It would have been much more efficient to run this with GPU acceleration, or even TPU acceleration, but that requires actually having that specialized hardware. Of course you can access that in the cloud, but with additional cost and platform dependencies. Wouldn't it be nice to get hardware acceleration without specialized hardware?
Intel offers native hardware acceleration on its CPU chips through various projects. I have been wanting to play around with the Intel Open WebRTC Toolkit (OWT) server since they demoed some of its CV capabilities at Kranky Geek, but these kinds of projects typically require a serious time commitment. Fortunately, Intel decided to sponsor some of my time to install and evaluate the project for a webrtcHacks post.