Over the next few years we will see rapid growth in AI acceleration chips for the Intelligent Edge, an innovation that is fuelling the widespread use of fast Machine Learning models on low-latency customer devices. These VPU/NPU (vision processing unit/neural processing unit) acceleration chips will soon be available in all devices, just as GPUs (graphics processing units) are today. The use of these chips, together with intelligent cloud solutions, will enable interesting use cases capable of creating value in several areas.
At Valorem Reply, we are already exploiting the potential of both worlds across the phases of creating a Machine Learning model. Models are trained in the cloud to harness the power of cloud computing, together with the ability to scale resources during the most compute-intensive training phase. The trained Deep Learning model is then evaluated on edge devices, where lower latency makes it possible to deliver results in real time.
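A minimal sketch of this cloud-training / edge-inference split is shown below. It assumes PyTorch for training and ONNX Runtime as the on-device inference engine; the network architecture, the synthetic data, and the file name "model.onnx" are illustrative placeholders, not part of any specific project described above.

```python
# Sketch of the workflow: train in the cloud, export, then run inference on the edge.
import torch
import torch.nn as nn

# --- Cloud side: train a small classifier and export it to ONNX -------------
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder training loop on synthetic data; in practice this step runs on
# scalable cloud compute against the real dataset.
for _ in range(100):
    x = torch.randn(64, 32)
    y = torch.randint(0, 10, (64,))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# Export the trained network so the edge runtime can load it.
torch.onnx.export(model, torch.randn(1, 32), "model.onnx",
                  input_names=["input"], output_names=["logits"])

# --- Edge side: load the exported model and run low-latency inference -------
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")        # runs on the device itself
sample = np.random.randn(1, 32).astype(np.float32)  # e.g. local sensor features
logits = session.run(["logits"], {"input": sample})[0]
print("predicted class:", int(logits.argmax()))
```

On devices with VPU/NPU accelerators, the same exported model can typically be handed to a hardware-specific execution provider instead of the default CPU one, which is what makes real-time inference on the edge practical.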