Key takeaways:
- ONNXRuntime is the best inference package for Transformer networks;
- Nvidia Triton, together with ONNXRuntime is the best solution for GPU inference;
- Optimization matters. It’s quite easy to unlock a >10X performance gain in 2022.
About this session
Transformer networks have taken the NLP world by storm, powering everything from sentiment analysis to chatbots. However, the sheer size of these networks presents new deployment challenges, such as achieving acceptable latency and unit economics.
The de-identification services Private AI offers rely heavily on Transformer networks and involve processing large amounts of data. In this talk, I will go over the challenges we faced and how we managed to improve the latency and throughput of our Transformer networks, allowing our system to process terabytes of data easily and cost-effectively.
Watch the full session:
Speaker bio: Pieter Luitjens is the Co-founder & CTO of Private AI. He worked on software for Mercedes-Benz and developed the first deep learning algorithms for traffic sign recognition deployed in cars made by one of the most prestigious car manufacturers in the world. He has over 10 years of engineering experience, with code deployed in multi-billion dollar industrial projects. Pieter specializes in ML edge deployment & model optimization for resource-constrained environments.
Contact us to request Pieter as a guest speaker.
Sign up for our Community API
The “get to know us” plan. Our full product, but limited to 75 API calls per day and hosted by us.
Get Started Today