Nvidia announces PyTriton

Lakados

[H]F Junkie
Joined
Feb 3, 2014
Messages
8,947
https://developer.nvidia.com/nvidia-triton-inference-server

This native support for NVIDIA Triton™ in Python enables rapid prototyping and testing of machine learning models with high performance and efficient hardware utilization. A single line of code brings up Triton Inference Server, providing benefits such as dynamic batching, concurrent model execution, and support for GPUs and CPUs from within the Python code. This eliminates the need to set up model repositories and convert model formats; existing inference pipeline code can be used without modification.
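To make the "single line of code" claim concrete, here is a minimal sketch of what serving a model with PyTriton looks like. The model name, tensor names, and the toy add-one "model" are illustrative placeholders, not from the announcement; the structure follows PyTriton's published API (`Triton`, `bind`, `serve`).

```python
# Sketch of serving a toy model with PyTriton. Assumes
# `pip install nvidia-pytriton`; names/shapes below are illustrative.
import numpy as np

def add_one(inputs):
    # Toy "model": adds 1.0 to every element of the input batch.
    return [inputs[0] + 1.0]

try:
    from pytriton.decorators import batch
    from pytriton.model_config import ModelConfig, Tensor
    from pytriton.triton import Triton

    @batch
    def infer_fn(INPUT_1):
        (result,) = add_one([INPUT_1])
        return {"OUTPUT_1": result}

    if __name__ == "__main__":
        # A single context manager brings up Triton Inference Server
        # in-process; dynamic batching is handled by the server.
        with Triton() as triton:
            triton.bind(
                model_name="AddOne",
                infer_func=infer_fn,
                inputs=[Tensor(name="INPUT_1", dtype=np.float32, shape=(-1,))],
                outputs=[Tensor(name="OUTPUT_1", dtype=np.float32, shape=(-1,))],
                config=ModelConfig(max_batch_size=128),
            )
            triton.serve()  # blocks, serving HTTP/gRPC like standalone Triton
except ImportError:
    pass  # pytriton not installed; the pure-Python model above still works
```

The inference function is plain Python/NumPy, which is the point of the announcement: no model repository layout or format conversion is needed around it.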
 