Triton Inference Server Client
/Engine/Plugins/NNE/NNERuntimeORT/Source/ThirdParty/Onnxruntime/
The software is part of the ONNX Runtime library, which is used to run neural network inference through the ONNX Runtime backend and to optimize ML models.
https://github.com/triton-inference-server/server/blob/main/LICENSE
Licensees
P4
Git
/Engine/Source/ThirdParty/Licenses