What's new in Sentis 2.6
This is a summary of the changes from Sentis 2.5 to Sentis 2.6.
Added
- Official support for ONNX opset versions up to version 25.
- Functional methods for the `Swish` and `RMSNorm` operators, including support for the `alpha` argument on the `Swish` operator.
- Support for `Buffer` in PyTorch model import.
- Generic truncation in the Tokenizer, with support for the `longest_first`, `only_first`, and `only_second` strategies.
- Compatibility with Fast Enter Play Mode for CoreCLR.
- Improved analytics and error reporting when importing models with unsupported operators.
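The math behind the two new functional operators is standard. As a point of reference, here is a minimal pure-Python sketch of Swish with an `alpha` argument and of RMSNorm; this is illustrative only, not the Sentis C# API:

```python
import math

def swish(x, alpha=1.0):
    """Swish: x * sigmoid(alpha * x). With alpha = 1 this is SiLU."""
    return x * (1.0 / (1.0 + math.exp(-alpha * x)))

def rms_norm(values, weight=None, eps=1e-6):
    """RMSNorm: divide each element by the root-mean-square of the vector."""
    rms = math.sqrt(sum(v * v for v in values) / len(values) + eps)
    normed = [v / rms for v in values]
    if weight is not None:
        # Optional learned per-element scale.
        normed = [n * w for n, w in zip(normed, weight)]
    return normed
```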
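The three truncation strategies follow the Hugging Face naming convention. A hypothetical pure-Python sketch of how a pair of token sequences could be truncated to a maximum combined length (the function name and logic are illustrative, not the Sentis Tokenizer API):

```python
def truncate_pair(a, b, max_len, strategy="longest_first"):
    """Truncate token id lists a and b so that len(a) + len(b) <= max_len."""
    if strategy == "only_first":
        # Only the first sequence may be shortened.
        a = a[:max(max_len - len(b), 0)]
    elif strategy == "only_second":
        # Only the second sequence may be shortened.
        b = b[:max(max_len - len(a), 0)]
    elif strategy == "longest_first":
        # Remove one token at a time from whichever sequence is longer.
        while len(a) + len(b) > max_len:
            if len(a) >= len(b):
                a = a[:-1]
            else:
                b = b[:-1]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return a, b
```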
Updated
- The random number generator now uses Unity's `Mathematics.Random` instead of `System.Random`.
- Improved documentation for the `Tensor` and `Functional` APIs.
- Updated documentation for Cubic interpolation mode support limitations.
Fixed
- Corrected behavior of the `ReduceL1`, `ReduceL2`, `ReduceSumSquare`, and `ReduceLogSum` operators when `noop_with_empty_axes` is `true` and `axes` is empty.
- Resolved an issue with the `Interpolate` operator when using the `scaleFactor` argument.
- Fixed a case where GPU allocations (`ComputeTensorData`) were used for tensors without a corresponding backend.
- Prevented crashes when closing the editor while in Play mode.
- Fixed a memory leak in PyTorch model import.
- Resolved a GPU crash for convolution with padding on the GPU compute backend.
- Fixed an issue with the `Split` operator when importing `.sentis` files.
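The corrected reduce-operator behavior matches the ONNX specification: with `noop_with_empty_axes` set to `true` and no `axes` given, the operator is an identity, whereas by default empty `axes` mean "reduce over all axes". A pure-Python sketch of that rule for a flat (1-D) tensor, illustrative only and not Sentis code:

```python
def reduce_l1(values, axes=None, noop_with_empty_axes=False):
    """ReduceL1 over a flat list: sum of absolute values.

    Per the ONNX spec, empty axes mean "reduce all axes" by default,
    but mean "do nothing" when noop_with_empty_axes is true.
    """
    if not axes:
        if noop_with_empty_axes:
            return list(values)                   # identity: input unchanged
        return [sum(abs(v) for v in values)]      # reduce over all axes
    # Axis-wise reduction elided for this 1-D example.
    raise NotImplementedError
```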
What's new in Sentis 2.5
This is a summary of the changes from Sentis 2.4 to Sentis 2.5.
Added
- PyTorch model import to directly import PyTorch files (`.pt2`) to Sentis without using ONNX.
- The `LRN` (`LocalResponseNormalization`) operator is now implemented on all backends.
- 3D `MaxPool` and `AveragePool` operators are now implemented on all backends.
- The Sentis Importer now allows users to specify dynamic dimensions as static on Sentis model import, as it already does for ONNX.
- Tokenizer now parses Hugging Face models.
- Wider coverage of all the components of the Tokenizer.
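For reference, `LRN` has a well-known definition: each element is divided by a power of the local sum of squares in a window around it. A pure-Python sketch over a 1-D channel axis, using one common centering convention for the window; this is illustrative only, not the Sentis implementation:

```python
def lrn(values, size, alpha=1e-4, beta=0.75, bias=1.0):
    """Local Response Normalization across a 1-D channel axis (ONNX-style).

    Each element is divided by (bias + alpha/size * local sum of squares)^beta,
    where the local window of width `size` is centered on the element.
    """
    n = len(values)
    half = size // 2
    out = []
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        square_sum = sum(values[j] ** 2 for j in range(lo, hi))
        out.append(values[i] / (bias + alpha / size * square_sum) ** beta)
    return out
```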
Updated
- Model Visualizer now supports background loading of models.
- The `Resize` operator on CPU no longer uses the main (Mono) thread path.
- All model converters use switch-case instead of if-else cascades, for improved performance.
- Mono APIs are migrated to CoreCLR-compatible APIs.
Fixed
- Fixed an editor crash when quitting in Play Mode.
- Fixed a memory leak in `FuseConstantPass`.
- The `Clip` operator no longer needs CPU fallback for its min/max parameters.
- Fixed the `Mod` operator on some platforms with float operands.
- Corrected a faulty optimization pass.
- Fixed 2D pooling vectorization calculations in existing Burst code.
- Fixed a `TopK` issue on `GPUCompute` when a dimension is specified.
- Many fixes to the Tokenizer.
What's new in Sentis 2.4
Sentis is the new name for this package.
This is a summary of the changes from Inference Engine 2.3 to Sentis 2.4.
Added
- Tokenizer API for tokenization and detokenization of strings with language models.
- LiteRT model import to directly import `.tflite` files to Sentis without using ONNX.
- Spectral operators to enable audio models.
- Many new operators corresponding to LiteRT and PyTorch operators, with functional API methods and optimization passes.
Updated
- Import of ONNX models has been greatly sped up and optimized to match the ONNX specification.
Fixed
- Many small import, inference and documentation issues.
What's new in Inference Engine 2.3
This is a summary of the changes from Inference Engine 2.2 to Inference Engine 2.3.
Added
- Model Visualizer for inspecting models as node-based graphs inside the Unity Editor.
- `GatherND` and `Pow` operators now support `Tensor<int>` inputs more widely.
- `ConvTranspose` and `Constant` operators now support more input arguments.
What's new in Inference Engine 2.2
Inference Engine is the new name for the Sentis package.
This is a summary of the changes from Sentis 2.1 to Inference Engine 2.2.
For information on how to upgrade, refer to the Upgrade Guide.
Added
- Support for dynamic input shape dimensions at import time, for better model optimization.
- Custom input and output names for models created with the functional API.
- The model stores the shapes and data types of intermediate and output tensors and displays them in the Model Asset Inspector.
- New `Mish` operator.
- Improved shape inference for model optimization.
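Mish has a standard definition, `x * tanh(softplus(x))`. A minimal, numerically naive pure-Python sketch of the math (illustrative only, not the Sentis API):

```python
import math

def mish(x):
    """Mish activation: x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x)."""
    return x * math.tanh(math.log1p(math.exp(x)))
```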
Updated
- `ScatterElements` and `ScatterND` operators now support `min` and `max` reduction modes.
- `DepthToSpace` and `SpaceToDepth` now support integer tensors.
- `TopK` supports integer tensors.
- `Functional.OneHot` now allows negative indices.
- `RoiAlign` now supports the `coordinate_transformation_mode` parameter.
- Reduction operators return correct results when reducing a tensor along an axis of length 0.
- The `Reshape` operator can now infer unknown dimensions even when reshaping a length-0 tensor, as in PyTorch.
- Improved documentation for the Model Asset Inspector.
Removed
- Obsolete Unity Editor menu items.
- Slow CPU support for 4-dimensional and higher `Convolution` layers.
Fixed
- Out-of-bounds errors for certain operators on the `GPUCompute` backend.
- The `TextureConverter` methods now correctly perform sRGB to RGB conversions.
- Incorrect graph optimizations for certain models.
- Issues with negative padding values in pooling and convolutions.
- Accurate handling of large and small integer values in the `GPUPixel` backend.
- Proper destruction of allocated render textures in the `GPUPixel` backend.
- `LeakyRelu` now supports `alpha` greater than 1 on all platforms.
- Fixed async behaviour for CPU tensor data.
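The sRGB to linear RGB conversion mentioned above follows the standard sRGB transfer function (IEC 61966-2-1). A pure-Python sketch of that per-channel function, illustrative only and not the Sentis implementation:

```python
def srgb_to_linear(c):
    """Convert one sRGB channel value in [0, 1] to linear RGB (IEC 61966-2-1)."""
    if c <= 0.04045:
        # Linear segment near black.
        return c / 12.92
    # Gamma segment for the rest of the range.
    return ((c + 0.055) / 1.055) ** 2.4
```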