Method ReduceL1
ReduceL1(TensorFloat, TensorFloat, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceL1 operation: f(x1, x2, ..., xn) = |x1| + |x2| + ... + |xn|.
Declaration
public override void ReduceL1(TensorFloat X, TensorFloat O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorFloat | X | The input tensor. |
| TensorFloat | O | The output tensor to be computed and filled. |
| ReadOnlySpan<int> | axes | The axes along which to reduce. |
| bool | keepdim | Whether to keep the reduced axes in the output tensor as dimensions of size 1. |
Overrides
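Example
The following is a minimal usage sketch, not a verified sample for a specific package version. It assumes the `Unity.Sentis` namespace, a `CPUBackend` implementation of this method, a `TensorFloat(TensorShape, float[])` constructor, and a `TensorFloat.AllocZeros` allocation helper; the exact tensor creation and backend APIs may differ in your version.
```csharp
using Unity.Sentis; // assumed namespace for TensorFloat, TensorShape, CPUBackend

// Reduce a 2x3 tensor along axis 1 with keepdim = true, giving a 2x1 output
// whose values are |x1| + |x2| + |x3| for each row.
var inputShape = new TensorShape(2, 3);
using var X = new TensorFloat(inputShape, new float[] { 1f, -2f, 3f, -4f, 5f, -6f });

// Reduced axis is kept as a size-1 dimension because keepdim is true.
using var O = TensorFloat.AllocZeros(new TensorShape(2, 1)); // assumed allocation helper

using var backend = new CPUBackend(); // assumed backend implementation
backend.ReduceL1(X, O, new[] { 1 }, keepdim: true);
// Expected values in O: [6, 15] (|1| + |-2| + |3| and |-4| + |5| + |-6|).
```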
ReduceL1(TensorInt, TensorInt, ReadOnlySpan<int>, bool)
Reduces an input tensor along the given axes using the ReduceL1 operation: f(x1, x2, ..., xn) = |x1| + |x2| + ... + |xn|.
Declaration
public override void ReduceL1(TensorInt X, TensorInt O, ReadOnlySpan<int> axes, bool keepdim)
Parameters
| Type | Name | Description |
|---|---|---|
| TensorInt | X | The input tensor. |
| TensorInt | O | The output tensor to be computed and filled. |
| ReadOnlySpan<int> | axes | The axes along which to reduce. |
| bool | keepdim | Whether to keep the reduced axes in the output tensor as dimensions of size 1. |
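Example
The integer overload behaves the same way. The sketch below shows the keepdim = false case, so the reduced axis is dropped from the output shape; it relies on the same assumed tensor construction, allocation, and backend APIs as the example above.
```csharp
using Unity.Sentis; // assumed namespace for TensorInt, TensorShape, CPUBackend

// Reduce a 2x3 integer tensor along axis 1 with keepdim = false,
// so the output shape is (2) rather than (2, 1).
using var X = new TensorInt(new TensorShape(2, 3), new[] { 1, -2, 3, -4, 5, -6 });
using var O = TensorInt.AllocZeros(new TensorShape(2)); // assumed allocation helper

using var backend = new CPUBackend(); // assumed backend implementation
backend.ReduceL1(X, O, new[] { 1 }, keepdim: false);
// Expected values in O: [6, 15].
```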