Interface IJobEntityBatch
IJobEntityBatch is a type of IJob that iterates over a set of ArchetypeChunk instances, where each instance represents a contiguous batch of entities within a chunk.
Namespace: Unity.Entities
Syntax
[JobProducerType(typeof(JobEntityBatchExtensions.JobEntityBatchProducer<>))]
public interface IJobEntityBatch
Remarks
Schedule or run an IJobEntityBatch job inside the OnUpdate() function of a SystemBase implementation. When the system schedules or runs an IJobEntityBatch job, it uses the specified EntityQuery to select a set of chunks. These selected chunks are divided into batches of entities. A batch is a contiguous set of entities, always stored in the same chunk. The job struct's Execute function is called once for each batch.
When you schedule or run the job with one of the following methods, all the entities of each chunk are processed as a single batch:
- ScheduleSingle<T>(T, EntityQuery, JobHandle)
- ScheduleParallel<T>(T, EntityQuery, JobHandle)
- Run<T>(T, EntityQuery)
The ArchetypeChunk object passed to the Execute function of your job struct provides access to the components of all the entities in the chunk.
Use ScheduleParallelBatched<T>(T, EntityQuery, Int32, JobHandle) to divide each chunk selected by your query into (approximately) equal batches of contiguous entities. For example, if you use a batch count of two, one batch provides access to the first half of the component arrays in a chunk and the other provides access to the second half. When you use batching, the ArchetypeChunk object only provides access to the components in the current batch of entities -- not those of all entities in a chunk.
In general, processing whole chunks at a time (setting batch count to one) is the most efficient. However, in cases where the algorithm itself is relatively expensive for each entity, executing smaller batches in parallel can provide better overall performance, especially when the entities are contained in a small number of chunks. As always, you should profile your job to find the best arrangement for your specific application.
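As a sketch of the scheduling choice described above (the job struct name MyBatchJob and the EntityQuery field query are hypothetical placeholders, not part of the API):

```csharp
// Inside a SystemBase.OnUpdate() implementation. Substitute your own
// IJobEntityBatch struct and EntityQuery for MyBatchJob and query.
var job = new MyBatchJob();

// One batch per chunk: Execute() sees every entity in the chunk.
Dependency = job.ScheduleParallel(query, Dependency);

// Alternatively, four batches per chunk: Execute() sees roughly a quarter
// of each chunk, and the batches of a chunk can run on different workers.
// Dependency = job.ScheduleParallelBatched(query, 4, Dependency);
```

Whether the batched form wins depends on how expensive the per-entity work is, so profile both arrangements.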
To pass data to your Execute function (beyond the Execute parameters), add public fields to the IJobEntityBatch struct declaration and set those fields immediately before scheduling the job. You must always pass the component type information for any components that the job reads or writes using a field of type ArchetypeChunkComponentType<T>. Get this type information by calling the appropriate GetArchetypeChunkComponentType<T>(Boolean) function for the type of component.
For more information see Using IJobEntityBatch.
[GenerateAuthoringComponent]
public struct ExpensiveTarget : IComponentData
{
    public Entity entity;
}

public class BatchedChaserSystem : SystemBase
{
    private EntityQuery query; // Initialized in OnCreate()

    [BurstCompile]
    private struct BatchedChaserSystemJob : IJobEntityBatch
    {
        // Read-write data in the current chunk
        public ArchetypeChunkComponentType<Translation> PositionTypeAccessor;

        // Read-only data in the current chunk
        [ReadOnly]
        public ArchetypeChunkComponentType<Target> TargetTypeAccessor;

        // Read-only data stored (potentially) in other chunks
        [ReadOnly]
        //[NativeDisableParallelForRestriction]
        public ComponentDataFromEntity<LocalToWorld> EntityPositions;

        // Non-entity data
        public float deltaTime;

        public void Execute(ArchetypeChunk batchInChunk, int batchIndex)
        {
            // Within Execute(), the scope of the ArchetypeChunk is limited to the current batch.
            // For example, these NativeArrays will have Length = batchInChunk.BatchEntityCount,
            // where batchInChunk.BatchEntityCount is roughly batchInChunk.Capacity divided by the
            // batchesInChunk parameter passed to ScheduleParallelBatched().
            NativeArray<Translation> positions = batchInChunk.GetNativeArray<Translation>(PositionTypeAccessor);
            NativeArray<Target> targets = batchInChunk.GetNativeArray<Target>(TargetTypeAccessor);

            for (int i = 0; i < positions.Length; i++)
            {
                Entity targetEntity = targets[i].entity;
                float3 targetPosition = EntityPositions[targetEntity].Position;
                float3 chaserPosition = positions[i].Value;

                float3 displacement = (targetPosition - chaserPosition);
                positions[i] = new Translation { Value = chaserPosition + displacement * deltaTime };
            }
        }
    }

    protected override void OnCreate()
    {
        query = this.GetEntityQuery(typeof(Translation), ComponentType.ReadOnly<Target>());
    }

    protected override void OnUpdate()
    {
        var job = new BatchedChaserSystemJob();
        job.PositionTypeAccessor = this.GetArchetypeChunkComponentType<Translation>(false);
        job.TargetTypeAccessor = this.GetArchetypeChunkComponentType<Target>(true);
        job.EntityPositions = this.GetComponentDataFromEntity<LocalToWorld>(true);
        job.deltaTime = this.Time.DeltaTime;

        int batchesPerChunk = 4; // Partition each chunk into this many batches. Each batch is processed concurrently.
        this.Dependency = job.ScheduleParallelBatched(query, batchesPerChunk, this.Dependency);
    }
}
Methods
Execute(ArchetypeChunk, Int32)
Implement the Execute function to perform a unit of work on an ArchetypeChunk representing a contiguous batch of entities within a chunk.
Declaration
void Execute(ArchetypeChunk batchInChunk, int batchIndex)
Parameters
Type | Name | Description |
---|---|---|
ArchetypeChunk | batchInChunk | An object providing access to a batch of entities within a chunk. |
Int32 | batchIndex | The index of the current batch within the list of all batches in all chunks found by the job's EntityQuery. If the batch count is one, this list contains one entry for each selected chunk; if the batch count is two, the list contains two entries per chunk; and so on. Note that batches are not processed in index order, except by chance. |
Remarks
The chunks selected by the EntityQuery used to schedule the job are the input to your Execute function. If you use ScheduleParallelBatched<T>(T, EntityQuery, Int32, JobHandle) to schedule the job, the entities in each matching chunk are partitioned into contiguous batches based on the batchesInChunk parameter, and the Execute function is called once for each batch. When you use one of the other scheduling or run methods, the Execute function is called once per matching chunk (in other words, the batch count is one).
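A minimal Execute skeleton, assuming a hypothetical job struct named TranslateBatchJob that reads a Translation component (the struct and field names are illustrative, not part of the API):

```csharp
[BurstCompile]
struct TranslateBatchJob : IJobEntityBatch // hypothetical example job
{
    [ReadOnly]
    public ArchetypeChunkComponentType<Translation> TranslationTypeAccessor;

    public void Execute(ArchetypeChunk batchInChunk, int batchIndex)
    {
        // The array covers only the entities in this batch, which may be a
        // subset of the chunk when the job was scheduled with a batch count
        // greater than one.
        NativeArray<Translation> translations =
            batchInChunk.GetNativeArray(TranslationTypeAccessor);

        for (int i = 0; i < translations.Length; i++)
        {
            // i indexes entities within the batch; batchIndex identifies this
            // batch across all chunks matched by the query. Do not assume
            // batches are processed in batchIndex order.
            var translation = translations[i];
            // ... per-entity work ...
        }
    }
}
```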