Identify and Combine Time-Related Events

The Combine ESP engine compares two tracks to identify combinations of events. The engine produces an output track that contains exactly one record for every record in the first input track. Each of the output records contains all related records from the second input track. The records from the first input track are output even if there are no related events in the second track.

One use case for this engine is combining records from analysis engines before running a transformation task. For example, face detection produces a record for each detected face. To blur several faces that appear simultaneously, you could combine the relevant records from face detection with each ingested image before running the blur transformation task.

To identify and combine time-related events in two tracks

  1. Create a new configuration or open an existing configuration to send to Media Server with the process action. Alternatively, you can modify the Media Server configuration file (mediaserver.cfg).

  2. In the [EventProcessing] section, add a new task by setting the EventProcessingEngineN parameter. You can give the task any name, for example:

    [EventProcessing]
    EventProcessingEngine0=Combine
  3. Create a new configuration section for the task, and set the following parameters:

    Type
        The ESP engine to use. Set this parameter to combine.
    Input0
        The first input track. This track must be an output track produced by another task.
    Input1
        The second input track. This track must be an output track produced by another task.
    MaxTimeInterval
        The maximum difference in time (in milliseconds) between a record in the first track and a record in the second, for the records to be considered related. If you are processing images or documents, this parameter is ignored.
    MinTimeInterval
        (Optional) The minimum difference in time (in milliseconds) between a record in the first track and a record in the second, for the records to be considered related. The default value is the negative of the MaxTimeInterval value, which means that the event in the second track can occur before the event in the first track (by up to the specified number of milliseconds). If you are processing images or documents, this parameter is ignored.

    For more details about these parameters, including the values that they accept, refer to the Media Server Reference.
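
    For example, the following section configures a combine task for video that treats records as related if they occur within one second of each other. This is a sketch only: the section name CombineEvents must match the name that you set in the EventProcessingEngineN parameter, and the track names TaskA.Result and TaskB.Result are placeholders for the output tracks produced by your own tasks.

    [CombineEvents]
    Type=combine
    Input0=TaskA.Result
    Input1=TaskB.Result
    MaxTimeInterval=1000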

  4. (Optional) To add custom logic that discards pairs of records unless they meet additional conditions, set the LuaScript parameter so that Media Server runs a Lua script to filter the results. For information about writing the script, see Write a Lua Script for an ESP Engine.

    LuaScript
        The path and file name of a Lua script to run.
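
    For example, the following script is a minimal sketch of a filter that keeps a pair of records only if both records report a confidence of at least 50. The function name pred and the record field names are assumptions used for illustration; refer to Write a Lua Script for an ESP Engine for the exact interface and record structure that Media Server expects.

    -- Return true to keep a pair of records, or false to discard it.
    -- rec1 is the record from Input0 and rec2 is the record from Input1.
    function pred(rec1, rec2)
        -- The field name "confidence" is illustrative; use the fields that
        -- the records in your input tracks actually contain.
        local c1 = rec1.confidence or 0
        local c2 = rec2.confidence or 0
        return c1 >= 50 and c2 >= 50
    end
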
  5. Save and close the configuration file. If you modified the Media Server configuration file, you must restart Media Server for your changes to take effect.

Example

The following example runs face detection on an image file and blurs all of the faces that appear in the image. The combine task combines the regions identified by face detection with the original image record produced by the ingest engine.

[Ingest]
IngestEngine=Image

[Image]
Type=image

[Analysis]
AnalysisEngine0=FaceDetect

[FaceDetect]
Type=FaceDetect
FaceDirection=any
Orientation=any

[EventProcessing]
EventProcessingEngine0=Combine

[Combine]
Type=combine
Input0=Image_1
Input1=FaceDetect.Result

[Transform]
TransformEngine0=Blur

[Blur]
Type=Blur
Input=Combine.Output

[Encoding]
EncodingEngine0=ToDisk

[ToDisk]
Type=ImageEncoder
ImageInput=Blur.Output
OutputPath=./_outputEncode/%token%.jpg
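
To run this configuration, send a process action to Media Server. The following example is a sketch that assumes the configuration is saved as blurfaces.cfg in the Media Server configurations folder, Media Server is listening on its default ACI port (14000), and image.jpg is the image to process:

http://localhost:14000/action=Process&Source=./image.jpg&ConfigName=blurfaces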
