Introduction

Mirasys VCA (Video Content Analytics) comprises a set of real-time video analytics solutions that utilize advanced image processing algorithms to turn video into actionable intelligence. At the core of the product is an advanced object recognition and tracking engine that continually tracks moving and stationary targets. The tracking engine features built-in robustness to environmental nuisance conditions such as changing illumination, moving foliage, rippling water, etc.

Mirasys VCA is a generic name for a suite of video analytics add-on product options that include functionality such as:

·       Motion object tracking: Motion-data based object highlighting and tracking, auto-zoom functionality. The motion-data is produced by server-based, hermeneutic motion detection.

·       Tripwire counting: In addition to motion object tracking functionality, line counting for over-head installed cameras, and Spotter client-based counter visualization.

·       Object behaviour/attributes detection: In addition to the above-mentioned functionality, continuously tracks and classifies moving and stationary targets and features a full suite of rule-based filters, including: enter, exit, appear, disappear, stopped objects, directionality constraints, object counting, loitering, object type and object speed. Multiple filters and rules are supported on any combination of multiple overlapping detection zones. Also includes an advanced people tracking engine optimized for tracking people in cluttered indoor scenes such as retail scenarios, with specific high-accuracy counting functions optimized for use in busy scenes.

·       Related analytics options: Available as separate applications, products or through project-based integrations:

·       Camera-based (built-in, edge) analytics support for selected camera manufacturers and their functionality through manufacturer-specific integration connectors.

·       Audio analytics technologies, which refers to software for extraction of information and meaning from audio signals, such as detecting sounds of breaking glass, etc.

·       Facial recognition technologies, which refers to software or camera feature for automatically identifying or verifying the identity, age, gender, etc., of a person from video footage.

·       Number plate recognition technologies (ANPR/LPR), which refers to software or camera features for automatically identifying vehicle or container numbers.

This document provides the complete documentation for the Mirasys VCA configuration.

Getting Started Process

This user guide documents each topic in detail. However, to get started quickly, the essential topics are listed below. The following steps should be executed for each server:

·       Decide upon the VCA functionality that meets your requirements. For guidance, consult your Mirasys representative or check the Mirasys VCA training.

·       Acquire and install a Mirasys VMS system and the related software license key with other required features enabled. (See the “Mirasys VMS Installation Guide” and the “Mirasys VMS Administration Guide” for details.)

·       Add and configure the video cameras you intend to use for VCA and enable the VCA capability in the camera settings. (See the “Mirasys VMS Administrator Guide” for details regarding enabling hermeneutic motion detection.)

·       Export the VCA Core HW GUID, obtain the VCA activation license code from Mirasys, and activate Mirasys VCA with these licenses.

·       Calibrate each camera in VCA settings if object classification is required.

·       Configure the detection zone and rules for each camera.

·       If required, configure alarms based on the VCA events. (See section 6, Using VCA to Trigger Alarms.)

·       Verify VCA functionality visualization using the Spotter for Windows application. (See section “Mirasys VCA Visualization”.)

Activating Mirasys VCA

  1. Open System Manager, export the VCA Core HW GUID from your target server and send it to Mirasys to receive the VCA license.

  2. Add and configure the cameras you intend to use for VCA and enable the VCA capability in the camera settings.

  3. Navigate to VCA settings -> Main menu -> Settings -> Licenses.

  4. After you have received the VCA license, paste it into the Activation code field and click Add New License.

  5. After the license has been added, you should see something similar to this.

  6. On the View Channels page, click a thumbnail to view the channel and configure the VCA core related settings.

  7. After clicking on a channel, a full view of the channel's video stream is displayed along with any configured zones and rules, and the channel settings menu opens.

  8. Proceed with the VCA configuration as described in the following sections.

Zones

Zones are the detection areas on which VCAcore operates. To detect a specific behaviour, a zone must be configured to specify the area where a rule applies.

Adding a Zone

Zones can be added in multiple ways:

Double-click anywhere on the video display.

Click the Create Zone button in the zone settings menu.

Right-click or tap-hold to display the context menu and select the add zone icon

The Context Menu

Right-clicking or tap-holding (on mobile devices) displays a context menu which contains commands specific to the current context.

The possible actions from the context menu are:

Adds a new zone.

Deletes an existing zone.

Adds a node to a zone.

Deletes an existing node from a zone.

Positioning Zones

To change the position of a zone, click and drag the zone to a new position. To change the shape of a zone, drag the nodes to create the required shape. New nodes can be added by double-clicking on the edge of the zone or clicking the add node icon  from the context menu.

Zone Specific Settings

The zone configuration menu contains a range of zone-specific configuration parameters:

Name: The name of the zone, which appears in event notifications.

Type: The type of the zone. Can be one of:

Detection: A zone which detects tracked objects and to which rules can be applied.

Non-detection: A zone which specifies an area that should be excluded from VCAcore analysis. Objects are not detected in non-detection zones. Useful for excluding areas of potential nuisance alarms from a scene (e.g. waving trees, flashing lights, etc).

Shape: The shape of the zone. Can be one of:

Polygon: A polygonal detection area with at least three nodes. Rules apply to the whole area.

Line: A single- or multi-segment line with at least two nodes. Rules apply to the length of the line.

Colour: The colour of the zone.

Configure Rules: A shortcut button to navigate directly to the rules configuration page

Deleting a Zone

Zones can be deleted in the following ways:

Select the zone and click the Delete Zone button from the zone settings menu.

Select the zone, display the context menu and select the delete zone icon

Rules

VCAcore's rules are used to detect specific events in a video stream. There are three rule types which can be utilized to detect events and trigger actions:

Basic Inputs / Rule: An algorithm that will trigger when a particular behavior or event has been observed e.g. Presence. Basic inputs can be used to trigger an action.

Filters: A filter that will trigger if the object which has triggered the input rule / logical rule meets the filter requirements, e.g. is moving at a specific speed. Filters can be used to trigger an action.

Conditional Rule: A logical link between one or more inputs to allow the detection of more complex behaviors e.g. AND. Conditional rules can be used to trigger an action.

Within VCAcore, rule configurations can be as simple as individual basic inputs attached to a zone used to trigger an action. Alternatively, rules can be combined into more complex logical rule configurations using conditional rules and filters. The overarching goal of the rules in VCAcore is to help eliminate erroneous alerts being generated by providing functions to prevent unwanted behavior from triggering an action.

More detail on the differences between these concepts is outlined below:

Basic Inputs

A basic input or rule can only be used to trigger an action or as an input to another rule type. Basic inputs always require a zone, and potentially some additional parameters. A basic input can be used on its own to trigger an action, although they are often used as an input to other filters or conditional rules.

The complete list of basic inputs is:

·       Abandoned

·       Appear

·       Deep Learning Presence

·       Direction

·       Disappear

·       Dwell

·       Enter

·       Exit

·       Presence

·       Stopped

·       Tailgating

·       Counting Line

Filters

A filter cannot trigger an action on its own as it requires another basic input, filter or conditional rule to trigger. An example of this is the Object Filter.

The complete list of filters is:

·       Speed Filter

·       Object Filter

·       Color Filter

·       Deep Learning Filter

Due to the nature of the deep learning algorithm which powers the Deep Learning Filter, it cannot be used as an input to another filter or logical rule.

Conditional Rules

A conditional input, like a filter, is one that cannot trigger an action on its own. It requires the input of another basic input, conditional rule or filter to be meaningful. An example of this is the AND rule. The AND rule requires two inputs to compare in order to function.

The complete list of conditional rules is:

And

Continuously

Counter

Or

Previous

General Concepts

Object Display

As rules are configured, they are applied to the channel in real time allowing feedback on how they work. Objects which have triggered a rule are annotated with a bounding box and a trail. Objects can be rendered in two states:

·       Non-alarmed: Default rendered in yellow. A detected object which does not meet any criteria to trigger a rule and raise an event.

·       Alarmed: Default rendered in red. A detected object which has triggered one or more rules. Causes an event to be raised.

As seen below, when an event is raised, the default settings render details of the event in the lower half of the video stream. Object class annotations in this example are generated through calibration.

Object Trails

The trail shows the history of where the object has been. Depending on the calibration the trail can be drawn from the centroid or the mid-bottom point of the object. (See Advanced Settings for more information).

Trail Importance

The trail is important for determining how a rule is triggered. The intersection of the trail point with a zone or line determines whether a rule is triggered or not. The following image illustrates this point: the blue vehicle's trail intersects with the detection zone and is rendered in red. Conversely, while the white vehicle intersects the detection zone, its trail does not (yet) intersect and hence it has not triggered the rule and is rendered in yellow.

Rules Configuration

Rules are configured on a per-channel basis by opening the rules menu when viewing the channel. Configuration is possible in two forms: the docked mode, in which both the rules and the video stream are visible, or the expanded view, in which a graph representation is provided to visualize the way the rules are connected.

The rules page opens in the 'docked' mode, alongside the live video stream.

The user may click on the expand button (next to the Add Rule button) to switch to the expanded view. Please note that the rules graph is only visible in the expanded view.

In the expanded view, the user can add rules, and use the Rules Editor to connect the rules to one another.

Adding Rules

The first step in defining a rule configuration is to add the basic inputs, configure the respective parameters and link them to a zone. Click the Add Rule button and select the desired rule from the drop-down menu.

 

To delete a rule click the corresponding delete icon.

Please note rules of any type cannot be deleted if they serve as an input to another rule. In this case the other rule must be deleted first.

Basic Inputs

Below are the currently supported basic inputs, along with a detailed description of each.

Presence

A rule which fires an event when an object is first detected in a particular zone.

Note: The Presence rule encapsulates a variety of different behaviors; for example, the Presence rule will trigger in the same circumstances as an Enter or Appear rule. The choice of which rule is most appropriate will be dependent on the scenario.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Presence #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Zone: The zone this rule is associated with. Default: None


Deep Learning Presence

This filter requires an additional VCA license from Mirasys.
A rule which fires an event when an object is first detected in a particular zone and is classified as a certain class by the deep learning filter model.

Classification settings are configured in the Deep Learning page. See Deep Learning Filter for an in depth description on how the filter works.

Note: The Deep Learning Presence rule encapsulates a variety of different behaviors; for example, the rule will trigger in the same circumstances as an Enter or Appear rule. The choice of which rule is most appropriate will be dependent on the scenario. Additionally, the deep learning presence rule cannot be used as an input to any other rule type. As such it must work in isolation.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Deep Learning Presence #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Zone: The zone this rule is associated with. Default: None

Direction

The direction rule detects objects moving in a specific direction. Configure the direction and acceptance angle by moving the arrows on the direction control widget. The primary direction is indicated by the large central arrow. The acceptance angle is the angle between the two smaller arrows.

Objects that travel in the configured direction (within the limits of the acceptance angle), through a zone or over a line, trigger the rule and raise an event.

The following image illustrates how the white car moving in the configured direction triggers the rule whereas the other objects do not.
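For illustration, the acceptance test amounts to a simple angular comparison between an object's heading and the configured primary direction. The sketch below is not VCAcore code; the function name and the angle handling are assumptions based on the Angle and Acceptance properties described below.

```python
import math

def triggers_direction(object_heading_deg: float,
                       primary_angle_deg: float,
                       acceptance_deg: float) -> bool:
    """Illustrative check: does the object's heading fall within the
    acceptance angle either side of the primary direction?
    Angles are in degrees; 0 references 'up', as in the Angle property."""
    # Smallest signed difference between the two angles, in [-180, 180).
    diff = (object_heading_deg - primary_angle_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= acceptance_deg

# Primary direction 90 degrees, acceptance 30 degrees: headings 60-120 trigger.
print(triggers_direction(100, 90, 30))  # True
print(triggers_direction(200, 90, 30))  # False
```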

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Direction #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Zone: The zone this rule is associated with. Default: None

·       Angle: Primary direction angle, 0 - 359. 0 references up. Default: 0

·       Acceptance: Allowed variance each side of primary direction that will still trigger rule. Default: 0


Dwell

A dwell rule triggers when an object has remained in a zone for a specified amount of time. The interval parameter defines the time the object has to remain in the zone before an event is triggered.

The following image illustrates how the person detected in the zone is highlighted red as they have dwelt in the zone for the desired period of time. The two vehicles have not been present in the zone for long enough yet to trigger the dwell rule.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Dwell #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Zone: The zone this rule is associated with. Default: None

·       Interval: Period of time (in seconds). Default: 1

Stopped

The stopped rule detects objects which are stationary inside a zone for longer than the specified amount of time. The stopped rule requires a zone to be selected before the amount of time can be configured.

Note: The stopped rule does not detect abandoned objects. It only detects objects which have moved at some point and then become stationary.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Stopped #"

·       Zone: The zone this rule is associated with. Default: None

·       Time: Period of time before a stopped object triggers the rule. Default: 0

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active


Enter and Exit

The enter rule detects when objects enter a zone. In other words, when objects cross from the outside of a zone to the inside of a zone.

Conversely, the exit rule detects when an object leaves a zone: when it crosses the border of a zone from the inside to the outside.

Note: Enter and exit rules differ from appear and disappear rules, as follows:

·       Whereas the enter rule detects already-tracked objects crossing the zone border from outside to inside, the appear rule detects objects which start being tracked within a zone (e.g. appear in the scene through a door).

·       Whereas the exit rule detects already-tracked objects crossing the zone border from inside to outside, the disappear rule detects objects which stop being tracked within the zone (e.g. leave the scene through a door).

Graph View

Form View

Configuration Enter

·       Name: A user-specified name for this rule. Default: "Enter #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Zone: The zone this rule is associated with. Default: None

Configuration Exit

·       Name: A user-specified name for this rule. Default: "Exit #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Zone: The zone this rule is associated with. Default: None

Appear and Disappear

The appear rule detects objects that start being tracked within a zone, e.g. a person who appears in the scene from a doorway.

Conversely, the disappear rule detects objects that stop being tracked within a zone, e.g. a person who exits the scene through a doorway.

Note: The appear and disappear rules differ from the enter and exit rules as detailed in the enter and exit rule descriptions.

Graph View

Form View

Configuration Appear

·       Name: A user-specified name for this rule. Default: "Appear #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Zone: The zone this rule is associated with. Default: None

Configuration Disappear

·       Name: A user-specified name for this rule. Default: "Disappear #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Zone: The zone this rule is associated with. Default: None

Abandoned and Removed Object

The abandoned and removed object rule triggers when an object has been either left within a defined zone, e.g. a person leaving a bag on a train platform, or when an object is removed from a defined zone. The abandoned rule has a duration property which defines the amount of time an object must have been abandoned for or removed for, to trigger the rule.

Below is a sample scenario where a bag is left in a defined zone resulting in the rule triggering.

Below is a similar example scenario where the bag is removed from the defined zone resulting in the rule triggering.

 

Note: The algorithm used for abandoned and removed object detection is the same in each case, and therefore cannot differentiate between objects which have been abandoned or removed. This arises because the algorithm only analyses how blocks of pixels change with respect to a background model which is constructed over time.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Abandoned #"

·       Zone: The zone this rule is associated with. Default: None

·       Duration: Period of time an object must have been abandoned or removed before the rule triggers. Default: 0

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

Tailgating

The tailgating rule detects objects which cross through a zone or over a line in quick succession.

In this example, object 1 is about to cross a detection line. Another object (object 2) is following closely behind. The tailgating detection threshold is set to 5 seconds. That is, any object crossing the line within 5 seconds of an object having already crossed the line will trigger the object tailgating rule.

Object 2 crosses the line within 5 seconds of object 1. This triggers the tailgating filter and raises an event.
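Conceptually, the rule remembers when the line or zone was last crossed and compares each new crossing against the configured duration. The sketch below is illustrative only; the function and the timestamps are hypothetical, not VCAcore code.

```python
def tailgating_events(crossing_times_s, duration_s=5.0):
    """Illustrative check: flag any crossing that occurs within
    duration_s seconds of the previous crossing of the same line."""
    flagged = []
    previous = None
    for t in sorted(crossing_times_s):
        if previous is not None and (t - previous) <= duration_s:
            flagged.append(t)  # this crossing tailgates the previous one
        previous = t
    return flagged

# Object 1 crosses at t=10 s, object 2 at t=13 s: within 5 s, so it triggers.
print(tailgating_events([10.0, 13.0, 30.0]))  # [13.0]
```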

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Tailgating #"

·       Zone: The zone this rule is associated with. Default: None

·       Duration: Maximum amount of time between first and second object entering a zone to trigger the rule. Default: 0

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active


Counting Line

A counting line is a detection filter optimized for directional object counting (e.g. people or vehicles) in busier detection scenarios. Examples of such applications may include:

People counting with overhead cameras in a retail environment.

Vehicle counting with overhead cameras on public highways.

In some scenes, such as entrances with cameras installed overhead, the counting line will typically generate a more accurate count than the aforementioned counters connected to a presence rule.

An event is generated every time an object crosses the line in the configured direction. If multiple objects cross the line together, multiple corresponding events are generated. These events can be directly used to trigger actions if the Can Trigger Actions property is checked.

Counting lines are attached to zones configured with a Line shape. See Zones for more information. If a counting line is configured with a zone not defined with a Line shape, the zone property will be automatically changed (it will not be possible to change the zone shape back until the counting line stops referencing the zone in question).

Counting lines have a specified direction indicated by the arrow in the UI (direction A or B); the direction of this arrow is governed by the configured zone. Each instance of the rule counts in a single direction. To count in both directions, a second counting line rule must be added to the same zone with the opposite direction selected. An example rule graph of a two-way counting line configured with a counter is provided to illustrate this below.

NOTE: The maximum number of counting line filters that can be applied per video channel is 5.

Calibrating the Counting Line

In order to generate accurate counts, the counting line requires calibration. Unlike the object tracking engine, this cannot be performed at a general level for the whole scene using the 3D Calibration tool. This is because the counting line is not always placed on the ground plane; it may be placed at any orientation at any location in the scene. For example, a counting line could be configured vertically with a side-on camera view.

Instead of the 3D calibration tool, the counting line has its own calibration setting. Two bars equidistant from the center of the line represent the width of the expected object. This allows the counting line to reject noise and also count multiple objects.

To calibrate the counting line:

·       Select the counting line rule.

·       Check the Enable width calibration option.

·       Drag the calibration markers to adjust the distance between them until it is approximately the size of the objects to be counted. Alternatively, move the Width slider to achieve the same result.

·       The calibration width is displayed within the counting line rule and can be edited directly to change the calibration width.

·       The small markers on either side of the big markers indicate the minimum and maximum width which is counted as a single object.

NOTE: if the Width slider is set to zero then the Enable width calibration checkbox is automatically disabled.

Counting Line Calibration Feedback

To enable the user to more accurately configure the calibration for the counting line, the widths of detected objects are displayed as an overlay next to the counting line when objects pass over it. By default, this display option is enabled. However, if it does not appear, ensure that the option is enabled on the Burnt-in Annotation settings.

The calibration feedback is rendered as black and white lines on either side of the counting line on the Zones configurations page. Each line represents an object detected by the counting algorithm. The width of the line shows the width of the object detected by the line. The last few detections are displayed for each direction with the latest one appearing closest to the counting line.

Each detection is counted as a number of objects based on the current width calibration. This is displayed as follows:

·       Black line: Event not counted

·       Solid white line: Event counted as one object

·       Broken white line: Event counted as multiple objects indicated by the number of line segments.

Using the feedback from the calibration feedback annotation, the width calibration can be fine tuned to count the correct sized objects and filter out spurious detections.
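As a rough illustration only, the feedback above can be thought of as mapping each detection's measured width to an object count using the calibrated width. The thresholds in the sketch below (half and one-and-a-half times the calibrated width) are assumptions chosen for illustration, not VCAcore's actual values.

```python
def objects_for_detection(detection_width, calibrated_width,
                          min_fraction=0.5, max_fraction=1.5):
    """Rough illustration of width-based counting: detections narrower than
    the minimum marker are rejected (black line), detections around the
    calibrated width count as one object (solid white line), and wider
    detections count as multiple objects (broken white line).
    The fractions are illustrative assumptions only."""
    if detection_width < min_fraction * calibrated_width:
        return 0  # rejected as noise
    if detection_width <= max_fraction * calibrated_width:
        return 1  # counted as a single object
    return round(detection_width / calibrated_width)  # split into several objects

print(objects_for_detection(30, 100))   # 0 - too narrow to count
print(objects_for_detection(110, 100))  # 1 - a single object
print(objects_for_detection(290, 100))  # 3 - counted as multiple objects
```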

Shadow Filter

The counting line features a shadow filter which is designed to remove the effects of object shadows affecting the counting algorithm. Shadows can cause inaccurate counting results by making an object appear larger than its true size or by joining two or more objects together. If shadows are causing inaccurate counting, the shadow filter should be enabled by selecting the Shadow Filter check box for the line. It is recommended that the shadow filter only be enabled when shadows are present because the algorithm can mistake certain parts of an object for shadows and this may lead to worse counting results. This is especially the case for objects that have little contrast compared to the background (e.g. people wearing black coats against a black carpet).

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: Line_Counter

·       Zone: The zone this rule is associated with. Default: None

·       Direction: Enable counting in the 'A' or 'B' direction (one direction per counting line). Default: None

·       Enable Width Calibration: Width calibration to allow more accurate counting. Default: None

·       Width: Width calibration value. Default: 0

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active


Typical Logical Rule Combination

The below example has two line counters, **Line_Counter A** and **Line_Counter B** attached to the zone **Center Line** each with differing directions selected. **Line_Counter A** is configured to increment the counter, whilst **Line_Counter B** is configured to decrement the counter value.

Only the counter rule **Counter** is set to **Can Trigger Actions**, meaning only this component of the logical rule will be available as a source for actions. In this case an action using this rule as a source will trigger every time the counter changes.

Filters

Below is a list of the currently supported filters, along with a detailed description of each.

When filters are used to trigger an action the rule type property is propagated from the filter input. For example, if the input to the speed filter is a presence rule, then actions generated as a result of the speed filter will have a presence event type.

Speed Filter

The speed filter provides a way to check whether an object which has triggered an input is moving within the range of speeds defined by a lower and upper boundary.

Note: The channel must be calibrated in order for the speed filter to be available.

Commonly this rule is combined with a presence rule; an example rule graph is provided below to illustrate this. The following image illustrates how such a rule combination triggers on the car moving at 52 km/h, while the person moving at 12 km/h falls outside the configured range (25-100 km/h) and thus does not trigger the rule.
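Conceptually, the combination behaves like the sketch below: the presence rule supplies the triggering objects, and the speed filter passes on only those whose estimated speed falls within the configured range. The function and data structures are illustrative, not VCAcore APIs.

```python
def speed_filter(objects, min_kmh=25.0, max_kmh=100.0):
    """Pass through only objects whose estimated speed lies within the
    configured range; everything else is filtered out."""
    return [o for o in objects if min_kmh <= o["speed_kmh"] <= max_kmh]

# Objects that triggered the presence rule on the zone (illustrative data).
triggered = [
    {"id": "car",    "speed_kmh": 52.0},
    {"id": "person", "speed_kmh": 12.0},
]
print(speed_filter(triggered))  # only the car (52 km/h) passes the filter
```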

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Speed #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Input: The input rule. Default: None

·       Min Speed: The minimum speed (km/h) an object must be going to trigger the rule. Default: 0

·       Max Speed: The maximum speed (km/h) an object can be going to trigger the rule. Default: 0

Typical Logical Rule Combination

The below example logical rule checks if an object triggering the presence rule Presence Rule attached to zone Centre, is also travelling between 25 and 100 km/h as specified by the speed rule Speed Filter 25-100 km/h.

Only the Speed Filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. Additionally, any action generated by the speed filter will have the event type Presence.

Object Filter

The object classification filter provides the ability to filter out objects, which trigger a rule, if they are not classified as a certain class (e.g. person, vehicle).

The object classification filter must be combined with another rule(s) to prevent unwanted objects from triggering an alert, an example rule graph is provided to illustrate this below.

The previous image illustrates how an object classification filter configured with the Vehicle class includes only Vehicle objects. The person in the zone is filtered out since the Person class is not selected in the filter list.

Note: the channel must be calibrated for the object classification filter to be available.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Object Filter #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Input: The input rule. Default: None

·       Classes: The object classes allowed to trigger an alert. Default: None

Typical Logical Rule Combination

The below example logical rule checks if the object triggering the presence rule Presence Rule attached to zone Centre, is also classified as a Vehicle as specified by the Object Filter Vehicle Filter.

Only the Object filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. Additionally, any action generated by the object filter will have the event type Presence.

Colour Filter

The colour filter utilizes the Colour Signature algorithm and provides the ability to filter out objects based on whether that object contains a certain colour component.

The colour signature algorithm is responsible for grouping every pixel from a detected object into one of 10 colour bins. The colour filter allows you to select one or more of these colour bins and will trigger if the subject object is made up of one or more of those selected colours.

The below image shows an example tracked object with the colour signature annotations enabled. Here the top four colours which make up more than 5% of the object are represented by the colour swatch attached to the object. In this case, a person wearing high-visibility safety clothing is being tracked in the scene. Here the colour filter is set to trigger on Yellow, detecting the person but ignoring the shadow.
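The behaviour can be sketched as follows: the colour signature reduces an object to its dominant colour bins (those covering more than 5% of its pixels, up to four), and the filter triggers if any selected colour appears among them. The function names and pixel fractions below are illustrative assumptions, not VCAcore code.

```python
def colour_signature(bin_fractions, min_fraction=0.05, top_n=4):
    """Return the top-N colour bins that each cover more than min_fraction
    of the object's pixels, as described above."""
    significant = {c: f for c, f in bin_fractions.items() if f > min_fraction}
    return sorted(significant, key=significant.get, reverse=True)[:top_n]

def colour_filter_triggers(bin_fractions, selected_colours):
    """Trigger if any selected colour is part of the object's signature."""
    return any(c in selected_colours for c in colour_signature(bin_fractions))

# Person in high-visibility clothing: mostly yellow and grey pixels.
person = {"yellow": 0.40, "grey": 0.35, "black": 0.15, "blue": 0.06, "red": 0.04}
print(colour_filter_triggers(person, {"yellow"}))  # True
```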

Typically, the colour classification filter would be combined with another rule(s) to prevent unwanted objects from triggering an alert, an example rule graph is provided to illustrate this below.

The previous image illustrates how a colour filter configured with the Yellow colour, combined with a presence rule, triggers only on objects containing yellow as a significant colour; other objects in the zone are filtered out.

Note: the channel must have the Colour Signature enabled for the colour filter to work.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Object Filter #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Input: The input rule. Default: None

·       Colours: The colours allowed to trigger an alert. Default: All Unchecked


Typical Logical Rule Combination

The below example logical rule checks if the object triggering the presence rule Train line attached to zone Centre, also contains the colour Yellow as one of the top four colours by percentage.

Only the Colour filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. Additionally, any action generated by the colour filter will have the event type Presence.

Deep Learning Filter

This filter requires an additional VCA license from Mirasys.
The deep learning filter provides the ability to filter out objects, which trigger a rule, if they are not classified as a certain class by the deep learning model.

The deep learning filter settings are configured in the Deep Learning page. See Deep Learning Filter for an in depth description on how the filter works.

Typically the deep learning filter would be combined with another rule(s) to prevent unwanted objects from triggering an alert, an example rule graph is provided to illustrate this below. Please note that the deep learning filter cannot be used as an input to any other rule type. As such it must be the last rule in a graph.

The previous image illustrates how the deep learning filter configured with just vehicle class (Confidence Threshold 0.5), only triggers on the vehicle object. The person in the zone is filtered out since the person class Allowed setting is not enabled in the Deep Learning configuration page.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "DL Filter #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Input: The input rule. Default: None


Typical Logical Rule Combination

The below example logical rule checks if the object triggering the presence rule Presence Rule attached to zone Centre, is one of the classes of interest defined in the Deep Learning settings page (see above settings page image).

Only the deep learning filter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. Additionally, any action generated by the deep learning filter will have the event type Presence.

Counter

These counters are only visible in the VCA configuration. To use counters in Spotter, please refer to the Spotter manual.

Counters can be configured to count the number of times a rule is triggered, for example the number of people crossing a line. The counter rule is designed to be utilized in two ways:

Increment / Decrement: whereby a counter is incremented by the attached rule(s) (+1 for each rule trigger) and decremented by another attached rule(s) (-1 for each rule trigger).

Occupancy: whereby the counter reflects the number of objects that are currently triggering the attached rule(s).

More than one rule can be assigned to any of a counter's three inputs. This allows, for example, the occupancy of two presence rules to be reflected in a single counter, or more than one entrance / exit gate to be reflected in a single counter; an example rule graph is provided below to illustrate this.

Broadly speaking, a single counter should not be used for both purposes (occupancy and increment / decrement).

Note: events created by a counter will not trigger the Deep-Learning Filter, even if enabled on the channel.

Positioning Counters

When added, a counter object is visualised on the video stream as seen below. The counter can be repositioned by grabbing the 'handle' beneath the counter name and moving the counter to the desired location.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Counter #"

·       Increment: The rule which, when triggered, will add one to the counter. Default: None

·       Decrement: The rule which, when triggered, will subtract one from the counter. Default: None

·       Occupancy: Sets counter to current number of the rule's active triggers*. Default: None

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Reset Counter: A button allowing the counter value to be reset to 0. Default: None

* E.g. if a presence rule is set as the occupancy target and two objects are currently triggering that presence rule, the counter will show the value of '2'.

Typical Logical Rule Combination

The below counter example increments a counter based on two enter rules, Enter Centre and Enter Top, attached to the zones Centre and Top respectively; this means that when either of these enter rules triggers, the counter will be incremented by 1. The counter also decrements based on the exit rule Exit, which will subtract 1 from the counter each time an object exits the zone Centre.

Only the counter rule Counter is set to Can Trigger Actions, meaning only this component of the logical rule will be available as a source for actions. In this case an action using this rule as a source will trigger every time the counter changes.
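The net effect of that configuration can be sketched as a simple running count: each trigger of an increment rule raises the value by one and each trigger of a decrement rule lowers it by one. The event stream and helper below are illustrative only, not VCAcore code.

```python
def run_counter(events, increment_rules, decrement_rules):
    """Replay a stream of rule triggers and return the counter value after
    each trigger, as a two-way counter would report it."""
    count, history = 0, []
    for rule in events:
        if rule in increment_rules:
            count += 1
        elif rule in decrement_rules:
            count -= 1
        history.append(count)
    return history

events = ["Enter Centre", "Enter Top", "Exit", "Enter Centre"]
print(run_counter(events, {"Enter Centre", "Enter Top"}, {"Exit"}))
# [1, 2, 1, 2] - an action could be triggered on each change of the counter
```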

Conditional Rule Types

Below is a list of the currently supported conditional rules, along with a detailed description of each.

And

A logical operator that combines two rules and only fires events if both inputs are true.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "And #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Input A: The first input. Default: None

·       Input B: The second input. Default: None

·       Per Target: Fire one event per tracked object. Default: Active

If we consider a scene with two presence rules, connected to two separate zones, connected by an AND rule, the table below explains the behavior of the Per Target property. Note that object here refers to a tracked object, as detected by the VCA tracking engine.

State                                       Per Target   Outcome
Object A in Input A, Object B in Input B    On           Two events generated, one for each object
Object A in Input A, Object B in Input B    Off          Only one event generated

Additionally, it is important to note that if the rule fires when Per Target is switched off, it will not fire again until it is 'reset', i.e. until the AND condition is no longer true.

Continuously

A logical operator that fires events when its input has occurred continuously for a user-specified time.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Continuously #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Input: The input rule. Default: None

·       Per Target: Fire one event per tracked object. See description below for more details. Default: Active

·       Interval: The time in milliseconds. Default: 1000 ms

Considering a scene with one zone, a presence rule associated with that zone, and a Continuously rule attached to that presence rule: when the Per Target property is on, the rule will generate an event for each tracked object that is continuously present in the zone. When it is off, only one event will be generated by the rule, even if there are multiple tracked objects within the zone. Additionally, when Per Target is off, the rule will only generate events when there is a change of state - i.e. the rule condition changes from true to false or vice versa. When Per Target is off, the state will change when:

·       Any number of objects enter the zone in question and remain in the zone

·       All objects leave the zone in question

 

Or

A logical operator that combines two rules and fires events if either input is true.

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Or #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Input A: The first input. Default: None

·       Input B: The second input. Default: None

·       Per Target: Fire one event per tracked object. Default: Active

If we consider a scene with two presence rules, connected to two separate zones, connected by an OR rule, the table below explains the behaviour of the Per Target property.

State                                        Per Target   Outcome
Object A in Input A, Object B in Input B     On           Two events generated, one for each object
No object in Input A, Object B in Input B    On           Only one event generated (for Object B)
Object A in Input A, No object in Input B    On           Only one event generated (for Object A)
Object A in Input A, Object B in Input B     Off          Only one event generated
No object in Input A, Object B in Input B    Off          Only one event generated
Object A in Input A, No object in Input B    Off          Only one event generated

Additionally, it is important to note that if the rule fires when Per Target is switched off, it will not fire again until it is 'reset', i.e. until the OR condition is no longer true.

Previous

A logical operator that triggers for input events which were active at some point in a past window of time. This window is defined as the period between the current time and the interval before the current time (specified by the Interval parameter value).

Graph View

Form View

Configuration

·       Name: A user-specified name for this rule. Default: "Previous #"

·       Can Trigger Actions: Specifies whether events generated by this rule trigger actions. Default: Active

·       Input: The input rule. Default: None

·       Per Target: Fire one event per tracked object. Default: Active

·       Interval: The time in milliseconds. Default: 1000 ms


Combined Rule Examples
Double-knock Rule

The 'double-knock' logical rule triggers when an object enters a zone, having previously entered another defined zone within a set period of time. The time interval on the 'Previous' rule in the graph decides how much time can elapse between the object entering the first and then the second zone. The graph for a double-knock logical rule is as follows:

The rule may be interpreted as follows: 'An object is in Zone 2, and was previously in Zone 1 in the last 1000 milliseconds'. This rule can be used as a robust way to detect entry into an area. Since the object has to enter two zones in a specific order, it has the ability to eliminate false positives that may arise from a simple Presence rule.
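In pseudo-code terms, the combination asks: is the object present in Zone 2 now, and was it present in Zone 1 at some point within the configured interval? The sketch below illustrates the timing check for a single tracked object and is not VCAcore code.

```python
def double_knock(zone1_times_s, zone2_time_s, interval_s=1.0):
    """Illustrative double-knock check for one tracked object: trigger if
    the object is in Zone 2 now and was present in Zone 1 at some point
    within the previous interval_s seconds."""
    return any(0.0 <= zone2_time_s - t <= interval_s for t in zone1_times_s)

# Object entered Zone 1 at t=4.2 s and Zone 2 at t=4.9 s: within 1 s, triggers.
print(double_knock([4.2], 4.9))  # True
print(double_knock([1.0], 4.9))  # False - the Zone 1 visit was too long ago
```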

Presence in A or B

This rule triggers when an object is present in either Zone A or Zone B. Its graph is as follows:

A typical use case for this rule is having multiple areas where access is prohibited, but the areas cannot be easily covered by a single zone. Two zones can be created, associated with two separate presence rules, and they can then be combined using an Or rule.

Usage notes

·       Use the expanded view to build the Logical Rules graph, checking the graph view for correctness.

·       Use the docked view to evaluate and tweak the graph once its overall structure is correct.

·       The Per Target property on a rule will affect all rules below it.

·       The Logical Rules Editor prevents the deletion of logical rules that have rules depending on them.

Calibration

Camera calibration is required in order for VCAcore to classify objects into different object classes. Once a channel has been calibrated, VCAcore can infer real-world object properties such as speed, height and area and classify objects accordingly.

Camera calibration is split into the following sub-topics:

·       Enabling Calibration

·       Calibration Controls

·       Calibrating a Channel

·       Advanced Calibration Parameters

Enabling Calibration

By default calibration is disabled. To enable calibration on a channel, check the Enable Calibration checkbox.

Calibration Controls

The calibration page contains a number of elements to assist with calibrating a channel as easily as possible. Each is described below.

3D Graphics Overlay

During the calibration process, the features in the video image need to be matched with a 3D graphics overlay. The 3D graphics overlay consists of a green grid that represents the ground plane. Placed on the ground plane are a number of 3D mimics (people-shaped figures) that represent the dimensions of a person with the current calibration parameters. The calibration mimics are used for verifying the size of a person in the scene and are 1.8 metres tall.

The mimics can be moved around the scene to line up with people (or objects of a known height comparable to a person).

Mouse Controls

The calibration parameters can be adjusted with the mouse as follows:

·       Click and drag the ground plane to change the camera tilt angle.

·       Use the mouse wheel to adjust the camera height.

·       Drag the slider to change the vertical field of view.

Note: The sliders in the control panel can also be used to adjust the camera tilt angle and height.

Control Panel Items

The control panel (shown on the right-hand side in the image above) contains the following controls:

·       Height: Adjusts the height of the camera

·       Tilt: Adjusts the tilt angle of the camera

·       VFOV: Adjusts the vertical field of view of the camera. Note: A correct value for the camera vertical field of view is important for accurate calibration and classification.

·       Horizon: Enables/disables the horizon display. Useful to line up against a horizon in a deep scene.

·       Grid: Enables/disables the ground plane grid display. The expand/collapse control (<) exposes additional settings to vary the colour, opacity and size of the ground plane grid.

·       Advanced: Exposes advanced settings for controlling the pan and roll of the camera.

·       Burnt-in Annotation: Exposes the Burnt-in Annotation controls for convenience.

Context Menu Items

Right-clicking the mouse (or tap-and-hold on a tablet) on the grid displays the context menu:

Performing the same action on a mimic displays the mimic context menu:

The possible actions from the context menu are:

Pause the video. Pausing the video can make it easier to align mimics up with objects in the scene.

Re-starts playing the video after it was previously paused.

Adds an extra mimic to the ground plane.

Removes the currently selected mimic from the ground plane.

Calibrating a Channel

Calibrating a channel is necessary in order to estimate object parameters such as height, area, speed and classification. If the height, tilt angle and vertical field of view corresponding to the installation are known, these can simply be entered as parameters in the appropriate fields in the control panel.

If, however, these parameters are not explicitly known, this section provides a step-by-step guide to calibrating a channel.

Step 1: Find People in the Scene

Find some people, or some people-sized objects in the scene. Try to find a person near the camera, and a person further away from the camera. It is useful to use the play/pause control to pause the video so that the mimics can be accurately placed. Place the mimics on top of or near the people:

Step 2: Enter the Camera Vertical Field of View

Determining the correct vertical field of view is important for an accurate calibration. The following table shows pre-calculated values for the vertical field of view (in degrees) for different sensor sizes and focal lengths.

                                  Focal Length (mm)
CCD Size (in)   CCD Height (mm)     1    2    3    4    5    6    7    8    9   10   15   20   30   40   50
1/6"                 1.73          82   47   32   24   20   16   14   12   11   10    7    -    -    -    -
1/4"                 2.40         100   62   44   33   27   23   19   17   15   14    9    7    -    -    -
1/3.6"               3.00         113   74   53   41   33   28   24   21   19   12   11    9    6    -    -
1/3.2"               3.42         119   81   59   46   38   32   27   24   21   16   13   10    7    -    -
1/3"                 3.60         122   84   62   48   40   33   29   25   23   20   14   10    7    5    -
1/2.7"               3.96         126   89   67   53   43   37   32   28   25   22   15   11    8    6    -
1/2"                 4.80         135  100   77   62   51   44   38   33   30   27   18   14    9    7    5
1/1.8"               5.32         139  106   83   67   56   48   42   37   33   30   20   15   10    8    6
2/3"                 6.60           -  118   95   79   67   58   50   45   40   37   25   19   13    9    8
1"                   9.60           -  135  116  100   88   77   69   62   56   51   35   27   18   14   11
4/3"                13.50           -    -  132  119  107   97   88   80   74   68   48   37   25   19   15

If the table does not contain the relevant parameters, the vertical FOV can be estimated by viewing the extremes of the image at the top and bottom. Note that without the correct vertical FOV, it may not be possible to get the mimics to match people at different positions in the scene.
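If the sensor height and lens focal length are known but the combination is not listed, the tabulated values can be closely approximated with the standard pinhole relation VFOV = 2 x atan(sensor height / (2 x focal length)). The helper below is a convenience sketch, not part of VCAcore.

```python
import math

def vertical_fov_deg(sensor_height_mm: float, focal_length_mm: float) -> float:
    """Vertical field of view from sensor height and focal length,
    using VFOV = 2 * atan(h / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_height_mm / (2.0 * focal_length_mm)))

# 1/3" sensor (3.60 mm high) with a 1 mm lens -> ~122 degrees, as in the table.
print(round(vertical_fov_deg(3.60, 1.0)))  # 122
print(round(vertical_fov_deg(4.80, 5.0)))  # 51 (1/2" sensor, 5 mm lens)
```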

Step 3: Enter the Camera Height

If the camera height is known, type it in directly. If the height is not known, estimate it as accurately as possible and type it in directly.

Step 4: Adjust the Tilt Angle and Camera Height

Adjust the camera tilt angle (and height if necessary) until both mimics are approximately the same size as a real person at that position in the scene. Click and drag the ground plane to change the tilt angle and use the mouse wheel or control panel to adjust the camera height.

The objective is to ensure that mimics placed at various locations on the grid line up with people or people-sized objects in the scene.

Once the parameters have been adjusted, the object annotation will reflect the changes and classify the objects accordingly.

Step 5: Verify the Setup

Once the scene is calibrated, drag or add mimics to different locations in the scene and verify they appear at the same size/height as a real person would. Validate that the height and area reported by the VCAcore annotation look approximately correct. Note that the burnt-in annotation settings in the control panel can be used to enable and disable the different types of annotation.

Repeat step 4 until the calibration is acceptable.

Tip: If it all goes wrong and the mimics disappear or get lost due to an odd configuration, select one of the preset configurations to restore the configuration to normality.

Advanced Calibration Parameters

The advanced calibration parameters allow the ground plane to be panned and rolled without affecting the camera calibration parameters. This can be useful to visualize the calibration setup if the scene has pan or roll with respect to the camera.

Note: the pan and roll advanced parameters only affect the orientation of the 3D ground plane so that it can be more conveniently aligned with the video scene, and do not actually affect the calibration parameters.

Next Steps

Once the channel has been calibrated, the Classification Settings can be configured.

Classification

VCAcore can determine a moving object's class using either its deep learning models or properties extracted from an object in a calibrated scene.

Both methods of classification are applied through the use of filters in the rules interface. Classification filters allow an object, which has triggered a rule, to be evaluated against its predicted class and filtered out if needed.

Object Classification

Once a camera view has been calibrated, each detected object in that view will have a number of properties extracted including object area and speed.

VCAcore's object classification performs classification by comparing these properties to a set of configurable object classifiers. VCAcore comes pre-loaded with the most common object classifiers, and in most cases these will not need to be modified.

Configuration

In some situations it might be desirable to change the classifier parameters, or add new object classifiers. The classification menu can be used to make these changes.

Each of the UI elements are described below:

Click and drag to rearrange the order of the classification groups.

Name: Specifies the name of the classification group.

Speed: Sets the speed range for the classification group. Objects which fall within the speed and area ranges will be classified with this group.

Area: Sets the area range for the classification group. Objects which fall within the speed and area ranges will be classified with this group.

Deletes the classification group.

To add a new classifier, click the Add Classifier button.

Calibration must be enabled on each channel on which object classification is to be used. If not enabled, any rules that include an object filter will not trigger.

Classification (Object)

Objects are classified according to how their calibrated properties match the classifiers. Each classifier specifies a speed range and an area range. Objects with properties which fall within both ranges of speed and area will be classified as being an object of the corresponding class.

Note: If multiple classes contain overlapping speed and area ranges then object classification may be ambiguous (since an object will match more than one class). In this case the actual classification is not specified and may be any one of the overlapping classes.

The classification data from object classification can be accessed via template tokens.
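As an illustration, classification against calibrated properties reduces to a pair of range checks per classifier, as sketched below. The classifier names and ranges are made-up examples, not VCAcore's built-in defaults.

```python
# Illustrative classifiers: each defines a speed range (km/h) and an area range (m^2).
CLASSIFIERS = [
    {"name": "Person",  "speed": (0, 20),  "area": (0.1, 2.0)},
    {"name": "Vehicle", "speed": (0, 200), "area": (2.0, 20.0)},
]

def classify(speed_kmh, area_m2, classifiers=CLASSIFIERS):
    """Return the names of all classifiers whose speed and area ranges both
    contain the measured values. More than one match means the
    classification is ambiguous, as noted above."""
    return [c["name"] for c in classifiers
            if c["speed"][0] <= speed_kmh <= c["speed"][1]
            and c["area"][0] <= area_m2 <= c["area"][1]]

print(classify(speed_kmh=5, area_m2=0.8))   # ['Person']
print(classify(speed_kmh=50, area_m2=6.0))  # ['Vehicle']
```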

Deep Learning Filter

This filter requires an additional VCA license from Mirasys.
VCAcore also supports classification through the use of the deep learning filter. In this case an object, which has triggered a rule, can be analysed using the deep learning filter and a predicted class and confidence level returned. The available object classes are defined by the model.

On VCAserver the deep learning filter can use GPU acceleration. GPU acceleration requires an NVIDIA GPU with CUDA Compute Capability 3.5 or higher and CUDA 10.0 to be installed.

Without GPU acceleration the deep learning filter will use the CPU. Enabling the filter on multiple channels which are generating a high volume of events (more than 1 per second) may result in poor performance of the system and is not advised.

Configuration

The Deep Learning page allows the user to configure the deep learning filter in VCAcore.

Each of the possible object classes has additional parameters:

·       Allowed: Whether this object type will be allowed to pass through the filter. If this is unchecked, any objects classified as this type will not trigger any actions.

·       Confidence Threshold: A value between 0.0 and 1.0 representing the minimum confidence level required in order for the object to pass through the filter. Any objects with a lower classification score than this minimum value will be filtered out and will not trigger any actions.

Classification (DL)

When an object triggers the deep learning filter, the analyzed object will either be defined as one of the detectable object classes or as background. While an object is triggering a rule (i.e. triggering a presence rule) the deep learning filter will continue to evaluate and update its prediction until the deep learning filter returns an object of interest as defined by the configuration.

If the filter classifies an object which generated an event as background, the event will be filtered out and any attached actions will not be triggered.
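Putting the per-class settings together, the filtering decision can be sketched as follows. The class names, threshold values and configuration structure are illustrative assumptions, not the actual VCAcore format.

```python
# Illustrative per-class settings from the Deep Learning page.
DL_CONFIG = {
    "vehicle": {"allowed": True,  "confidence_threshold": 0.5},
    "person":  {"allowed": False, "confidence_threshold": 0.5},
}

def passes_dl_filter(predicted_class, confidence, config=DL_CONFIG):
    """An event passes only if the predicted class is allowed and its
    confidence meets that class's threshold; 'background' is always dropped."""
    if predicted_class == "background":
        return False
    settings = config.get(predicted_class)
    return bool(settings and settings["allowed"]
                and confidence >= settings["confidence_threshold"])

print(passes_dl_filter("vehicle", 0.82))  # True  - allowed and confident enough
print(passes_dl_filter("person", 0.90))   # False - class not allowed
print(passes_dl_filter("vehicle", 0.30))  # False - below the confidence threshold
```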

The classification data from the deep learning filter can also be accessed via template tokens.

Analytics Pipeline for Classification

VCAcore supports two forms of object classification as described above. The deep learning filter requires neither the source input to have been calibrated nor the object classifier to be configured. Likewise, the settings of the deep learning filter are entirely independent from object classification.

Either classification method can be used independently or together, defined by how a rule graph is constructed. However, when using both together care should be taken. For example, as the deep learning filter is trained to detect specific objects, if custom object classes have been configured in the object classifier, e.g. small animal, the deep learning filter may erroneously filter those alerts out as small animal is not a class it is trained to recognize. In these cases, use of the deep learning filter is not recommended.

Burnt-in Annotation

Burnt-in annotations are not visible in Spotter, only in the VCA configuration tool. Spotter has its own annotation for objects; please see the Spotter manual.

Burnt-in Annotations allow VCAcore annotations to be burnt into the raw video stream. The burnt-in annotation settings control which portions of the VCAcore metadata (objects, events, etc) are rendered into the video stream.

Note:

·       To display object parameters such as speed, height, area and classifications, the channel must first be calibrated.

·       To display DL Classification data, the channel must have an active Deep Learning Filter rule configured.

·       To display colour signature annotations, the Colour Signature algorithm must be enabled under the advanced settings.

Display Event Log

Check the Display Event Log option to show the event log in the lower portion of the image.

Display System Messages

Check the Display System Messages option to show the system messages associated with Learning Scene and Tamper.

Display Zones

Check the Display Zones option to show the outline of any configured zones.

Display Line counters

Check the Display Line Counters option to display the line counter calibration feedback information. See the Rules topic for more information.

Display Counters

Check the Display Counters option to display the counter names and values. See the Counters topic for more information.

Display Deep Learning Classification

Check the Display DL Classification option to show the class and confidence of objects which have triggered the deep learning filter. Only objects which have triggered the deep learning filter will have these annotations.

Display Colour Signature

Check the Display Colour Signature option to show the current top four colours (of a possible ten) found in a given bounding box.

Display Objects

Check the Display Objects option to show the bounding boxes of tracked objects. Objects which are not in an alarmed state are rendered in yellow. Objects rendered in red are in an alarmed state (i.e. they have triggered a rule).

Object Speed

Check the Object Speed option to show the object speed.

Object Height

Check the Object Height option to show the object height.

Object Area

Check the Object Area option to show object area.

Object Classification

Check the Object Classification option to show the object classification.

Advanced Settings

In most installations, the default VCAcore configuration will suffice. However, in some cases, better performance and additional features can be configured with modified parameters. The Advanced settings page allows configuration of the advanced VCAcore parameters.

Parameters
Colour Signature

Colour Signature is an algorithm for grouping the pixel colours of an object. When enabled, any object that is tracked by VCAcore will also have its pixels grouped into 10 colours. By default this information is added to VCAcore's metadata and is available as tokens, via the SSE metadata service, or in that channel's RTSP metadata stream.

Additionally, to use the Colour Filter rule, Colour Signature must be enabled for that channel.
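
Conceptually, the grouping can be thought of as mapping each of an object's pixels to the nearest of ten reference colours and reporting the most common ones. The sketch below only illustrates that idea; the palette values and colour names are assumptions for the example, not VCAcore's actual colour set:

# Illustrative sketch of colour-signature grouping (the 10-colour palette below
# is an assumption for illustration; VCAcore's actual palette may differ).
from collections import Counter

PALETTE = {
    "black": (0, 0, 0),      "white": (255, 255, 255), "grey": (128, 128, 128),
    "red": (255, 0, 0),      "green": (0, 255, 0),     "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "cyan": (0, 255, 255),    "magenta": (255, 0, 255),
    "brown": (139, 69, 19),
}

def nearest_colour(pixel):
    """Map an (R, G, B) pixel to the closest palette colour."""
    return min(PALETTE, key=lambda name: sum((p - c) ** 2 for p, c in zip(pixel, PALETTE[name])))

def colour_signature(pixels, top_n=4):
    """Group an object's pixels into palette colours and return the top_n with their share."""
    counts = Counter(nearest_colour(p) for p in pixels)
    total = sum(counts.values())
    return [(name, count / total) for name, count in counts.most_common(top_n)]

# Example: a mostly red object with some dark areas.
print(colour_signature([(250, 10, 10)] * 70 + [(5, 5, 5)] * 30))  # [('red', 0.7), ('black', 0.3)]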

Alarm Hold off Time

The Alarm Hold-off Time defines the time between the successive re-triggering of an alarm generated by the same object triggering the same rule. To explain this concept, consider the following diagram where no Alarm Hold-off Time is configured:

In this detection scenario, the person enters the zone 3 times. At each point an alarm is raised, resulting in a total of 3 alarms. With the Alarm Hold-off Time configured, it's possible to prevent re-triggering of the same rule for the same object within the configured time period.

Consider the same scenario, but with an Alarm Hold-off Time of 5 seconds configured:

In this case, an alarm is not raised when the person enters the zone for the second time, because the time since the last alarm of the same type for that object is less than the Alarm Hold-off Time. When the person re-enters the zone for a third time, the elapsed time since the previous alarm of the same type for that object is greater than the Alarm Hold-off Time and a new alarm is generated. In essence, the Alarm Hold-off Time can be configured to prevent multiple alarms being generated because an object is loitering on the edge of a zone. Without an Alarm Hold-off Time configured, this scenario would cause so-called "alarm chatter".

The default setting for Alarm Hold-off Time is 5 seconds.
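
The behaviour described above can be summarised as a per-object, per-rule timer. The following Python sketch is a minimal illustration of that logic, assuming a simple in-memory record of the last alarm time; it is not VCAcore's implementation:

# Illustrative sketch of alarm hold-off (assumed behaviour, based on the description above).
ALARM_HOLD_OFF_SECONDS = 5.0  # default value

last_alarm_time = {}  # (object_id, rule_id) -> time of the last alarm raised

def should_raise_alarm(object_id, rule_id, now):
    """Raise a new alarm only if the hold-off period has elapsed for this object/rule pair."""
    key = (object_id, rule_id)
    previous = last_alarm_time.get(key)
    if previous is not None and (now - previous) < ALARM_HOLD_OFF_SECONDS:
        return False  # suppressed: same object re-triggered the same rule too soon
    last_alarm_time[key] = now
    return True

# The person in the example above enters the zone at t=0, t=3 and t=9 seconds:
print([should_raise_alarm("person-1", "presence-zone-1", t) for t in (0, 3, 9)])
# -> [True, False, True]: the second entry is suppressed, the third raises a new alarm.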

Stationary Object Hold-on Time

The Stationary Object Hold-on Time defines the amount of time that an object will be tracked by the engine once it becomes stationary. Since objects which become stationary must be "merged" into the scene after some finite time, the tracking engine will forget about objects that have become stationary after the Stationary Object Hold-on Time.

The default setting is 60 seconds.
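
As a rough illustration (an assumption for explanation only, not VCAcore's implementation), the decision to forget a stationary object can be thought of as a simple timer check:

STATIONARY_HOLD_ON_SECONDS = 60.0  # default value

def should_forget(track, now):
    """Forget a track once it has been stationary longer than the hold-on time."""
    if not track["stationary"]:
        return False
    return (now - track["stationary_since"]) > STATIONARY_HOLD_ON_SECONDS

track = {"id": 7, "stationary": True, "stationary_since": 100.0}
print(should_forget(track, now=130.0))  # False: stationary for only 30 s
print(should_forget(track, now=170.0))  # True: stationary for 70 s, merged into the scene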

Minimum Tracked Object Size

The Minimum Tracked Object Size defines the size of the smallest object that will be considered for tracking.

For most applications the default setting of 10 is recommended. In some situations, where extra sensitivity is required, the value can be manually specified. While lower values allow the engine to track smaller objects, they may also increase susceptibility to false detections.

Camera Shake Cancellation

Enabling Camera Shake Cancellation stabilizes the video stream before the analytics process runs. This can be useful where the camera is installed on a pole or unstable platform and subject to sway or shake.

It's recommended to only enable this option when camera shake is expected in the installation scenario.

Detection Point of Tracked Objects

For every tracked object, a point is used to determine the object's position, and evaluate whether it intersects a zone and triggers a rule. This point is called the detection point.

There are 3 modes that define the detection point relative to the object:

Automatic

In automatic mode, the detection point is automatically set based on how the channel is configured. It selects 'Centroid' if the camera is calibrated overhead, or 'Mid-bottom' if the camera is calibrated side-on or not calibrated.

Centroid

In this mode, the detection point is forced to be the centroid of the object.

Mid-bottom

In this mode, the detection point is forced to be the middle of the bottom edge of the tracked object. Normally this is the ground contact point of the object (where the object intersects the ground plane).
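
As an illustration of the three modes, the following sketch derives the detection point from a bounding box given as (x, y, width, height) with the y axis pointing down. The mode and calibration names follow the options above, but the code itself is only an assumed approximation (it uses the bounding-box centre in place of the true object centroid):

# Illustrative sketch of detection point selection (assumed, for explanation only).
def detection_point(box, mode, calibration=None):
    x, y, w, h = box
    if mode == "automatic":
        # Centroid for overhead-calibrated channels, mid-bottom otherwise.
        mode = "centroid" if calibration == "overhead" else "mid-bottom"
    if mode == "centroid":
        return (x + w / 2, y + h / 2)   # centre of the bounding box
    if mode == "mid-bottom":
        return (x + w / 2, y + h)       # middle of the bottom edge (ground contact point)
    raise ValueError(f"unknown mode: {mode}")

box = (100, 50, 40, 120)  # a person-sized bounding box
print(detection_point(box, "automatic"))                          # (120.0, 170) mid-bottom
print(detection_point(box, "automatic", calibration="overhead"))  # (120.0, 110.0) centroid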

Loss Of Signal Emit Interval

The Loss Of Signal Emit Interval defines the amount of time between emissions when a channel loses the signal from its source.

The default setting is 1 second.

Scene Change Detection

See the Scene Change Detection topic below for more information.

Tamper Detection

The Tamper Detection module is intended to detect camera tampering events such as bagging, de-focusing and moving the camera. This is achieved by detecting large persistent changes in the image.

Enabling Tamper Detection

To enable tamper detection click the Enabled checkbox.

Advanced Tamper Detection Settings

In the advanced tamper detection settings it is possible to change the thresholds for the area of the image that must change, and the length of time it must remain changed, before the tamper event is triggered.

·       Duration: the length of time that the image must be persistently changed before the alarm is triggered.

·       Area Threshold: the percentage area of the image which must be changed for tampering to be triggered.

·       Suppress Alarm on Lights on/off: Large, fast changes to the image lighting, such as switching indoor lighting on or off, can cause false tamper events. Enable this option if this is likely to be a problem in the area where the camera is installed. However, this option reduces sensitivity to genuine alarms, so it is not recommended unless rapid light changes are likely to be a problem.

If false alarms are a problem the duration and/or area should be increased so that large transient changes such as close objects temporarily obscuring the camera do not cause false alarms.
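
To make the interplay between the two thresholds concrete, the sketch below shows one way such a check could work; it is an assumption for illustration, not VCAcore's implementation. The changed-area percentage must stay above the area threshold continuously for the full duration before a tamper event is raised, so brief obstructions reset the timer:

# Illustrative sketch of the tamper decision (assumed, based on the description above).
AREA_THRESHOLD_PERCENT = 40.0   # example value; configure to suit the scene
DURATION_SECONDS = 5.0          # example value

changed_since = None  # time at which the image first exceeded the area threshold

def tamper_triggered(changed_area_percent, now):
    """Trigger only when a large change persists for the configured duration."""
    global changed_since
    if changed_area_percent < AREA_THRESHOLD_PERCENT:
        changed_since = None        # transient change: reset the timer
        return False
    if changed_since is None:
        changed_since = now
    return (now - changed_since) >= DURATION_SECONDS

# A brief obstruction does not trigger; a persistent change does:
print(tamper_triggered(80.0, now=0.0))  # False: timer starts
print(tamper_triggered(10.0, now=2.0))  # False: change was transient, timer resets
print(tamper_triggered(80.0, now=3.0))  # False: timer restarts
print(tamper_triggered(80.0, now=9.0))  # True: changed for 6 s, longer than DURATION_SECONDS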

Notification

When tampering is detected, a tamper event is generated. This event is transmitted through any output elements as well as being displayed in the video stream.

Scene Change Detection

The scene change detection module resets the tracking algorithm when it detects a large persistent change in the image. This prevents the tracking engine from detecting image changes as tracked objects which could be potential sources of false alarms.

The kinds of changes the scene change detection module detects are as follows:

·       Sudden movement of a camera (e.g. due to repositioning or the use of a pan-tilt camera).

·       Sudden obscuration of a camera (e.g. a vehicle parks in front of a camera, obscuring most of its view).

·       Gross illumination changes (e.g. lights being switched on/off, dazzling from car headlights).

·       Day/night transitions (e.g. when a camera switches from colour to black/white during a night-day transition).

Scene Change Settings

There are 3 options for the scene change detection mode:

Automatic: Detects scene changes automatically. This is the recommended setting unless the automatic mode is causing difficulties (e.g. re-learning the scene when unnecessary).

Manual: Allows the user to adjust the parameters used by the scene change detection algorithm.

Disabled: Disables the scene change detection.

Automatic

This is the default setting and will automatically use the recommended settings. It is recommended to use the automatic setting unless the scene change detection is causing difficulties.

Disabled

Scene change detection is disabled.

Note that when the scene change detection is disabled, gross changes in the image will not be detected. For example, if a truck parks in front of the camera the scene change will not be detected and false events may occur as a result.

Manual

Allows user configuration of the scene change detection algorithm parameters.

If automatic mode is triggering in situations where it's not desired (e.g. it's too sensitive, or not sensitive enough) then the parameters can be adjusted to manually control the behaviour.

In the manual mode the following settings are available:

·       Time Threshold: the length of time that the image must be persistently changed before the scene change is triggered and the tracking algorithm is reset.

·       Area Threshold: the percentage area of the image which must be changed for the scene change to be triggered and the tracking algorithm to be reset.

When both the time and area thresholds are exceeded the scene is considered to have changed and will be reset.

If false scene change detections are a problem, the time and/or area should be increased so that large transient changes such as a close object temporarily obscuring the camera do not cause false scene change detections.

Notification

When a scene change is detected, the scene is re-learnt, a message is displayed in the event log, and an annotation is shown on the video.

Mirasys VCA Visualization

VCA Visualization in Spotter

  1. Open Mirasys Spotter and open a camera where you configured VCA.
  2. Open the camera tools and select “Highlight”. The “Highlight moving targets” option is on by default if the camera has VCA configured; you can leave it on or deselect it. You can also select which other options you want to see for the VCA camera. The available options are: Highlight moving targets, Show tracks, Show textual info (if ANPR+ is configured), Show zones, Show lines, Show counters and Reset counters.
  3. You can toggle these settings on and off at any time in Spotter. Zones and lines will only appear when the related rule has been triggered.

Customize VCA Visualization in Spotter Settings

  1. To customize the Bounding box colours, etc., open Spotter Settings and select the Plugins tab.
  2. Select the VCA Visualization plugin. You can now select the visualization colour and zone colour. It is also possible to set the line thickness and the time used for the line length. These settings can also be set to automatic, in which case the line thickness is adjusted to the window size.

Using VCA to Trigger Alarms

Zones are detection areas on which VCAcore logical rules operate. In order to detect a specific behaviour, a zone must be configured to specify the area where a logical rule applies, e.g. creating an alarm when a person is dwelling in a certain area.

Step 1: Creating Zones

Start by selecting a camera in the view channels; this opens the selected camera's settings.

Zones can be added in multiple ways:

  • Double-click anywhere on the video display.
  • Click the Create Zone button in the Zone settings menu.
  • Right-click or tap-hold to display the context menu and select the add zone icon.

After a zone is created, the zone ID can be seen by hovering the mouse over the zone. The zone ID is used to create alarms in System Manager.

Step 2: Adding logical rules

VCAcore’s logical rules are used to detect specific events in a video stream.

Logical rules can be added in the Rules settings menu. Click the button and select the desired rule from the drop-down menu. Please note that some rules cannot be linked to zones and need to be linked to other rules. For more detailed information about rules, refer to VCA-Core-documentation-v1.3.0.

Step 3: Creating alarms from VCA events in system manager

  1. Open Mirasys System Manager Alarm Settings
  2. Create new alarm and select “Metadata” as Trigger type.

  3. Select a camera that has VCA events configured and select the correct zone ID (the zones you set in the VCA configuration tool; note that the zone ID is the number shown in the tooltip of the zone). Below the zone drop-down, the possible events are listed; the ones you have configured are marked with “(Configured)”. Select one of the configured events.
    It is up to you whether you want to define an ending input as well.
  4. Move to the Actions tab and select an action for your alarm.
  5. Save the settings and open Spotter to view the alarm list.