This documentation is provided with the HEAT environment and is relevant for this HEAT instance only.

cognitive-load-metrics (Processing Node)

The Cognitive Load Metrics node computes a cognitive load score from biometric data (eye-tracking and heart rate) using an XGBoost regression model. It produces an overall score, a per-window timeline, and a Low/Medium/High zone breakdown. When an optional phase-segment parent is connected, results are also computed per flight phase.

This node is domain-agnostic: it operates on any upstream payload that supplies the required biometric columns. It is currently hosted inside the Aviation Metrics runner.


Configuration Schema

All properties are optional. The node works with an empty configuration ({}).

| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| `requiredColumns` | `array<{name: string}>` | No | Built-in column set | Column mapping from the upstream bio payload. Each entry is an object with a `name` key matching a column in the data. When omitted, the processor uses the hardcoded default column set (see Required Upstream Columns below). |
| `windowSize` | string | No | `"10S"` | Pandas-compatible duration string for the sliding window (e.g. `"10S"`, `"15S"`, `"30S"`). Controls the temporal granularity of the timeline. |
| `blendMode` | `"off" \| "ewma" \| "median" \| "dual"` | No | `"ewma"` | Smoothing strategy applied to the timeline scores before output. `"off"` disables smoothing entirely. |
| `blendAlpha` | number in (0, 1] | No | `0.5` | EWMA alpha parameter. Higher values make the smoothed timeline more responsive to recent windows. Only used when `blendMode` is `"ewma"`. |
| `classThresholds` | object | No | `{ "low": 7, "high": 15 }` | Score boundaries for Low/Medium/High classification. Scores up to `low` are classified as `"low"`, scores above `low` up to `high` as `"medium"`, and scores above `high` as `"high"`. |
| `primaryInputNode` | string | No | `"rapid-preprocess"` | Name of the parent node supplying the bio payload (the key used in `input_map`). Override this when reusing the node outside a RAPID context. |
| `phaseInputNode` | string | No | `"phase-segment"` | Name of the optional parent node supplying phase boundaries. Override when using a different phase-segmentation node. |
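The `classThresholds` boundary semantics can be sketched as follows. This is an illustration of the documented rules, not the node's actual source; the function name `classify` is hypothetical:

```python
def classify(score, thresholds=None):
    """Map a cognitive load score to a zone using the documented boundaries:
    score <= low -> "low"; low < score <= high -> "medium"; otherwise "high"."""
    t = thresholds if thresholds is not None else {"low": 7, "high": 15}
    if score <= t["low"]:
        return "low"
    if score <= t["high"]:
        return "medium"
    return "high"
```

Note that both boundaries belong to the lower zone: under the defaults, a score of exactly 7 is `"low"` and a score of exactly 15 is `"medium"`.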

Expected Upstream Input

Parent 1 (required): bio input (default: rapid-preprocess)

A JSON object containing a bio_input key with an array of sample objects:

```json
{
  "bio_input": [
    {
      "CAPTURED DATE TIME (IS0 8601)": "2025-06-15T10:00:00.000Z",
      "LEFT EYE PupilDiameter (MM)": 3.42,
      "RIGHT EYE PupilDiameter (MM)": 3.38,
      "LEFT EYE GazeDirection X (Float)": 0.12,
      "LEFT EYE GazeDirection Y (Float)": -0.03,
      "LEFT EYE GazeDirection Z (Float)": 0.99,
      "HEARTRATE": 78
    }
  ]
}
```
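As an illustration of how such a payload maps onto a timestamp-indexed frame (a sketch, not the node's actual code; most columns are omitted for brevity):

```python
import pandas as pd

# Minimal payload shaped like the example above.
payload = {
    "bio_input": [
        {"CAPTURED DATE TIME (IS0 8601)": "2025-06-15T10:00:00.000Z", "HEARTRATE": 78},
        {"CAPTURED DATE TIME (IS0 8601)": "2025-06-15T10:00:01.000Z", "HEARTRATE": 79},
    ]
}

df = pd.DataFrame(payload["bio_input"])
# Index by the capture timestamp, as the node does before windowing.
df.index = pd.to_datetime(df.pop("CAPTURED DATE TIME (IS0 8601)"))
```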

Parent 2 (optional): phase boundaries (default: phase-segment)

A JSON object keyed by phase name. Each value has the same shape as the root payload above (containing its own bio_input array). When present, the node produces per-phase results in addition to the full-session result.

```json
{
  "take_off": { "bio_input": [ ... ] },
  "downwind": { "bio_input": [ ... ] },
  "final_approach": { "bio_input": [ ... ] }
}
```

Required Upstream Columns

The model requires the following columns. They can be mapped via `requiredColumns` in the config; when omitted, the hardcoded defaults below are used.

| Column | Type | Notes |
|---|---|---|
| `LEFT EYE PupilDiameter (MM)` | float | Values of -1 are treated as missing |
| `RIGHT EYE PupilDiameter (MM)` | float | Values of -1 are treated as missing |
| `LEFT EYE GazeDirection X (Float)` | float | |
| `LEFT EYE GazeDirection Y (Float)` | float | |
| `LEFT EYE GazeDirection Z (Float)` | float | |
| `HEARTRATE` | numeric | Non-positive values are treated as missing |
| `CAPTURED DATE TIME (IS0 8601)` | ISO 8601 string | Falls back to a `TIMESTAMP` column if absent |
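The sentinel-value rules in the table can be sketched like this (an illustrative pandas snippet with hypothetical sample data, not the node's source):

```python
import numpy as np
import pandas as pd

# Hypothetical frame containing the documented sentinel values.
df = pd.DataFrame({
    "LEFT EYE PupilDiameter (MM)": [3.42, -1.0, 3.38],
    "RIGHT EYE PupilDiameter (MM)": [3.40, 3.41, -1.0],
    "HEARTRATE": [78, 0, 81],
})

# Pupil diameters of -1 are sentinels for "missing".
pupil_cols = ["LEFT EYE PupilDiameter (MM)", "RIGHT EYE PupilDiameter (MM)"]
df[pupil_cols] = df[pupil_cols].replace(-1.0, np.nan)

# Non-positive heart-rate readings are likewise treated as missing.
df.loc[df["HEARTRATE"] <= 0, "HEARTRATE"] = np.nan
```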

Behaviour

  1. The processor reads the payload from its primary parent (rapid-preprocess by default; see primaryInputNode) and extracts bio_input.
  2. Each bio sample is filtered to the configured (or default) columns and assembled into a DataFrame indexed by timestamp.
  3. The DataFrame is split into overlapping sliding windows (default 10 seconds, 1-second step).
  4. For each window, features are extracted:
    • Gaze path length — Euclidean distance traversed by gaze direction over time.
    • Gaze dispersion — mean distance of gaze samples from their centroid.
    • Pupil diameter statistics — mean, standard deviation, min, range, rate of change, velocity.
    • Heart-rate features — HR standard deviation, mean inter-beat interval.
  5. Features are scaled using a pre-trained StandardScaler and passed to a pre-trained XGBoost regressor. The raw score is clipped to [0, 21].
  6. Per-window prediction confidence (based on data availability) and prediction stability (based on per-tree variance) are attached.
  7. The timeline is optionally smoothed using the configured blendMode (default: EWMA with alpha 0.5).
  8. An average score is computed across all raw (unsmoothed) window scores.
  9. The average score is classified as "low", "medium", or "high" according to classThresholds.
  10. A zone breakdown (seconds and percentage in each zone) is derived from the timeline using per-second majority voting.
  11. If a phase-segment parent is connected, steps 1-10 are repeated independently for each phase slice.
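Two of the gaze features and the EWMA smoothing step can be sketched as follows. The formulas are inferred from the descriptions above; they are illustrative, not the node's source:

```python
import numpy as np
import pandas as pd

def gaze_path_length(gaze):
    """Total Euclidean distance traversed by consecutive gaze-direction
    samples; `gaze` is an (n, 3) array of X/Y/Z components."""
    return float(np.linalg.norm(np.diff(gaze, axis=0), axis=1).sum())

def gaze_dispersion(gaze):
    """Mean distance of gaze samples from their centroid."""
    return float(np.linalg.norm(gaze - gaze.mean(axis=0), axis=1).mean())

# EWMA smoothing of the per-window timeline (blendMode "ewma", alpha 0.5).
scores = pd.Series([8.2, 9.1, 12.4, 11.0, 7.5])
smoothed = scores.ewm(alpha=0.5, adjust=False).mean()
```

With `adjust=False`, each smoothed value is `alpha * x_t + (1 - alpha) * y_{t-1}`, which is why a higher `blendAlpha` tracks recent windows more closely.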

Output Artefact

The node outputs JSON. The top-level key full_session is always present. When phase-segment data is available, additional keys are added for each phase name.

```json
{
  "full_session": {
    "cognitive_load_score": {
      "average_cognitive_load_score": 9.4,
      "average_cog_class": "medium",
      "avg_prediction_confidence": 0.87,
      "avg_prediction_stability": 0.92
    },
    "cognitive_load_timeline": [
      {
        "window_start": "2025-06-15T10:00:00+00:00",
        "cognitive_load_score": 8.2,
        "prediction_confidence": 0.91,
        "prediction_stability": 0.94
      }
    ],
    "CognitiveZoneBreakdown": {
      "Low": { "seconds": 45, "percent": 30.0 },
      "Medium": { "seconds": 75, "percent": 50.0 },
      "High": { "seconds": 30, "percent": 20.0 }
    }
  },
  "take_off": { "...": "same structure as full_session" },
  "downwind": { "...": "same structure as full_session" }
}
```
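The CognitiveZoneBreakdown figures are consistent with a simple tally over the per-second majority-voted zone labels. A sketch of that derivation (hypothetical helper, not the node's code):

```python
from collections import Counter

def zone_breakdown(per_second_zones):
    """Seconds and percentage spent in each zone, given one majority-voted
    zone label per second of the timeline."""
    counts = Counter(per_second_zones)
    total = len(per_second_zones)
    return {
        zone: {"seconds": n, "percent": round(100.0 * n / total, 1)}
        for zone, n in counts.items()
    }

# Reproduces the example above: a 150-second session.
breakdown = zone_breakdown(["Low"] * 45 + ["Medium"] * 75 + ["High"] * 30)
```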

Example Configuration

Minimal (all defaults):

```json
{}
```

Customised:

```json
{
  "requiredColumns": [
    { "name": "LEFT EYE PupilDiameter (MM)" },
    { "name": "RIGHT EYE PupilDiameter (MM)" },
    { "name": "LEFT EYE GazeDirection X (Float)" },
    { "name": "LEFT EYE GazeDirection Y (Float)" },
    { "name": "LEFT EYE GazeDirection Z (Float)" },
    { "name": "HEARTRATE" },
    { "name": "CAPTURED DATE TIME (IS0 8601)" }
  ],
  "windowSize": "15S",
  "blendMode": "ewma",
  "blendAlpha": 0.3,
  "classThresholds": { "low": 6, "high": 14 },
  "primaryInputNode": "my-custom-preprocess",
  "phaseInputNode": "my-phase-splitter"
}
```

Integration in a Session Template

  1. Place a cognitive-load-metrics node in your session template.
  2. Connect a parent node that produces a { "bio_input": [...] } JSON payload. By default the processor looks for rapid-preprocess; set primaryInputNode in config to use a different parent.
  3. Optionally connect a phase-boundaries parent to enable per-phase scoring. By default the processor looks for phase-segment; set phaseInputNode in config to use a different parent.
  4. Supply configuration if you need to override column mappings, window size, smoothing, classification thresholds, or parent node names. Leave the config empty ({}) for default RAPID behaviour.
  5. Connect downstream consumers:
    • timeline-metrics — reads cognitive_load_timeline for unified timeline rendering.
    • Dashboard transforms (e.g. am-transform-circuits-dashboard, rapid-transform-otm-dashboard) — read the full output for dashboard panels.
    • json-merge — can merge cognitive load output with other analytics for aggregated views.