A multimodal driver monitoring benchmark dataset for driver modeling in assisted driving automation
Summary
This summary is machine-generated. The manD 1.0 dataset offers synchronized multimodal data from 50 participants in a driving simulator. This benchmark supports driver monitoring research for automated driving systems.
Area Of Science
- Human-Computer Interaction
- Automated Driving Systems
- Behavioral Science
Background
- Driver monitoring is crucial for interpreting, modeling, and predicting driver behavior in automated driving.
- Existing datasets may lack the multimodal synchronization required for comprehensive analysis of driver states.
Purpose Of The Study
- Introduce manD 1.0, a novel multimodal dataset for benchmarking driver monitoring in automated driving.
- Provide a standardized resource for research into the human dimension of automated driving (manD).
Main Methods
- Collected synchronized data from a gender-balanced sample of 50 participants (aged 21-65) in a static driving simulator.
- Simulated five distinct driving scenarios with varying automation levels (SAE L0-L3), traffic, and weather conditions.
- Captured multimodal driver data including physiology, body movements, activities, gaze, and facial information.
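Synchronizing streams recorded at different rates (e.g., physiology versus gaze) is the core technical requirement such a dataset addresses. A minimal sketch of timestamp-based alignment using `pandas.merge_asof` is shown below; the column names, signals, and sample rates are purely illustrative and are not taken from manD 1.0:

```python
import pandas as pd

# Hypothetical streams: a physiological signal at ~10 Hz and gaze at ~30 Hz.
# Names and values are illustrative, not part of the actual dataset schema.
physio = pd.DataFrame({
    "t": pd.to_timedelta([0, 100, 200, 300], unit="ms"),
    "heart_rate": [72.0, 73.0, 71.5, 72.5],
})
gaze = pd.DataFrame({
    "t": pd.to_timedelta(range(0, 400, 33), unit="ms"),
    "gaze_x": [0.1 * i for i in range(13)],
})

# Pair each gaze sample with the most recent physiological reading
# at or before its timestamp (both frames must be sorted on the key).
merged = pd.merge_asof(gaze, physio, on="t", direction="backward")
print(merged.head())
```

Backward-looking alignment avoids using future samples, which matters when the merged stream feeds a predictive driver-state model.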
Main Results
- The manD 1.0 dataset contains synchronized environmental, vehicle, and comprehensive driver state data.
- Data reflects diverse mental and physical states across various driving events and conditions.
- The dataset is suitable for developing and validating driver monitoring applications.
Conclusions
- manD 1.0 serves as a valuable benchmark for driver monitoring research in automated driving.
- The dataset facilitates data-driven modeling, prediction of driver reactions, and interaction strategy design.
- Enables further research into driver states, including motion sickness, within automated driving contexts.

