Thesis Defence: Leveraging Spatio-temporal Features to Forecast Time-To-Accident
June 12, 2023 at 10:00 am - 1:00 pm
Taif Anjum, supervised by Dr. Apurva Narayan and Dr. John Braun, will defend their thesis titled “Leveraging Spatio-temporal Features to Forecast Time-To-Accident” in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.
An abstract for Taif Anjum’s thesis is included below.
Defences are open to all members of the campus community as well as the general public. Please email apurva.narayan@ubc.ca or john.braun@ubc.ca to receive the Zoom link for this defence.
ABSTRACT
According to reports published by the World Health Organization (WHO), traffic accidents are a leading cause of death worldwide. Advanced Driver Assistance Systems (ADAS) effectively reduce the likelihood or severity of accidents. Time-To-Accident (TTA) forecasting is a critical component of ADAS; it can improve decision-making in traffic, support dynamic path planning, and alert drivers to potential dangers. TTA is defined as the interval between the threat of an accident and the occurrence of the accident itself. Sensor- and depth-imaging-based TTA forecasting can be inaccurate and unaffordable for the masses. Existing vision-based traffic-safety research focuses on accident anticipation, with limited work on TTA prediction. We propose forecasting TTA using spatio-temporal features from dashboard-camera videos and introduce a novel algorithm to compute ground-truth TTA values.
Additionally, we present two efficient architectures for this task: a Light 3D Convolutional Neural Network (Li3D) and a Hybrid CNN-Transformer (HyCT) network. Li3D uses significantly fewer parameters and larger kernels for efficient feature learning without sacrificing accuracy. We considered transfer learning to further improve performance but faced limitations due to the lack of relevant large-scale video datasets and Li3D's input requirements. We therefore propose the HyCT network, which is trained via transfer learning. HyCT outperformed a comprehensive list of networks, including Li3D, on the Car Crash Dataset (CCD). Furthermore, our approach can simultaneously forecast TTA and recognize accident and non-accident scenes. In real-world scenarios, it is vital to identify when a normal driving scene transitions into a potential accident scene. We assess our approach's ongoing scene-recognition ability using the TTAASiST@x (Time-to-Accident-based Accuracy at the xth frame Since Scene Transition) metric. With just 25% of the data, HyCT achieved over 85% accuracy in recognizing accident and non-accident scenes, further demonstrating its performance on limited data. Our best models predict TTA values with average prediction errors of 0.24 and 0.79 seconds on the CCD and DoTA datasets, respectively.
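For readers unfamiliar with the TTA formulation, the short Python sketch below illustrates how per-frame ground-truth TTA values could be derived directly from the definition given in the abstract (the interval between the threat of an accident and the accident itself). It is a hedged illustration only, not the thesis's actual labeling algorithm; the threat-onset frame, accident frame, and frame rate are hypothetical annotations introduced here for the example.

```python
# Illustrative sketch only: derives per-frame TTA labels from the definition
# "the interval between the threat of an accident and the accident itself".
# The annotations (threat_frame, accident_frame, fps) are hypothetical and
# do not represent the thesis's actual ground-truth algorithm.

def tta_labels(threat_frame: int, accident_frame: int, n_frames: int, fps: float = 30.0):
    """Return a per-frame list of TTA values in seconds.

    Frames before the threat onset and frames at or after the accident get None,
    since TTA is only defined on the interval between threat and accident.
    """
    labels = []
    for t in range(n_frames):
        if threat_frame <= t < accident_frame:
            labels.append((accident_frame - t) / fps)  # seconds until the accident
        else:
            labels.append(None)
    return labels

# Example: a 150-frame dashcam clip at 30 fps where the threat appears at frame 90
# and the accident occurs at frame 120; TTA counts down from 1.0 s toward 0.
print(tta_labels(threat_frame=90, accident_frame=120, n_frames=150)[90:95])
```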