Title Where to Look: Visual Attention Estimation in Road Scene Video for Safe Driving
Authors Yeejin Lee; Byeongkeun Kang
DOI https://doi.org/10.5573/IEIESPC.2022.11.2.105
Pages 105-111
ISSN 2287-5255
Keywords Visual attention estimation; Intelligent transportation system; Convolutional neural networks; Saliency estimation; Video-based
Abstract This work addresses the task of locating the regions of a road scene that are more crucial for safe driving than others. Such a capability could improve the efficiency and safety of autonomous driving vehicles or robots, and could also benefit human drivers when employed in driver-assistance systems. To achieve robust and accurate attention prediction, we propose a multi-scale color- and motion-based attention prediction network. The network consists of three components: one processes multi-scale color images, one processes multi-scale motion information, and one merges the outputs of the two streams. The proposed network is guided to exploit the movement of objects and people as well as the type and location of things and stuff. We demonstrate the effectiveness of the proposed system through experiments on a real-world driving dataset, and the experimental results show that the proposed framework outperforms previous works.
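To make the high-level description in the abstract concrete, the sketch below outlines one plausible PyTorch realization of a two-stream, multi-scale design with a fusion head. It is an illustrative assumption only: the authors' actual layer configuration, channel widths, motion representation (optical flow is assumed here), and training details are not given in this abstract, and names such as `MultiScaleStream` and `TwoStreamAttentionNet` are hypothetical.

```python
# Hypothetical sketch of a two-stream, multi-scale attention-prediction network.
# The abstract only states that there is a multi-scale color stream, a multi-scale
# motion stream, and a merging stage; everything else here is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 conv -> BN -> ReLU layers."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MultiScaleStream(nn.Module):
    """Encodes one input modality at several spatial scales and fuses the
    per-scale features back at the finest resolution."""

    def __init__(self, in_channels, base_channels=32, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.encoders = nn.ModuleList(
            [conv_block(in_channels, base_channels) for _ in scales]
        )
        self.merge = conv_block(base_channels * len(scales), base_channels)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for scale, enc in zip(self.scales, self.encoders):
            xs = x if scale == 1.0 else F.interpolate(
                x, scale_factor=scale, mode="bilinear", align_corners=False)
            f = enc(xs)
            feats.append(F.interpolate(f, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return self.merge(torch.cat(feats, dim=1))


class TwoStreamAttentionNet(nn.Module):
    """Color stream + motion stream + fusion head producing a single-channel
    attention (saliency) map at the input resolution."""

    def __init__(self, base_channels=32):
        super().__init__()
        self.color_stream = MultiScaleStream(3, base_channels)   # RGB frame
        self.motion_stream = MultiScaleStream(2, base_channels)  # assumed optical flow (dx, dy)
        self.fusion = nn.Sequential(
            conv_block(2 * base_channels, base_channels),
            nn.Conv2d(base_channels, 1, kernel_size=1),
        )

    def forward(self, rgb, flow):
        c = self.color_stream(rgb)
        m = self.motion_stream(flow)
        logits = self.fusion(torch.cat([c, m], dim=1))
        return torch.sigmoid(logits)  # per-pixel attention probability


if __name__ == "__main__":
    rgb = torch.randn(1, 3, 256, 448)   # color frame
    flow = torch.randn(1, 2, 256, 448)  # precomputed motion input
    print(TwoStreamAttentionNet()(rgb, flow).shape)  # torch.Size([1, 1, 256, 448])
```

In such a design, the predicted map would typically be trained against gaze- or annotation-derived attention maps with a pixel-wise loss (e.g., binary cross-entropy or a KL-divergence-based saliency loss); the specific loss and supervision used by the authors are not stated in this abstract.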