Using Partial Run-Time Reconfigurable Hardware to Accelerate Video Processing in Driver Assistance Systems



In this paper we present a reconfigurable hardware architecture for the acceleration of video-based driver assistance applications in future automotive systems. The concept is based on a separation of pixel-level operations and high-level application code. Pixel-level operations are accelerated by coprocessors, whereas high-level application code remains fully programmable on standard PowerPC CPU cores, allowing flexibility for new algorithms. In addition, the application code can dynamically reconfigure the coprocessors available on the system, allowing for a much larger set of hardware-accelerated functionality than would normally fit onto a single device. This process makes use of the partial dynamic reconfiguration capabilities of Xilinx Virtex FPGAs.
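To make the reconfiguration mechanism concrete, the sketch below shows how application code on the embedded CPU might swap a coprocessor at run time. It is a minimal illustration under our own assumptions, not the paper's implementation: the type copro_id_t and the functions load_partial_bitstream(), copro_wait_idle(), and copro_reset() are hypothetical wrappers around the platform's ICAP-based reconfiguration driver and coprocessor control registers.

/* Hypothetical sketch: swapping video coprocessors via partial
 * reconfiguration. All names below are illustrative stand-ins for
 * the platform's actual driver interface. */

typedef enum { COPRO_NONE, COPRO_EDGE, COPRO_TAILLIGHT } copro_id_t;

/* Hypothetical platform hooks (ICAP driver and control registers): */
int  load_partial_bitstream(const unsigned char *bits, unsigned int len);
void copro_wait_idle(copro_id_t id);
void copro_reset(copro_id_t id);

/* Assumed table mapping each coprocessor to its partial bitstream. */
extern const unsigned char *partial_bitstream[];
extern unsigned int partial_bitstream_len[];

static copro_id_t active_copro = COPRO_NONE;

int activate_coprocessor(copro_id_t next)
{
    if (next == active_copro)
        return 0;                      /* already loaded, nothing to do */

    copro_wait_idle(active_copro);     /* let the current frame finish */

    /* Write the partial bitstream for the reconfigurable region that
     * hosts the coprocessor; only this region is reprogrammed, while
     * the CPU and the rest of the design keep running. */
    if (load_partial_bitstream(partial_bitstream[next],
                               partial_bitstream_len[next]) != 0)
        return -1;

    copro_reset(next);                 /* bring the new core to a known state */
    active_copro = next;
    return 0;
}

The application would call activate_coprocessor() whenever the driving situation demands a different pixel-level engine, e.g. switching to another detection core when lighting conditions change.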

In future automotive systems, video-based driver assistance will help improve safety. Video processing for driver assistance requires real-time implementation of complex algorithms. A pure software implementation does not offer the required real-time performance on the hardware available in automotive environments, so hardware acceleration is necessary. Dedicated hardware circuits (ASICs) can deliver the required real-time processing, but they do not offer the necessary flexibility. Video algorithms for driver assistance are not standardized, and may never be; algorithmic research is expected to continue for years to come. A flexible, programmable form of hardware acceleration is therefore needed. Moreover, specific driving conditions, e.g. highway, countryside, urban traffic, or tunnel, require specifically optimized algorithms. Reconfigurable hardware offers high potential for real-time video processing, combined with adaptability to varying driving conditions and to future algorithms. In this paper we present the architecture of the Autovision project, which was previously described only briefly in [9].

Today's systems for driver assistance offer features such as adaptive cruise control and lane departure warning, using video cameras and radar sensors. On highways and two-way primary roads, a safe distance to preceding vehicles can be kept automatically over a broad speed range [7][8]. Basic concepts for modeling vehicle movement and vehicle environment have been developed by Dickmanns [4]. However, for complex driving situations and complex environments, e.g. urban traffic, there are no established and reliable algorithms; this remains a topic for future research. Today's video-based driver assistance systems mainly use dedicated hardware accelerators to achieve real-time performance. An advanced example is the EyeQ chip from Mobileye [1], which contains two ARM cores and four dedicated coprocessors for image classification, object tracking, lane detection, and filtering. EyeQ offers real-time support for a set of driver assistance applications, but due to its dedicated coprocessor architecture, its flexibility and adaptability to future algorithms is limited.

The following section presents a typical scenario encountered by driver assistance applications, along with the requirements that the video processing within such an application must meet. In section 3 we introduce a system of reconfigurable hardware coprocessors that accelerate pixel operations while keeping high-level application code on standard CPU cores for flexibility; a generic example of such a pixel operation is sketched below. Section 4 describes an example reconfigurable scenario and its implementation on our target platform. A representative coprocessor architecture is presented in section 5, followed by synthesis results for its implementation in section 6. Finally, section 7 concludes the paper.
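As an illustration of the split between pixel-level operations and application code, consider a simple pixel-level kernel of the kind the coprocessors are meant to accelerate. The 3x3 Sobel operator below is a generic example of our own choosing, not one of the paper's engines: per-pixel loops like this dominate the run time of video algorithms and map naturally to streaming hardware, while the decision logic built on their results stays in software on the CPU.

#include <stdlib.h>

/* Generic example of a pixel-level operation: 3x3 Sobel edge
 * magnitude over an 8-bit grayscale frame. Each output pixel depends
 * only on a small input neighborhood, which is what makes such
 * kernels good candidates for streaming hardware coprocessors. */
void sobel_magnitude(const unsigned char *in, unsigned char *out,
                     int width, int height)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            const unsigned char *p = &in[y * width + x];
            int gx = -p[-width-1] + p[-width+1]
                     - 2*p[-1]    + 2*p[1]
                     - p[width-1] + p[width+1];
            int gy = -p[-width-1] - 2*p[-width] - p[-width+1]
                     + p[width-1] + 2*p[width]  + p[width+1];
            int mag = abs(gx) + abs(gy);       /* L1 gradient approximation */
            out[y * width + x] = mag > 255 ? 255 : (unsigned char)mag;
        }
    }
}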
