EMBEDDED VIDEO PROCESSING FOR DISTRIBUTED INTELLIGENT SENSOR NETWORKS



In this paper, the theoretical background of specific video processing algorithms is presented, together with the details of their implementation on the SENSE nodes of a distributed sensor network devoted to airport surveillance tasks. The video processing algorithms are implemented on embedded DSP boards based on Analog Devices Blackfin processors, following a vision system processing architecture. This architecture consists of two main processing systems that communicate via a serial port. Each processor supports different and exclusive tasks, which determine its processing and peripheral features.
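Since the two processors exchange results over a serial port, some framing of the byte stream is required. As an illustration only (the SENSE link protocol is not specified in this paper), a minimal frame with a start byte, a length field and an additive checksum might be handled as follows:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical frame layout: [0xA5][len][payload...][checksum].
 * Not the actual SENSE protocol; shown only to illustrate the idea. */
#define FRAME_START 0xA5

/* Build a frame into 'out'; returns the total frame length. */
static size_t frame_build(const uint8_t *payload, uint8_t len, uint8_t *out) {
    uint8_t sum = 0;
    out[0] = FRAME_START;
    out[1] = len;
    for (uint8_t i = 0; i < len; i++) {
        out[2 + i] = payload[i];
        sum += payload[i];
    }
    out[2 + len] = sum;  /* additive checksum over the payload */
    return (size_t)len + 3;
}

/* Parse and verify a frame; returns the payload length, or -1 on error. */
static int frame_parse(const uint8_t *buf, size_t n, uint8_t *payload) {
    if (n < 3 || buf[0] != FRAME_START) return -1;
    uint8_t len = buf[1];
    if (n < (size_t)len + 3) return -1;
    uint8_t sum = 0;
    for (uint8_t i = 0; i < len; i++) sum += buf[2 + i];
    if (sum != buf[2 + len]) return -1;  /* corrupted frame */
    memcpy(payload, buf + 2, len);
    return (int)len;
}
```

The checksum lets the receiving processor discard frames corrupted on the serial line instead of acting on bad feature data.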

This paper describes the current status of the implementation of the specific software that runs on the video module board of an intelligent sensor currently being developed under the SENSE project. The goal of the current application of the SENSE node is to detect and track people and objects standing or moving in determined areas of an airport. The specific objectives for low-level visual processing in this SENSE application are:
• To represent the location of visual objects in image coordinates.
• To represent the contour and appearance of the visible regions of the visual objects as features.
• Low-level classification of objects into three categories: person, person group and non-classified object.
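The three-way classification above can be performed cheaply on the DSP from blob geometry alone. A minimal sketch follows; the thresholds and the use of bounding-box aspect ratio are illustrative assumptions, not the actual SENSE classifier:

```c
#include <assert.h>

typedef enum { OBJ_NON_CLASSIFIED, OBJ_PERSON, OBJ_PERSON_GROUP } obj_class_t;

/* Classify a foreground blob by its bounding-box geometry.
 * The thresholds below are illustrative placeholders, not SENSE values. */
static obj_class_t classify_blob(int width, int height) {
    if (width <= 0 || height <= 0) return OBJ_NON_CLASSIFIED;
    int area = width * height;
    if (area < 200) return OBJ_NON_CLASSIFIED;       /* too small: noise */
    double aspect = (double)height / (double)width;  /* people are tall */
    if (aspect >= 1.5) return OBJ_PERSON;            /* single upright person */
    if (aspect >= 0.5) return OBJ_PERSON_GROUP;      /* wide blob: several people */
    return OBJ_NON_CLASSIFIED;
}
```

Anything that fails both geometric tests stays in the non-classified category, which matches the conservative intent of a low-level stage feeding a higher-level reasoning board.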
Some considerations to be taken into account for the low-level visual processing are:
• The scenes are dynamic.
• Indoor scenes are subject to illumination changes.
• Very bright and very dark areas may appear in the same scene.
• The background and foreground are dynamic.
• Soft real-time processing is required.
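A dynamic background under changing illumination rules out a static reference frame; an adaptive per-pixel model is needed. As a sketch under stated assumptions (a simple running-average model in 8.8 fixed point, with illustrative learning rate and threshold rather than SENSE's actual values), such a model can track slow lighting drift while still flagging moving objects:

```c
#include <assert.h>
#include <stdint.h>

/* Per-pixel running-average background model in 8.8 fixed point, a
 * common low-cost choice on fixed-point DSPs such as the Blackfin.
 * ALPHA_SHIFT and FG_THRESHOLD are illustrative, not SENSE values. */
#define ALPHA_SHIFT 4   /* learning rate = 1/16 per frame */
#define FG_THRESHOLD 30 /* |pixel - background| above this => foreground */

/* Move one background pixel (8.8 fixed point) toward the new sample. */
static void bg_update(uint16_t *bg, uint8_t pixel) {
    int32_t diff = ((int32_t)pixel << 8) - (int32_t)*bg;
    *bg = (uint16_t)((int32_t)*bg + (diff >> ALPHA_SHIFT));
}

/* 1 if the pixel differs enough from the background to be foreground. */
static int bg_is_foreground(uint16_t bg, uint8_t pixel) {
    int diff = (int)pixel - (int)(bg >> 8);
    return (diff < 0 ? -diff : diff) > FG_THRESHOLD;
}
```

Because the update is a shift and an add per pixel, it fits the soft real-time budget of an embedded DSP; a sudden object appears as foreground, while a gradual illumination change is absorbed into the model over a few dozen frames.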
The video board software includes vision-specific algorithms as well as the system code (including device drivers, interrupt handling code, hardware configuration code and kernel code). These programs must enable adequate processing of the video images obtained from the camera, extracting the video features required by the higher-level processing board. JPEG video compression must also be carried out on the video board, as well as the generation of the XML stream and its transmission to the reasoning board. The document is divided into two main parts: first, the main vision algorithms developed are described in section 2; next, the details of the implementation of these algorithms and some preliminary performance results obtained with real boards are explained in sections 3 and 4, respectively.
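The XML stream mentioned above carries the extracted object features to the reasoning board. A minimal sketch of the serialization step is shown below; the element and attribute names are hypothetical placeholders, since the actual SENSE schema is not reproduced in this paper:

```c
#include <stdio.h>

/* Serialize one detected object as an XML fragment into 'out'.
 * <object>/<bbox> and their attributes are illustrative names only,
 * not the SENSE schema. Returns the number of characters written,
 * or a negative value on encoding error (snprintf semantics). */
static int object_to_xml(char *out, size_t cap, int id,
                         const char *cls, int x, int y, int w, int h) {
    return snprintf(out, cap,
                    "<object id=\"%d\" class=\"%s\">"
                    "<bbox x=\"%d\" y=\"%d\" w=\"%d\" h=\"%d\"/>"
                    "</object>",
                    id, cls, x, y, w, h);
}
```

Building the fragment with `snprintf` into a fixed buffer avoids dynamic allocation, which is usually preferable in embedded firmware of this kind.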
