Browsing by Author "Masoud, Osama"
Now showing 1 - 14 of 14
Item: Algorithms for Vehicle Classification: Phase II (2001-11-01)
Authors: Martin, Robert; Masoud, Osama; Gupte, Surendra; Papanikolopoulos, Nikolaos P.
This report summarizes the research behind a real-time system for vehicle detection and classification in images of traffic obtained by a stationary CCD camera. The system models vehicles as rectangular bodies with appropriate dynamic behavior and processes images on three levels: raw image, blob, and vehicle. Correspondence is calculated between the processing levels as the vehicles move through the scene. The report also presents a new calibration algorithm for the camera. Implemented on a dual Pentium PC equipped with a Matrox Genesis C80 video processing board, the system performed detection and classification at a rate of 15 frames per second. Detection accuracy approached 95 percent, and classification accuracy for the detected vehicles neared 65 percent. The report includes an analysis of scenes from highway traffic to demonstrate the application.

Item: Development of a Tracking-based Monitoring and Data Collection System (2005-10-01)
Authors: Veeraraghavan, Harini; Atev, Stefan; Masoud, Osama; Miller, Grant; Papanikolopoulos, Nikolaos P.
This report outlines a series of vision-based algorithms for data collection at traffic intersections. We propose an optimization-based camera-placement algorithm that obtains good spatial resolution while minimizing occlusions. A camera calibration algorithm is introduced, along with a calibration-guided user interface tool. Finally, a computationally simple data collection system using a multiple-cue tracker is presented. Extensive experimental analysis of the system was performed at three different traffic intersections. The report also presents solutions to the problem of reliable target detection and tracking in unconstrained outdoor environments as they pertain to vision-based data collection at traffic intersections.

Item: Freeway Network Traffic Detection and Monitoring Incidents (Minnesota Department of Transportation, 2007-10)
Authors: Joshi, Ajay J.; Atev, Stefan; Fehr, Duc; Drenner, Andrew; Bodor, Robert; Masoud, Osama; Papanikolopoulos, Nikolaos P.
We propose methods to distinguish between moving cast shadows and moving foreground objects in video sequences. Shadow detection is an important part of any surveillance system, as it makes object shape recovery possible and improves the accuracy of other statistics-collection components. Because most such systems assume video frames without shadows, shadows must be dealt with beforehand. We propose a multi-level shadow identification scheme that is generally applicable, with no restrictions on the number of light sources, illumination conditions, surface orientations, or object sizes. In the first step, we use a background segmentation technique to identify foreground regions that include moving shadows. In the second step, pixel-based decisions are made by comparing the current frame with the background model to distinguish between shadows and actual foreground. In the third step, this result is refined using blob-level reasoning over geometric constraints of the identified shadow and foreground blobs. Results on various sequences under different illumination conditions show the success of the proposed approach. In addition, we propose methods for the physical placement of cameras in a site so as to make the most of the number of cameras available.
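The pixel-level shadow test in the item above is described only at a high level. A common way to realize such a frame-versus-background comparison is the HSV heuristic sketched below, in which a cast shadow darkens the background without changing its chromaticity much. This is an illustrative sketch only; the function name, thresholds, and the HSV formulation are assumptions rather than details taken from the report.

```python
import cv2
import numpy as np

def classify_shadow_pixels(frame_bgr, background_bgr, fg_mask,
                           ratio_lo=0.4, ratio_hi=0.9,
                           sat_tol=40, hue_tol=10):
    """Label foreground pixels as cast shadow when they darken the
    background while keeping similar hue and saturation. All thresholds
    are illustrative defaults, not values from the report."""
    frame_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)
    bg_hsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)

    ratio = frame_hsv[..., 2] / np.maximum(bg_hsv[..., 2], 1)   # brightness attenuation
    similar_sat = np.abs(frame_hsv[..., 1] - bg_hsv[..., 1]) < sat_tol
    similar_hue = np.abs(frame_hsv[..., 0] - bg_hsv[..., 0]) < hue_tol

    shadow = ((ratio > ratio_lo) & (ratio < ratio_hi) &
              similar_sat & similar_hue & (fg_mask > 0))
    return shadow.astype(np.uint8) * 255    # binary shadow mask
```

Pixels flagged here would then feed the blob-level reasoning stage, which the report describes as enforcing geometric constraints between shadow and foreground blobs.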
Item: Image-Based Reconstruction for View-Independent Human Motion Recognition (2003-07-23)
Authors: Bodor, Robert; Jackson, Bennett; Masoud, Osama
In this paper, we introduce a novel method for employing image-based rendering to extend the range of use of human motion recognition systems. Image-based rendering can be utilized in two ways: i) to generate additional training sets for view-dependent recognition systems, containing a large number of non-orthogonal views, and ii) to generate orthogonal views (the views those systems are trained to recognize) from a combination of non-orthogonal views taken from several cameras. In this work, image-based rendering is used to automatically construct views orthogonal to the mean direction of motion from several non-orthogonal camera views. We tested the method using an existing view-dependent human motion recognition system on two different sequences of motion, and promising initial results were obtained.

Item: Managing Suburban Intersections through Sensing (2002-12-01)
Authors: Veeraraghavan, Harini; Masoud, Osama; Papanikolopoulos, Nikolaos P.
Increased urban sprawl and vehicular traffic have led to a growing number of traffic fatalities, the majority of which occur near intersections. According to the National Highway Traffic Safety Administration, one out of eight fatalities occurring at intersections is a pedestrian. An intelligent, real-time system capable of predicting situations leading to accidents or near misses would be very useful for improving the safety of pedestrians as well as vehicles. This project investigates the prediction of such situations using current traffic conditions and computer vision techniques. An intelligent system may gather and analyze scene data (e.g., vehicle and pedestrian positions, trajectories, and velocities) and provide the necessary warnings. The current work focuses on the monitoring aspect of the project. Solutions are proposed, issues with the current implementation are highlighted, and the low cost and operational characteristics of the proposed system are presented.

Item: Monitoring Driver Activities (2004-09-01)
Authors: Wahlstrom, Eric; Masoud, Osama; Papanikolopoulos, Nikolaos P.
Using the Framework for Processing Video developed by Osama Masoud at the University of Minnesota, this study sought to identify and analyze distractions to the driver both inside and outside the vehicle. A dashboard-mounted camera uses infrared light bursts to identify the pupil of the driver's eye. The software then tracks the relative position of the eye and pupil to make observations about the driver's gaze. The research also includes a method for measuring the driver's response to traffic and the interactions between the driver and the vehicle itself. The results will be used to study distractions to the driver and their effect on driver behavior in real road conditions.
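The driver-monitoring study above relies on infrared illumination to make the pupil stand out. A generic way to exploit that effect is bright-pupil/dark-pupil differencing, sketched below with OpenCV; the paired-frame setup, the threshold, and the largest-blob assumption are illustrative and are not taken from the project's implementation.

```python
import cv2
import numpy as np

def locate_pupil(bright_pupil_frame, dark_pupil_frame, thresh=40):
    """Rough sketch of IR-based pupil localization: an on-axis IR burst
    makes the pupil glow (red-eye effect) while an off-axis burst does
    not, so their difference highlights the pupil. Inputs are assumed to
    be aligned grayscale frames; the threshold is an illustrative value."""
    diff = cv2.absdiff(bright_pupil_frame, dark_pupil_frame)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)       # assume largest blob is the pupil
    (x, y), radius = cv2.minEnclosingCircle(pupil)
    return (x, y), radius
```

Tracking the returned center relative to the detected eye region over time is one way to approximate the gaze observations the abstract mentions.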
Item: Monitoring Weaving Sections (2001-10-01)
Authors: Masoud, Osama; Rogers, Scott; Papanikolopoulos, Nikolaos P.
Traffic control in highway weaving sections is complicated because vehicles cross paths, change lanes, or merge with through traffic as they enter or exit an expressway. There are two types of weaving sections: (a) single weaving sections, which have one point of entry upstream and one point of exit downstream; and (b) multiple weaving sections, which have more than one point of entry followed by more than one point of exit. Sensors based on lane detection fail to monitor weaving sections because they cannot track vehicles that cross lanes. The fundamental problem that needs to be addressed is establishing correspondence between a traffic object A in lane x and the same object A in lane y at a later time. For example, vision systems that depend on multiple detection zones simply cannot establish correspondences because they assume that vehicles stay in the same lane. The motivation behind this work is to compensate for inefficiencies in existing systems and to provide more comprehensive data about the weaving section being monitored. We use a vision sensor as our input; the rich information provided by vision sensors is essential for extracting information that may span several lanes of traffic. The information that our system provides includes (but is not limited to): (a) extraction of vehicles, and thus a count of vehicles; (b) the velocity of each vehicle in the weaving section; and (c) the direction of each vehicle (in fact a trajectory over time rather than a fixed direction, since vehicles may change direction while in a highway weaving section). The end product of this research is a portable system that can gather data from various weaving sections. Experimental results indicate the potential of the approach.

Item: Pedestrian Control at Intersections (Phase I) (Minnesota Department of Transportation, 1996-10)
Authors: Papanikolopoulos, Nikolaos P.; Masoud, Osama; Richards, Charles A.
This report describes a real-time system for tracking pedestrians in sequences of grayscale images acquired by a stationary camera. The system outputs the spatio-temporal coordinates of each pedestrian during the period when the pedestrian is visible. Implemented on a Datacube MaxVideo 20 equipped with a Datacube Max860, the system achieved a peak performance of over 30 frames per second. Experimental results based on indoor and outdoor scenes have shown that the system is robust under many difficult traffic situations. The system uses the "figure/ground" framework to accomplish the goal of pedestrian detection. The detection phase outputs tracked blobs (regions), which in turn pass to the final level, the pedestrian level. The pedestrian level deals with pedestrian models and depends on the tracked blobs as its only source of input. In this way, the researchers avoid trying to infer information about pedestrians directly from raw images, a process that is highly sensitive to noise. The pedestrian level makes use of Kalman filtering to predict and estimate pedestrian attributes; the filtered attributes constitute the output of this level and of the system. The system was designed to be robust to high levels of noise and, in particular, to deal with difficult situations such as partial or full occlusions of pedestrians. The report compares vision sensors with other types of possible sensors for the pedestrian control task and evaluates the use of active deformable models as an effective pedestrian tracking module.
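The pedestrian-tracking report above uses Kalman filtering to predict and estimate pedestrian attributes. As a rough illustration of the idea, and not the report's actual state model, a constant-velocity filter over image coordinates might look like the following; the state layout, noise magnitudes, and frame interval are assumed values.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over image coordinates,
    in the spirit of the pedestrian-level estimation described above.
    State is [x, y, vx, vy]; all noise settings are illustrative."""
    def __init__(self, x0, y0, dt=1 / 30, q=1.0, r=4.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 100.0                     # initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt               # position += velocity * dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0              # we only measure position
        self.Q = np.eye(4) * q                         # process noise
        self.R = np.eye(2) * r                         # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

A tracker built this way would call predict() once per frame and update() whenever the blob level supplies a measured pedestrian position.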
Item: Pedestrian Control at Intersections - Phase III (Minnesota Department of Transportation, 1998-04)
Authors: Masoud, Osama; Papanikolopoulos, Nikolaos P.
This report presents a real-time system for pedestrian tracking in sequences of grayscale images acquired by a stationary CCD (charge-coupled device) camera. The research objective involves integrating this system with a traffic control application, such as a pedestrian control scheme at intersections. The system outputs the spatio-temporal coordinates of each pedestrian during the period the pedestrian remains in the scene. The system processes at three levels: raw images, blobs, and pedestrians. It models blob tracking as a graph optimization problem and pedestrians as rectangular patches with a certain dynamic behavior. Kalman filtering is used to estimate pedestrian parameters. The system was implemented on a Datacube MaxVideo 20 equipped with a Datacube Max860 and on a Pentium-based PC, and achieved a peak performance of more than 20 frames per second. Experimental results based on indoor and outdoor scenes demonstrated the system's robustness under many difficult situations, such as partial or full occlusions of pedestrians.

Item: Pedestrian Control at Intersections: Phase IV (Minnesota Department of Transportation, 2000-02-01)
Authors: Masoud, Osama; Papanikolopoulos, Nikolaos P.
This report presents a real-time system for pedestrian tracking in sequences of grayscale images acquired by a stationary camera. The researchers also developed techniques for recognizing pedestrian actions, such as running and walking, and integrated the system with a pedestrian control scheme at intersections. The proposed approach can be used to detect and track humans in a variety of applications, and the proposed schemes can also be employed for the detection of several diverse traffic objects of interest, such as vehicles or bicycles. The system outputs the spatio-temporal coordinates of each pedestrian during the period that the pedestrian is in the scene. The system processes at three levels: raw images, blobs, and pedestrians. Experimental results based on indoor and outdoor scenes demonstrated the system's robustness under many difficult situations, such as partial or full occlusions of pedestrians. In particular, this report contains the results from a field test of the system conducted in November 1999.
Keywords: pedestrian detection and tracking, action recognition, pedestrian control at intersections

Item: Pedestrian Control Issues at Busy Intersections and Monitoring Large Crowds (2002-03-01)
Authors: Maurin, Benjamin; Masoud, Osama; Rogers, Scott; Papanikolopoulos, Nikolaos P.
The authors present a vision-based method for monitoring crowded urban scenes involving vehicles, individual pedestrians, and crowds. Based on optical flow, the proposed method detects, tracks, and monitors moving objects. Many problems confront researchers who attempt to track moving objects, especially in an outdoor environment: background detection, visual noise from weather, objects that move in different directions, and conditions that change from day to evening. Several visual detection systems have been proposed previously. This system captures speed and direction as well as position, velocity, acceleration or deceleration, bounding box, and shape features. It measures the movement of pixels within a scene and uses mathematical calculations to identify groups of points with similar movement characteristics. It is not limited by assumptions about the shape or size of objects but identifies objects based on similarity of pixel motion. Algorithms are used to determine the direction of crowd movement, crowd density, and the most heavily used areas. The speed of the software in calculating these variables depends on the quality of detection set in the first stage. Illustrations include video stills with measurement areas marked on day, evening, and indoor video sequences. The authors foresee that this system could be used for intersection control, collection of traffic data, and crowd control.
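The crowd-monitoring item above identifies groups of points with similar movement characteristics. A minimal sketch of that idea, using Farneback dense optical flow and k-means over pixel position and flow direction, is given below; the choice of flow algorithm, the feature weighting, and the cluster count are illustrative assumptions, not the authors' algorithm.

```python
import cv2
import numpy as np

def flow_clusters(prev_gray, cur_gray, min_speed=1.0, k=3):
    """Group moving pixels by position and motion direction.
    Returns per-pixel cluster labels and cluster centers, or None if
    too few pixels move. All parameters are illustrative."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    ys, xs = np.nonzero(mag > min_speed)               # keep only moving pixels
    if len(xs) < k:
        return None
    feats = np.stack([xs, ys,
                      10 * np.cos(ang[ys, xs]),        # weight direction vs. position
                      10 * np.sin(ang[ys, xs])], axis=1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(feats, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    return labels.ravel(), centers                     # coarse crowd groups
```

Statistics such as dominant crowd direction or density could then be read off the cluster sizes and center motion components.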
Item: Real-Time Collision Warning and Avoidance at Intersections (Minnesota Department of Transportation, 2004-11-01)
Authors: Atev, Stefan; Masoud, Osama; Janardan, Ravi; Papanikolopoulos, Nikolaos P.
Monitoring traffic intersections in real time and predicting possible collisions is an important first step towards building an early collision warning system. We present the general vision methods used in a system addressing this problem and describe the practical adaptations necessary to achieve real-time performance. A novel method for three-dimensional vehicle size estimation is presented. We also describe a method for target localization in real-world coordinates, which allows for sequential incorporation of measurements from multiple cameras into a single target's state vector. Additionally, a fast implementation of a false-positive reduction method for the foreground pixel masks is developed. Finally, a low-overhead collision prediction algorithm using the time-as-axis paradigm is presented. The proposed system was able to perform in real time on videos of quarter-VGA (320×240) resolution. The errors in target position and dimension estimates on a test video sequence are quantified.

Item: Sensor-based Ramp Monitoring (2003-05-01)
Authors: Papanikolopoulos, Nikolaos P.; Masoud, Osama; Wahlstrom, Eric
This report covers the creation of a system for monitoring vehicles in highway on-ramp queues. The initial phase of the project attempted to use a blob-tracking algorithm to perform the ramp monitoring. The current system uses optical flow information to create virtual features based on trends in the optical flow. These features are clustered to form vehicle objects, which update themselves based on their own statistics and those of other features in the image. The system has difficulty tracking vehicles when they stop in ramp queues and when they significantly occlude each other; however, it succeeds in detecting vehicles entering and exiting ramps and can record their motion statistics as they do so. Several experimental results from ramps in the Twin Cities are presented.

Item: Using Geometric Primitives to Calibrate Traffic Scenes (2004-06-22)
Authors: Masoud, Osama
In this paper, we address the problem of recovering the intrinsic and extrinsic parameters of a camera, or a group of cameras, overlooking a traffic scene. Unlike many other settings, conventional camera calibration techniques are not applicable in this case. We present a method that uses certain geometric primitives commonly found in traffic scenes in order to recover the calibration parameters. These primitives provide the needed redundancy and are weighted according to the significance of their corresponding image features. We show experimentally that these primitives are capable of achieving accurate results suitable for most traffic monitoring applications.
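The calibration paper above recovers full intrinsic and extrinsic parameters from weighted geometric primitives, and that formulation cannot be reconstructed from the abstract alone. As a much simpler stand-in for the general idea of calibrating a traffic scene from known road geometry, the sketch below fits an image-to-road-plane homography from a few hypothetical landmarks (for example, lane-marking corners separated by a measured 3.6 m lane width) and maps image pixels to road coordinates; all coordinates and the helper name are made up for illustration.

```python
import cv2
import numpy as np

# Hypothetical correspondences: image pixels of four lane-marking corners
# and their road-plane coordinates in metres (one lane width apart,
# 30 m downstream). These numbers are invented for illustration.
image_pts = np.array([[412, 510], [698, 515], [455, 330], [640, 332]],
                     dtype=np.float32)
ground_pts = np.array([[0.0, 0.0], [3.6, 0.0],
                       [0.0, 30.0], [3.6, 30.0]], dtype=np.float32)

H, _ = cv2.findHomography(image_pts, ground_pts)       # image -> road plane

def to_road_coords(u, v):
    """Map an image pixel onto road-plane metres via the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

print(to_road_coords(550, 420))    # rough position between the two lane lines
```

With such a mapping, pixel trajectories can be converted to metric positions and speeds, which is the kind of measurement the traffic-monitoring applications above rely on.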