Poor visibility conditions often lead to large-scale chain-reaction accidents that cause fatalities and property damage. Many such visibility-related accidents could have been prevented had motorists been warned ahead of time to reduce speed and remain cautious before entering the poor-visibility zone. The objective of this research was to advance visibility-measurement technologies that compute visibility by processing images captured by video cameras.

There are two fundamental difficulties in measuring visibility. The first is that visibility is a complex multivariable function of many parameters, such as the objects available, light sources, light scatter, and light absorption, so measurements of only one or two parameters (as in most of today's visibility meters) cannot accurately estimate true human-perceived visibility. At the same time, any attempt to measure every possible atmospheric parameter and derive human-perceived visibility from them is simply too complex and costly. The second difficulty arises from trying to express the spatially variant nature of atmospheric visibility as a single representative value, a distance; this works only when the atmosphere is uniform, which is rarely the case.

The solution presented in this report is to measure visibility from the visual properties of video images (the perceived information) rather than indirectly measuring physical properties of the atmosphere and converting them to visibility. The spatial-variance problem was solved by introducing a new concept of relative measurement of visual information, referred to as Relative Visibility (RV). This report also includes a study of the limitations of CCD cameras in visibility-measurement applications and shows how to overcome them through spatially arranged multiple targets. In addition, we explored various arrangements of Near-Infrared (NIR) light sources and cameras for measuring night visibility; those results are also included in the report.
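To make the idea of relative measurement concrete, the sketch below illustrates one plausible form of an RV computation. The report does not specify the exact metric in this summary, so the choices here are assumptions: "visual information" is approximated by gradient-magnitude (contrast) energy, target regions stand in for the spatially arranged targets, and RV is taken as the ratio of each target's energy in the current frame to its energy in a clear-day reference frame.

```python
import numpy as np

def contrast_energy(region):
    """Mean gradient-magnitude energy of a grayscale image region.

    Used here as a stand-in for 'visual information'; the actual metric
    in the report may differ (this is an illustrative assumption).
    """
    gy, gx = np.gradient(region.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def relative_visibility(current, reference, targets):
    """Illustrative Relative Visibility (RV) per target region.

    RV is computed as the ratio of visual information in the current frame
    to that of the same region in a clear-day reference frame, clipped to
    [0, 1]. `targets` is a list of (row, col, height, width) windows --
    hypothetical spatially arranged targets in the camera's field of view.
    """
    rvs = []
    for (r, c, h, w) in targets:
        cur = contrast_energy(current[r:r + h, c:c + w])
        ref = contrast_energy(reference[r:r + h, c:c + w])
        rvs.append(min(cur / ref, 1.0) if ref > 0 else 0.0)
    return rvs

# Example: fog reduces contrast uniformly, so RV drops below 1.
reference = np.tile(np.array([[0.0, 1.0], [1.0, 0.0]]), (8, 8))  # high-contrast pattern
foggy = 0.5 * reference + 0.25  # contrast halved, energy quartered
print(relative_visibility(foggy, reference, [(0, 0, 16, 16)]))
```

Because the metric is quadratic in contrast, halving the contrast of a target yields an RV of 0.25 in this sketch; a clear frame compared against itself yields 1.0. Using multiple targets at different distances gives a spatial profile of visibility rather than a single number.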