Visual statistical learning (VSL) refers to our ability to extract environmental regularities and use them to guide visual perception and behavior. Previous research on VSL has emphasized its role in shaping the perception of repeated visual statistics, such as the consistent association between pairs of novel shapes. However, much less is known about the nature of such learning and its utility in guiding spatial attention and visuomotor action. This dissertation examines visual statistical learning in two major domains: perception, and attention and visuomotor action. Part I focuses on how people learn a consistent association between multiple novel shapes. I show that such learning depends critically on conscious awareness of the visual statistics. Part II provides evidence that when people perform tasks such as visual search, they can extract consistent visual statistics (e.g., the frequent locations of the search target) without explicit awareness of those statistics. Together, these findings demonstrate that different forms of VSL may depend on distinct cognitive mechanisms.