Monocular depth estimation is a fundamentally challenging problem in computer vision. It is useful for robotics applications where design constraints prohibit the use of multiple cameras, and it also finds widespread use in autonomous driving. Since the task is to estimate depth from a single image, rather than from two or more, a global perspective of the scene is required. Pixel-wise losses, such as reconstruction loss and left-right consistency loss, capture local scene information but do not take global scene consistency into account. Generative Adversarial Networks (GANs) effectively capture the global structure of a scene and produce realistic-looking images, which makes them promising for depth estimation from a single image. This work focuses on using adversarial training, in combination with pixel-wise losses, for a supervised monocular depth estimation task. We observe that, with minimal depth-supervised training, a number of the GAN variants explored achieve a significant reduction in depth-estimation error.
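The combination of a local pixel-wise term with a global adversarial term can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: the function name, the weighting factor `lam`, and the choice of a non-saturating generator loss are all assumptions made for the sketch.

```python
import numpy as np

def combined_depth_loss(pred_depth, gt_depth, disc_scores, lam=10.0):
    """Sketch of a generator objective mixing a pixel-wise L1 term
    with an adversarial term (hypothetical weighting `lam`).

    pred_depth, gt_depth : arrays of predicted / ground-truth depth maps.
    disc_scores          : discriminator probabilities in (0, 1) assigned
                           to the predicted depth maps.
    """
    # Local, pixel-wise supervision: mean absolute error over all pixels.
    l1 = np.abs(pred_depth - gt_depth).mean()
    # Global, adversarial supervision: non-saturating generator loss,
    # which is low when the discriminator scores the prediction as real.
    adv = -np.log(disc_scores + 1e-8).mean()
    return lam * l1 + adv

pred = np.full((4, 4), 0.5)
gt = np.full((4, 4), 0.6)
scores = np.array([0.9])
loss = combined_depth_loss(pred, gt, scores)
```

The pixel-wise term anchors the prediction to the ground truth locally, while the adversarial term penalizes depth maps the discriminator can distinguish from real ones, encouraging global plausibility.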