The presence of haze in the atmosphere degrades the quality of videos captured by camera sensors. The removal of haze, referred to as dehazing, is typically performed under a physical degradation model, which requires the solution of an ill-posed inverse problem. Considerable effort has been devoted to single image dehazing, whereas video dehazing remains a relatively unexplored area of research. This paper proposes an approach to video dehazing that combines the concepts of single image dehazing, optical flow estimation, and Markov Random Fields (MRF). The proposed method enhances the temporal and spatial coherence of the hazy video. Assuming that the dark channel of the haze-free image is zero, we obtain a raw transmission map from the dark channel prior and refine it using a guided filter. We estimate the forward and backward optical flows between neighboring frames to locate individual pixels using Linear Discriminant Analysis. The colors of the haze-free pixels in the frames are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The pixels in a given cluster are spread across the frame, and the value of each pixel after haze removal can be predicted by analyzing the forward and backward optical flows. The Large Margin Nearest Neighbor (LMNN) algorithm is applied to obtain a smooth transmission map of the foggy frames and to approximate pixel values in RGB space. The flow fields are used in an augmented MRF model on the obtained transmission map to further enhance the temporal and spatial coherence of the transmission. Experiments on both real and synthetic videos show that the proposed method compares favorably with the state-of-the-art while preserving image information. © 2018 ACM.
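The raw transmission estimate mentioned above follows the standard dark channel prior formulation: if the dark channel of the haze-free image is zero, then t(x) ≈ 1 − ω · min-filter(I(x)/A). The following is a minimal NumPy sketch of that step only, not the authors' implementation; the patch size, the ω parameter, and the brightest-0.1% heuristic for the atmospheric light A are assumptions, and the guided-filter refinement and video-specific stages are omitted.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a min-filter over a patch x patch window."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, omega=0.95, patch=15):
    """Raw transmission t(x) = 1 - omega * dark_channel(I / A) for a float RGB frame in [0, 1]."""
    dc = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% of dark-channel pixels
    # (a common heuristic, assumed here rather than taken from the paper).
    n = max(1, int(dc.size * 0.001))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / A, patch)
    # Clip to avoid a degenerate (near-zero) transmission during scene recovery.
    return np.clip(t, 0.1, 1.0), A
```

In a video setting, this per-frame raw map would then be refined by the guided filter and made temporally coherent via the optical-flow-driven MRF described in the abstract.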