Scene flow estimation in depth videos has attracted the attention of machine vision researchers owing to its potential applications in various areas of robotics. Conventional scene flow estimation methods are difficult to use in real-time applications because of their high computational cost. We propose SceneFlowGAN, a conditional adversarial network for scene flow estimation. The proposed SceneFlowGAN applies loss functions at both ends of the network: the generator and the discriminator. The proposed network is a first attempt to estimate scene flow using generative adversarial networks, and it estimates both optical flow and disparity simultaneously from the input stereo images. The proposed method is evaluated on a large RGB-D scene flow benchmark dataset. © 2019 IEEE.
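The abstract mentions losses at both the generator and the discriminator. As an illustrative sketch only (the paper's exact loss terms and weights are not stated in the abstract; the L1 reconstruction term and the `l1_weight` value below are assumptions), a conditional GAN of this kind typically combines an adversarial term with a supervised term on the predicted flow and disparity maps:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy between discriminator scores in (0, 1) and labels.
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def discriminator_loss(d_real, d_fake):
    # Discriminator side: push scores on real (optical flow, disparity) pairs
    # toward 1 and scores on generated pairs toward 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake, pred, target, l1_weight=10.0):
    # Generator side: an adversarial term (fool the discriminator) plus an
    # assumed L1 reconstruction term on the predicted flow/disparity maps.
    adv = bce(d_fake, np.ones_like(d_fake))
    rec = float(np.mean(np.abs(pred - target)))
    return adv + l1_weight * rec
```

A discriminator that scores real samples high and generated samples low incurs a smaller loss than one that does the reverse, which is the signal the two-sided training objective exploits.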