The problem of language translation has persisted in society for a long time. To some extent it has been mitigated by freely available online machine translation systems such as Google Translate, Bing Translator, and Babelfish. But with the emergence of these machine translation systems arises the problem of their validation. Can we trust such translation systems blindly? Is there no scope for improvement? Are these machine translation systems not prone to errors? The answer to all these questions is no. We therefore need a mechanism that can test and assess these machine translation systems. In this paper, we propose an algorithm that evaluates such systems. We compare our algorithm with the well-known BLEU algorithm, which works well for non-Indian languages. The accuracy of the proposed algorithm is evaluated on standard datasets such as TIDES and EMILLE. © BEIESP.
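For context, BLEU (the baseline the abstract compares against) scores a candidate translation by clipped n-gram precision against a reference, combined with a brevity penalty. The abstract does not give any formulas, so the following is only a minimal sentence-level sketch of standard BLEU (uniform weights, single reference), not the authors' proposed algorithm:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty.
    candidate and reference are lists of tokens."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing in this sketch
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: punish candidates shorter than the reference.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_avg)

# A perfect match scores 1.0; a fully disjoint candidate scores 0.0.
ref = "the cat sat on the mat".split()
print(bleu(ref, ref))            # → 1.0
print(bleu("a b c d e".split(), "v w x y z".split()))  # → 0.0
```

Production evaluations typically use a smoothed, corpus-level BLEU with multiple references; this sketch only illustrates the core computation being used as the comparison baseline.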