Evaluating Machine Translation (MT) is a difficult and challenging task. The difficulty stems from the fact that translation is less a science than an art: most sentences can be translated into many acceptable forms, so there is no fixed standard against which a particular translation can be evaluated. Indeed, if an independent algorithm could reliably evaluate a given machine translation, that evaluation algorithm would arguably be better than the translation algorithm itself. Initially, MT evaluation was performed by human judges, which was time-consuming and highly subjective; evaluation results can also vary from one human evaluator to another for the same sentence pair. Automatic evaluation systems, which are quick and objective, are therefore needed. Several methods for the automatic evaluation of machine translation have been proposed in recent years, and many of them have been widely adopted by the MT community. In the proposed work, we examine the applicability of the BLEU metric and its modified versions to English-to-Hindi machine translation, particularly in the agriculture domain. We further incorporate additional features, namely synonym-replacement and shallow-parsing modules, and then compute the final score using the BLEU and M-BLEU metrics. The test sentences are taken from the agriculture domain. The BLEU metric does not account for synonyms: it treats a synonym as a different word, which lowers the computed score when machine translations are compared with human reference translations. To overcome this drawback of BLEU, we incorporate a synonym-replacement module into our algorithm. First, a candidate word is replaced by its synonym if that synonym is present in any of the reference human translations; the candidate is then compared against the reference human translations.
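The synonym-replacement step described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the synonym table is a toy stand-in for a real lexicon, and the BLEU computation is a simplified, smoothed sentence-level variant of the standard metric.

```python
from collections import Counter
import math

# Toy synonym table standing in for a real lexicon (assumption for illustration).
SYNONYMS = {"quick": {"fast", "rapid"}, "crop": {"harvest"}}

def replace_synonyms(candidate, reference):
    """If a candidate word is absent from the reference but one of its
    synonyms appears there, substitute that synonym before scoring."""
    ref_words = set(reference)
    out = []
    for w in candidate:
        if w in ref_words:
            out.append(w)
        else:
            match = next((s for s in SYNONYMS.get(w, ()) if s in ref_words), None)
            out.append(match if match else w)
    return out

def bleu(candidate, references, max_n=4):
    """Simplified sentence-level BLEU: clipped n-gram precisions combined
    geometrically, with add-one smoothing and a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n])
                              for i in range(len(candidate) - n + 1))
        max_ref = Counter()
        for ref in references:
            ref_ngrams = Counter(tuple(ref[i:i + n])
                                 for i in range(len(ref) - n + 1))
            for g, c in ref_ngrams.items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append((clipped + 1) / (total + 1))  # add-one smoothing
    # Brevity penalty against the closest-length reference.
    ref_len = min((abs(len(r) - len(candidate)), len(r)) for r in references)[1]
    bp = 1.0 if len(candidate) >= ref_len else math.exp(1 - ref_len / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = ["the", "fast", "brown", "fox", "jumps"]
cand = ["the", "quick", "brown", "fox", "jumps"]
plain = bleu(cand, [ref])
syn = bleu(replace_synonyms(cand, ref), [ref])
print(plain < syn)  # synonym replacement raises the score
```

Here "quick" is rewritten as "fast" because "fast" occurs in the reference, so the candidate is no longer penalised for an acceptable lexical choice; this mirrors the intuition behind the synonym-replacement module, though the paper's actual module operates on Hindi translations with a proper synonym resource.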
© 2016 IEEE.