Machine learning is central to many AI problems, but learned models depend heavily on task-specific training data. While a Bayesian setup can incorporate prior knowledge into learning, such models cannot draw on structured world knowledge on demand. Our goal is to enable machines to estimate and make presumptions in ordinary everyday situations as humans do. In this paper we address such commonsense questions with a textual inference framework that uses external, structured commonsense knowledge graphs to perform explainable inference. The framework first grounds a question-answer pair from the semantic space into the symbolic space of external knowledge as a schema graph, a related subgraph of the knowledge graph. It then represents schema graphs with a novel knowledge-aware graph network module and scores answers with the resulting graph representations. Our model combines graph networks and LSTMs with a hierarchical path-based attention mechanism; the intermediate attention scores make it transparent and interpretable, yielding trustworthy inferences. Using ConceptNet as the only external resource for BERT-based models, we achieve state-of-the-art performance on CommonsenseQA, a large-scale dataset for commonsense reasoning. © 2021 Institute of Physics Publishing. All rights reserved.
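The attention mechanism described above can be illustrated with a minimal sketch: each relational path between question and answer concepts is encoded as a sequence of concept embeddings, and an attention layer pools the path encodings into one graph representation while exposing interpretable per-path scores. This is only a toy numpy illustration under stated assumptions, not the authors' implementation: the tanh RNN stands in for the LSTM path encoder, the dot-product scoring is a simplification of the hierarchical attention, and all names (`rnn_encode`, `attention_pool`, `query`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_encode(path, W_x, W_h):
    """Encode a path (a sequence of concept vectors) with a minimal tanh RNN;
    stands in for the LSTM path encoder described in the abstract."""
    h = np.zeros(W_h.shape[0])
    for x in path:
        h = np.tanh(W_x @ x + W_h @ h)
    return h

def attention_pool(encodings, query):
    """Softmax-weight each path encoding by its dot product with a query
    vector and pool them -- the path-level half of hierarchical attention."""
    scores = np.array([e @ query for e in encodings])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    pooled = sum(w * e for w, e in zip(weights, encodings))
    return pooled, weights

d = 8                        # toy embedding / hidden size
W_x = rng.normal(scale=0.3, size=(d, d))
W_h = rng.normal(scale=0.3, size=(d, d))
query = rng.normal(size=d)   # e.g. an encoding of the question-answer statement

# Two hypothetical paths between a question concept and an answer concept,
# each given as a sequence of concept embeddings.
paths = [rng.normal(size=(3, d)), rng.normal(size=(4, d))]
encodings = [rnn_encode(p, W_x, W_h) for p in paths]
pooled, weights = attention_pool(encodings, query)

print(weights)  # per-path attention scores, inspectable for interpretability
```

The `weights` vector is what makes the inference transparent: a reader can see which relational path contributed most to the answer score, which is the interpretability property the abstract claims.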