Enhancing Table-to-Text Generation with Numerical Reasoning Using Graph2Seq Models
DOI: https://doi.org/10.25124/ijies.v8i02.236

Keywords: Table-to-Text Generation, Graph, GraphSage-RNN, GCN-RNN, Hallucination

Abstract
Interpreting tabular data as narratives is necessary because tables cannot explain their own contents. In addition, there is a need to produce more analytical narratives from the results of numerical reasoning over table data. The sequence-to-sequence (Seq2Seq) encoder-decoder structure is the most widely used in table-to-text generation (T2XG). However, Seq2Seq requires linearizing the table, which can discard structural information and cause hallucination problems. Alternatively, the graph-to-sequence (Graph2Seq) encoder-decoder structure uses a graph encoder to better capture the important information in the data, and several studies have shown that Graph2Seq outperforms Seq2Seq. This study therefore applies Graph2Seq to T2XG, leveraging the structured nature of tables, which can naturally be represented as graphs. It initiates the use of Graph2Seq in T2XG with GCN-RNN and GraphSage-RNN models, aiming to improve narrative generation from tables through enhanced numerical reasoning. In automatic evaluation, Graph2Seq performs on par with the baseline model on the T2XG task. In human evaluation, however, GraphSage-RNN reduces the likelihood of hallucinations in the generated text more effectively than both the baseline model and GCN-RNN.
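To illustrate the core idea the abstract describes (this is a minimal sketch, not the paper's implementation), a table can be turned into a graph by treating each cell as a node and connecting cells that share a row or a column; a GraphSage-style layer then updates each node by mean-aggregating its neighbours' features. All function names and the toy 1-dimensional cell features below are illustrative assumptions.

```python
def table_to_graph(table):
    """Map an R x C table of numeric cells to node features and an adjacency list.

    Nodes are cells; edges connect cells in the same row or the same column,
    which is one simple way to preserve the table structure that Seq2Seq
    linearization would discard.
    """
    rows, cols = len(table), len(table[0])
    node_id = lambda r, c: r * cols + c
    features = [float(v) for row in table for v in row]  # toy 1-d features
    neighbours = {node_id(r, c): [] for r in range(rows) for c in range(cols)}
    for r in range(rows):
        for c in range(cols):
            for c2 in range(cols):            # same-row edges
                if c2 != c:
                    neighbours[node_id(r, c)].append(node_id(r, c2))
            for r2 in range(rows):            # same-column edges
                if r2 != r:
                    neighbours[node_id(r, c)].append(node_id(r2, c))
    return features, neighbours


def graphsage_mean_layer(features, neighbours):
    """One GraphSage-style step: blend each node with the mean of its neighbours."""
    updated = []
    for v, h_v in enumerate(features):
        nbr = neighbours[v]
        agg = sum(features[u] for u in nbr) / len(nbr) if nbr else 0.0
        updated.append((h_v + agg) / 2.0)   # toy combine; real models use learned weights
    return updated


table = [[1.0, 2.0],
         [3.0, 4.0]]
feats, adj = table_to_graph(table)
print(graphsage_mean_layer(feats, adj))  # → [1.75, 2.25, 2.75, 3.25]
```

In a full Graph2Seq pipeline, the node embeddings produced by stacked graph-encoder layers like this would feed an RNN decoder that generates the narrative text.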