In this paper, we propose a globally normalized model for context-free grammar (CFG)-based semantic parsing. Instead of predicting a probability, our model predicts a real-valued score at each step and does not suffer from the label bias problem. Experiments show that our approach outperforms locally normalized models on small datasets, but it does not yield improvement on a large dataset.
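The contrast between the two normalization schemes is easy to see in code. Below is a minimal, self-contained Python sketch, not the paper's actual model: the per-step rule scores and the two-candidate beam are invented for illustration. A locally normalized decoder applies a softmax at every step and sums the chosen log-probabilities, so each step's outgoing mass must sum to one regardless of later evidence (the root of label bias); a globally normalized decoder sums raw scores along the derivation and normalizes once over whole candidate sequences, e.g. within a beam.

    import math

    def log_softmax(scores):
        # Numerically stable log-softmax over a list of raw scores.
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        return [s - log_z for s in scores]

    def local_log_prob(step_scores, actions):
        # Locally normalized: normalize at EACH step, then sum the
        # chosen log-probabilities. Per-step normalization is what
        # makes the model prone to label bias.
        return sum(log_softmax(step)[a] for step, a in zip(step_scores, actions))

    def global_score(step_scores, actions):
        # Globally normalized: sum raw (unnormalized) real-valued
        # scores; normalization happens once, over whole sequences.
        return sum(step[a] for step, a in zip(step_scores, actions))

    # Two hypothetical 2-step derivations over 3 grammar rules.
    step_scores = [[2.0, 0.1, -1.0], [0.5, 3.0, -0.5]]
    cand_a, cand_b = [0, 1], [1, 0]

    # Global model: one softmax over sequence-level scores in the beam.
    seq_scores = [global_score(step_scores, c) for c in (cand_a, cand_b)]
    beam_probs = [math.exp(lp) for lp in log_softmax(seq_scores)]
    print(beam_probs)

In this toy setup the sequence-level softmax is taken over the beam candidates only, which mirrors the usual practical approximation: the true partition function over all derivations is intractable, so training and inference score a pruned candidate set.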
BibTeX
@inproceedings{huang-etal-2021-globally,
    title = "A Globally Normalized Neural Model for Semantic Parsing",
    author = "Huang, Chenyang and
      Yang, Wei and
      Cao, Yanshuai and
      Za{\"\i}ane, Osmar and
      Mou, Lili",
    booktitle = "Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.spnlp-1.7",
    doi = "10.18653/v1/2021.spnlp-1.7",
    pages = "61--66",
    abstract = "In this paper, we propose a globally normalized model for context-free grammar (CFG)-based semantic parsing. Instead of predicting a probability, our model predicts a real-valued score at each step and does not suffer from the label bias problem. Experiments show that our approach outperforms locally normalized models on small datasets, but it does not yield improvement on a large dataset.",
}
Related Research
- A High-level Overview of Large Language Models. W. Zi, L. El Asri, and S. Prince.
- ACL 2023 Recommended Reading List. P. Forsyth, K. Tang, and W. Zi.
- RBC Borealis at International Conference on Learning Representations (ICLR): Machine Learning for a better financial future. Learning And Generalization; Natural Language Processing; Time series Modelling.