ID: cmp-lg/9604017

Fast Parsing using Pruning and Grammar Specialization

April 26, 1996

Similar papers (page 2)

A Probabilistic Generative Grammar for Semantic Parsing

June 21, 2016

87% Match
Abulhair Saparov
Computation and Language
Machine Learning

Domain-general semantic parsing is a long-standing goal in natural language processing, where the semantic parser is capable of robustly parsing sentences from domains outside of which it was trained. Current approaches largely rely on additional supervision from new domains in order to generalize to those domains. We present a generative model of natural language utterances and logical forms and demonstrate its application to semantic parsing. Our approach relies on domain-i...
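As a rough illustration of the generative view (not Saparov's actual model), semantic parsing can be framed as inverting a process that first draws a logical form from a prior and then generates the utterance from it; the toy logical forms and probabilities below are invented for the sketch.

```python
import math
from typing import Dict, Tuple

# Toy generative view of semantic parsing: a logical form lf is drawn from a
# prior p(lf), then an utterance is generated from p(utterance | lf).
# Parsing inverts the process with Bayes' rule.
prior: Dict[str, float] = {
    "answer(capital(germany))": 0.5,
    "answer(population(germany))": 0.5,
}
likelihood: Dict[Tuple[str, str], float] = {
    ("what is the capital of germany", "answer(capital(germany))"): 0.8,
    ("what is the capital of germany", "answer(population(germany))"): 0.01,
}

def parse(utterance: str) -> str:
    """Return the argmax over logical forms of p(lf) * p(utterance | lf)."""
    return max(prior, key=lambda lf: math.log(prior[lf])
               + math.log(likelihood.get((utterance, lf), 1e-9)))

print(parse("what is the capital of germany"))  # -> answer(capital(germany))
```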

Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation

June 5, 2019

87% Match
Masashi Yoshikawa, Hiroshi Noji, ..., Daisuke Bekki
Computation and Language

We propose a new domain adaptation method for Combinatory Categorial Grammar (CCG) parsing, based on the idea of automatic generation of CCG corpora exploiting cheaper resources of dependency trees. Our solution is conceptually simple, and not relying on a specific parser architecture, making it applicable to the current best-performing parsers. We conduct extensive parsing experiments with detailed discussion; on top of existing benchmark datasets on (1) biomedical texts and...

An Efficient Distribution of Labor in a Two Stage Robust Interpretation Process

June 17, 1997

87% Match
Carolyn Penstein Rosé, Alon Lavie
Computation and Language

Although Minimum Distance Parsing (MDP) offers a theoretically attractive solution to the problem of extragrammaticality, it is often computationally infeasible in large scale practical applications. In this paper we present an alternative approach where the labor is distributed between a more restrictive partial parser and a repair module. Though two stage approaches have grown in popularity in recent years because of their efficiency, they have done so at the cost of requir...
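The division of labor described above lends itself to a simple pipeline sketch; the Fragment type and the stub stages below are hypothetical illustrations of the two-stage idea, not the interfaces of the authors' system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Fragment:
    start: int      # token span covered by this partial analysis
    end: int
    analysis: str   # placeholder for a syntactic/semantic structure

def two_stage_interpret(tokens: List[str],
                        partial_parse: Callable[[List[str]], List[Fragment]],
                        repair: Callable[[List[Fragment], List[str]], Fragment]) -> Fragment:
    """Stage 1: a restrictive partial parser produces fragment analyses cheaply.
    Stage 2: a repair module assembles fragments only when stage 1 falls short."""
    fragments = partial_parse(tokens)
    full = [f for f in fragments if f.start == 0 and f.end == len(tokens)]
    if full:
        return full[0]                  # grammatical input: no repair needed
    return repair(fragments, tokens)    # extragrammatical input: repair the pieces

# Toy usage with stub stages.
toks = ["show", "me", "flights", "umm", "to", "boston"]
stub_partial = lambda ts: [Fragment(0, 3, "SHOW(flights)"), Fragment(4, 6, "DEST(boston)")]
stub_repair = lambda frags, ts: Fragment(0, len(ts), " & ".join(f.analysis for f in frags))
print(two_stage_interpret(toks, stub_partial, stub_repair).analysis)
# -> SHOW(flights) & DEST(boston)
```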

Exploiting Diversity in Natural Language Processing: Combining Parsers

June 1, 2000

87% Match
John C. Henderson, Eric Brill
Computation and Language

Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy. Two general approaches are presented and two combination techniques are described for each approach. Both parametric and non-parametric models are explored. The resulting parsers surpass the best previously published performance results for the Penn Treebank.
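The non-parametric side of parser combination can be illustrated with constituent voting over labeled spans. The sketch below assumes each parse is given as a set of (label, start, end) tuples and is a minimal rendering of the general idea rather than Henderson and Brill's exact procedure; one appeal of requiring a strict majority is that the selected constituents cannot cross.

```python
from collections import Counter

def combine_parses(parses):
    """Keep a constituent if more than half of the parsers propose it.

    Each parse is a set of (label, start, end) tuples over the same sentence.
    """
    votes = Counter(c for parse in parses for c in set(parse))
    majority = len(parses) / 2
    return {c for c, n in votes.items() if n > majority}

# Example: three parsers, two of which agree on the VP span (2, 5).
p1 = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)}
p2 = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)}
p3 = {("S", 0, 5), ("NP", 0, 3), ("VP", 3, 5)}
print(combine_parses([p1, p2, p3]))  # keeps S(0,5), NP(0,2), VP(2,5)
```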

Trading off Completeness for Efficiency -- The ParseTalk Performance Grammar Approach to Real-World Text Parsing

May 15, 1996

87% Match
Peter Neuhaus, Udo Hahn
Computation and Language

We argue for a performance-based design of natural language grammars and their associated parsers in order to meet the constraints posed by real-world natural language understanding. This approach incorporates declarative and procedural knowledge about language and language use within an object-oriented specification framework. We discuss several message passing protocols for real-world text parsing and provide reasons for sacrificing completeness of the parse in favor of eff...

Look-up and Adapt: A One-shot Semantic Parser

October 27, 2019

87% Match
Zhichu Lu, Forough Arabshahi, ..., Tom Mitchell
Computation and Language
Machine Learning

Computing devices have recently become capable of interacting with their end users via natural language. However, they can only operate within a limited "supported" domain of discourse and fail drastically when faced with an out-of-domain utterance, mainly due to the limitations of their semantic parser. In this paper, we propose a semantic parser that generalizes to out-of-domain examples by learning a general strategy for parsing an unseen utterance through adapting the log...

Apportioning Development Effort in a Probabilistic LR Parsing System through Evaluation

April 12, 1996

87% Match
John Carroll (University of Sussex), Ted Briscoe (University of Cambridge)
Computation and Language

We describe an implemented system for robust domain-independent syntactic parsing of English, using a unification-based grammar of part-of-speech and punctuation labels coupled with a probabilistic LR parser. We present evaluations of the system's performance along several different dimensions; these enable us to assess the contribution that each individual part is making to the success of the system as a whole, and thus prioritise the effort to be devoted to its further enha...

Learning Dynamic Feature Selection for Fast Sequential Prediction

May 22, 2015

87% Match
Emma Strubell, Luke Vilnis, ..., Andrew McCallum
Computation and Language
Machine Learning

We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequ...
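The core trick of ordering feature templates and exiting once the prediction is already confident can be sketched for a linear binary classifier as follows; the per-template thresholds are hypothetical placeholders for whatever confidence criterion is actually learned.

```python
from typing import Dict, List

def classify_with_early_exit(template_feats: List[List[str]],
                             weights: Dict[str, float],
                             thresholds: List[float]) -> int:
    """Accumulate the sparse dot product template by template, ordered from most
    to least informative, and stop as soon as the partial score clears the
    current template's confidence margin (a sketch, not the paper's algorithm)."""
    score = 0.0
    for t, feats in enumerate(template_feats):
        score += sum(weights.get(f, 0.0) for f in feats)
        if abs(score) >= thresholds[t]:   # confident enough: skip the remaining templates
            break
    return 1 if score > 0 else -1

# Toy usage: two templates, decision reached after the first one.
weights = {"tag-1=DT": 2.5, "w=the": -0.1, "suffix=ing": 0.3}
print(classify_with_early_exit([["tag-1=DT"], ["w=the", "suffix=ing"]],
                               weights, thresholds=[2.0, 0.0]))  # -> 1
```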

Statistical Decision-Tree Models for Parsing

April 29, 1995

87% Match
David M. Magerman
Computation and Language

Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text, as is evidenced by their poor performance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to text-processing in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques which constructs a complete parse for every sentence and achieves accuracy rates fa...

Grammar Specialization through Entropy Thresholds

May 25, 1994

87% Match
Christer Samuelsson (Swedish Institute of Computer Science)
Computation and Language

Explanation-based generalization is used to extract a specialized grammar from the original one using a training corpus of parse trees. This allows very much faster parsing and gives a lower error rate, at the price of a small loss in coverage. Previously, it has been necessary to specify the tree-cutting criteria (or operationality criteria) manually; here they are derived automatically from the training set and the desired coverage of the specialized grammar. This is done b...
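The entropy criterion can be sketched as follows: estimate from the treebank how uncertain each category's expansion distribution is, and flag categories whose entropy exceeds a threshold. This is a simplified rendering of the idea; the actual tree-cutting and rule-flattening machinery of the paper is omitted, and the categories and counts are invented.

```python
import math
from collections import Counter

def expansion_entropy(counts: Counter) -> float:
    """Entropy (in bits) of the expansion distribution observed for a category."""
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def high_entropy_categories(expansions, threshold):
    """Flag categories whose observed expansions are unpredictable enough to
    exceed the threshold; these serve as candidate cut points when specializing
    the grammar from the training treebank."""
    return {cat for cat, counts in expansions.items()
            if expansion_entropy(counts) > threshold}

# Toy expansion counts gathered from a training treebank.
expansions = {
    "NP": Counter({"DT NN": 40, "NP PP": 25, "PRP": 20, "NNP": 15}),
    "AUXV": Counter({"will": 95, "can": 5}),
}
print(high_entropy_categories(expansions, threshold=1.0))  # -> {'NP'}
```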
