ID: cmp-lg/9708013

Explanation-Based Learning of Data Oriented Parsing

August 20, 1997

Similar papers

Training Dependency Parsers with Partial Annotation

September 29, 2016

85% Match
Zhenghua Li, Yue Zhang, ... , Min Zhang
Computation and Language
Machine Learning

Recently, there has been a surge of interest in obtaining partially annotated data for model supervision. However, a systematic study of how to train statistical models with partial annotation (PA) is still lacking. Taking dependency parsing as our case study, this paper describes and compares two straightforward approaches for three mainstream dependency parsers. The first approach is previously proposed to directly train a log-linear graph-based parser (LLGPar) with PA ba...
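
To make this concrete: training a log-linear parser with PA amounts to treating the unannotated heads as latent variables and maximizing the marginal probability of all structures consistent with the annotation. Below is a minimal brute-force sketch of that marginal likelihood, not LLGPar itself; the arc scores, the tiny sentence, and the "forest rooted at ROOT" approximation of dependency structures are all illustrative assumptions.

```python
import itertools, math

# Toy arc-factored log-scores: score[(h, d)] for head h -> dependent d,
# where 0 is the artificial ROOT. Invented numbers, not learned weights.
n = 3  # tokens 1..3
score = {(h, d): 0.1 * (h + d)
         for h in range(n + 1) for d in range(1, n + 1) if h != d}

def is_valid(heads):
    # heads[d-1] is the head of token d; require every token to reach
    # ROOT (0) without cycles (a forest rooted at 0, for simplicity).
    for d in range(1, n + 1):
        seen, cur = set(), d
        while cur != 0:
            if cur in seen:
                return False
            seen.add(cur)
            cur = heads[cur - 1]
    return True

def log_partition(allowed):
    # log-sum-exp of structure scores over structures whose arcs all
    # respect the `allowed` head sets (brute force; fine for tiny n).
    totals = []
    for heads in itertools.product(range(n + 1), repeat=n):
        if not is_valid(heads):
            continue
        if any(heads[d - 1] not in allowed[d] for d in range(1, n + 1)):
            continue
        totals.append(sum(score[(heads[d - 1], d)] for d in range(1, n + 1)))
    m = max(totals)
    return m + math.log(sum(math.exp(s - m) for s in totals))

full = {d: set(range(n + 1)) - {d} for d in range(1, n + 1)}
partial = dict(full)
partial[2] = {1}  # partial annotation: token 2's head is known to be token 1

# PA objective for one sentence: log-probability mass of all structures
# consistent with the annotation (a latent-variable log-likelihood).
print(log_partition(partial) - log_partition(full))
```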

An improved parser for data-oriented lexical-functional analysis

September 27, 2000

85% Match
Rens Bod
Computation and Language

We present an LFG-DOP parser which uses fragments from LFG-annotated sentences to parse new sentences. Experiments with the Verbmobil and Homecentre corpora show that (1) Viterbi n-best search performs about 100 times faster than Monte Carlo search while both achieve the same accuracy; (2) the DOP hypothesis, which states that parse accuracy increases with increasing fragment size, is confirmed for LFG-DOP; (3) LFG-DOP's relative frequency estimator performs worse than a discou...
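
For reference, the relative frequency estimator mentioned in (3) is simple to state: a fragment's probability is its corpus count divided by the total count of fragments sharing its root category. A tiny sketch with invented fragment counts:

```python
from collections import Counter

# Hypothetical fragment counts, as if extracted from an annotated corpus.
fragments = [
    ("S",  "S -> NP VP"),
    ("NP", "NP -> DT NN"),
    ("NP", "NP -> DT NN"),
    ("NP", "NP -> PRP"),
]
counts = Counter(fragments)
root_totals = Counter(root for root, _ in fragments)

def rel_freq(root, frag):
    # DOP relative frequency: count(f) / total count of fragments
    # sharing f's root category.
    return counts[(root, frag)] / root_totals[root]

print(rel_freq("NP", "NP -> DT NN"))  # 2/3
```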

Tree-gram Parsing: Lexical Dependencies and Structural Relations

November 6, 2000

85% Match
Khalil Sima'an
Computation and Language
Artificial Intelligence
Human-Computer Interaction

This paper explores the kinds of probabilistic relations that are important in syntactic disambiguation. It proposes that two widely used kinds of relations, lexical dependencies and structural relations, have complementary disambiguation capabilities. It presents a new model based on structural relations, the Tree-gram model, and reports experiments showing that structural relations should benefit from enrichment by lexical dependencies.
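
The complementarity claim can be pictured with a toy combination of the two knowledge sources; the log-linear interpolation below is an illustrative assumption, not the Tree-gram model's actual parameterization.

```python
# Illustrative only: combine a structural score and a lexical-dependency
# score for two competing PP attachments. The interpolation weight and
# all numbers are invented.
def combined_log_score(struct_logp, lex_logp, lam=0.6):
    return lam * struct_logp + (1 - lam) * lex_logp

candidates = {  # parse -> (structural log P, lexical-dependency log P)
    "attach-PP-to-verb": (-2.1, -1.0),
    "attach-PP-to-noun": (-1.8, -3.5),
}
best = max(candidates, key=lambda p: combined_log_score(*candidates[p]))
# The structural model alone prefers noun attachment (-1.8 > -2.1), but
# the lexical dependencies flip the decision:
print(best)  # attach-PP-to-verb
```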

Dynamic Oracles for Top-Down and In-Order Shift-Reduce Constituent Parsing

October 25, 2018

85% Match
Daniel Fernández-González, Carlos Gómez-Rodríguez
Computation and Language

We introduce novel dynamic oracles for training two of the most accurate known shift-reduce algorithms for constituent parsing: the top-down and in-order transition-based parsers. In both cases, the dynamic oracles notably increase accuracy compared with classic static training. In addition, by improving the performance of the state-of-the-art in-order shift-reduce parser, we achieve the best accuracy to date (92.0 F1) obtained ...
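
Schematically, what distinguishes dynamic-oracle training from static training is that the oracle returns a best action for every reachable state, so the parser can be trained while following its own, possibly erroneous, predictions. The self-contained toy below sketches that loop; the task, features, and perceptron updates are invented for illustration and are not the paper's transition systems.

```python
import random

# A toy of dynamic-oracle training with error exploration. The learner
# emits one action per step; its features include the previously emitted
# action, so a mistake changes later states. The "dynamic oracle" is
# defined at every state: here, the loss-minimizing action is simply the
# gold action at the current position, even after earlier mistakes.
random.seed(0)
GOLD = ["SHIFT", "SHIFT", "REDUCE-L", "SHIFT", "REDUCE-R"]
ACTIONS = ["SHIFT", "REDUCE-L", "REDUCE-R"]

def features(i, prev):
    return [("pos", i), ("prev", prev)]

def score(w, feats, a):
    return sum(w.get((f, a), 0.0) for f in feats)

def train(epochs=50, p_explore=0.9):
    w = {}
    for _ in range(epochs):
        prev = "<s>"
        for i, oracle_action in enumerate(GOLD):  # dynamic oracle lookup
            feats = features(i, prev)
            pred = max(ACTIONS, key=lambda a: score(w, feats, a))
            if pred != oracle_action:             # perceptron-style update
                for f in feats:
                    w[(f, oracle_action)] = w.get((f, oracle_action), 0.0) + 1.0
                    w[(f, pred)] = w.get((f, pred), 0.0) - 1.0
            # Exploration: usually follow the model's own (maybe wrong)
            # action, so training visits states off the gold path.
            prev = pred if random.random() < p_explore else oracle_action
    return w

w = train()
prev, out = "<s>", []
for i in range(len(GOLD)):
    a = max(ACTIONS, key=lambda x: score(w, features(i, prev), x))
    out.append(a)
    prev = a
print(out)  # with enough epochs this should recover GOLD
```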

The TreeBanker: a Tool for Supervised Training of Parsed Corpora

May 7, 1997

85% Match
David Carter (SRI International, Cambridge)
Computation and Language

I describe the TreeBanker, a graphical tool for the supervised training involved in domain customization of the disambiguation component of a speech- or language-understanding system. The TreeBanker presents a user, who need not be a system expert, with a range of properties that distinguish competing analyses for an utterance and that are relatively easy to judge. This allows training on a corpus to be completed in far less time, and with far less expertise, than would be ne...
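
The core move can be sketched in a few lines: describe each competing analysis by a set of easy-to-judge properties, then repeatedly ask the judge about a property that discriminates among the surviving analyses. Everything below (parses, properties, the simulated judge) is invented for illustration; it is not the TreeBanker's actual interface.

```python
# Toy discriminant-based disambiguation: asking about properties that
# split the surviving analyses quickly narrows them down to one.
analyses = {
    "high-attach": {"'with the telescope' modifies 'saw'",
                    "'the man' is the object of 'saw'"},
    "low-attach":  {"'with the telescope' modifies 'man'",
                    "'the man' is the object of 'saw'"},
}
# Stand-in for the human judge: the set of properties that actually hold.
truth = {"'with the telescope' modifies 'saw'",
         "'the man' is the object of 'saw'"}

def discriminants(live):
    # Properties holding in some, but not all, surviving analyses.
    props = set().union(*live.values())
    return [p for p in props
            if 0 < sum(p in ps for ps in live.values()) < len(live)]

live = dict(analyses)
while len(live) > 1:
    d = discriminants(live)[0]
    answer = d in truth                     # one yes/no judgement
    live = {name: ps for name, ps in live.items() if (d in ps) == answer}
print("selected analysis:", list(live))
```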

Learning Dynamic Feature Selection for Fast Sequential Prediction

May 22, 2015

85% Match
Emma Strubell, Luke Vilnis, ... , Andrew McCallum
Computation and Language
Machine Learning

We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequ...
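
The idea can be sketched as follows: group features into an ordered sequence of templates and stop evaluating templates once the running score margin clears a confidence threshold. The templates, weights, and threshold below are illustrative assumptions, not the paper's learned ordering.

```python
# Early-stopping evaluation of a linear classifier over ordered templates.
TEMPLATES = [  # each template maps an input token to (feature, value) pairs
    lambda tok: [("word=" + tok, 1.0)],
    lambda tok: [("suffix2=" + tok[-2:], 1.0)],
    lambda tok: [("len", float(len(tok)))],
]

def classify(weights, tok, labels=("NOUN", "VERB"), margin_threshold=2.0):
    scores = {y: 0.0 for y in labels}
    for t, template in enumerate(TEMPLATES):
        for feat, val in template(tok):
            for y in labels:
                scores[y] += weights.get((feat, y), 0.0) * val
        ranked = sorted(scores.values(), reverse=True)
        if ranked[0] - ranked[1] >= margin_threshold:  # confident: stop early
            return max(scores, key=scores.get), t + 1  # templates used
    return max(scores, key=scores.get), len(TEMPLATES)

w = {("word=run", "VERB"): 3.0, ("suffix2=un", "VERB"): 0.5}
print(classify(w, "run"))  # -> ('VERB', 1): decided after one template
```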

Pearl: A Probabilistic Chart Parser

May 3, 1994

85% Match
David M. Magerman, Mitchell P. Marcus
Computation and Language

This paper describes a natural language parsing algorithm for unrestricted text which uses a probability-based scoring function to select the "best" parse of a sentence. The parser, Pearl, is a time-asynchronous bottom-up chart parser with Earley-type top-down prediction which pursues the highest-scoring theory in the chart, where the score of a theory represents the extent to which the context of the sentence predicts that interpretation. This parser differs from previous at...
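
A compact best-first chart parser in the same spirit (though far simpler than Pearl) keeps an agenda of edges ordered by score and expands the best edge first; with log-probabilities, the first time the goal edge is popped it is guaranteed to be the best parse. The grammar and sentence below are toy inventions.

```python
import heapq

RULES = {  # (B, C) -> list of (A, log_prob) for rule A -> B C
    ("NP", "VP"): [("S", -0.1)],
    ("DT", "NN"): [("NP", -0.3)],
    ("V", "NP"):  [("VP", -0.2)],
}
LEX = {"the": [("DT", -0.1)], "dog": [("NN", -0.5)],
       "saw": [("V", -0.4)], "cat": [("NN", -0.7)]}

def parse(words, goal="S"):
    agenda, chart = [], {}          # chart: (label, i, j) -> best log-prob
    for i, w in enumerate(words):
        for label, lp in LEX[w]:
            heapq.heappush(agenda, (-lp, label, i, i + 1))
    while agenda:
        neg, label, i, j = heapq.heappop(agenda)
        lp, key = -neg, (label, i, j)
        if key in chart:            # a better derivation was already popped
            continue
        chart[key] = lp
        if key == (goal, 0, len(words)):
            return lp               # best-first: first goal pop is optimal
        for (l2, k, j2), lp2 in list(chart.items()):
            if k == j:              # combine [i,j] + [j,j2]
                for a, rlp in RULES.get((label, l2), []):
                    heapq.heappush(agenda, (-(lp + lp2 + rlp), a, i, j2))
            if j2 == i:             # combine [k,i] + [i,j]
                for a, rlp in RULES.get((l2, label), []):
                    heapq.heappush(agenda, (-(lp2 + lp + rlp), a, k, j))
    return None

print(parse("the dog saw the cat".split()))
```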

Statistical Decision-Tree Models for Parsing

April 29, 1995

85% Match
David M. Magerman
Computation and Language

Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text, as is evidenced by their poor performance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to text-processing in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques which constructs a complete parse for every sentence and achieves accuracy rates fa...
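
As a rough illustration of the underlying idea (not SPATTER itself), a decision tree over local contextual features can supply the probability distribution used to score parsing decisions. This sketch assumes scikit-learn and invents toy features and actions.

```python
# Requires scikit-learn. A decision tree over local context features
# produces action probabilities a parser can use to score derivations.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

X_dicts = [
    {"word": "the",   "prev_tag": "<s>"},
    {"word": "dog",   "prev_tag": "DT"},
    {"word": "barks", "prev_tag": "NN"},
]
y = ["shift-DT", "shift-NN", "reduce-S"]  # hypothetical parser actions

vec = DictVectorizer()
X = vec.fit_transform(X_dicts).toarray()
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

query = vec.transform([{"word": "dog", "prev_tag": "DT"}]).toarray()
print(dict(zip(tree.classes_, tree.predict_proba(query)[0])))
```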

Learning to Resolve Natural Language Ambiguities: A Unified Approach

November 3, 1998

85% Match
Dan Roth
Computation and Language
Machine Learning

We analyze a few of the commonly used statistics-based and machine-learning algorithms for natural language disambiguation tasks and observe that they can be re-cast as learning linear separators in the feature space. Each of the methods makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data ...
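
One of the algorithms that fits this linear-separator view is Winnow, whose multiplicative updates learn a linear threshold function over features. The sketch below applies a Winnow-style learner to a toy two-way lexical disambiguation task; the data, feature names, and threshold choice are invented.

```python
# A Winnow-style learner (multiplicative promotions/demotions) cast as a
# linear separator for a toy "their vs. there" style disambiguation task.
def winnow_train(examples, alpha=1.5, epochs=10):
    w = {}
    for _ in range(epochs):
        for feats, label in examples:  # label is +1 or -1
            s = sum(w.setdefault(f, 1.0) for f in feats)
            pred = 1 if s >= len(feats) else -1  # threshold = n features
            if pred != label:
                for f in feats:
                    w[f] *= alpha if label == 1 else 1 / alpha
    return w

data = [
    ({"w-1=over", "w+1=house"}, 1),
    ({"w-1=is",   "w+1=a"},    -1),
    ({"w-1=near", "w+1=house"}, 1),
    ({"w-1=was",  "w+1=no"},   -1),
]
weights = winnow_train(data)
print(sorted(weights.items(), key=lambda kv: -kv[1])[:3])
```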

Learning Dependencies between Case Frame Slots

May 12, 1996

85% Match
Hang Li, Naoki Abe (Theory NEC Lab., RWCP)
Computation and Language

We address the problem of automatically acquiring case frame patterns (selectional patterns) from large corpus data. In particular, we propose a method of learning dependencies between case frame slots. We view the problem of learning case frame patterns as that of learning multi-dimensional discrete joint distributions, where random variables represent case slots. We then formalize the dependencies between case slots as the probabilistic dependencies between these random var...
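
Viewing case slots as random variables makes the notion of dependency measurable: for instance, the empirical mutual information between two slots indicates whether they should be modeled jointly or independently. A sketch with invented observations for the verb "fly":

```python
import math
from collections import Counter

# Empirical mutual information between two case slots from toy data.
observations = [  # (subject-slot value, object-slot value)
    ("bird", "NONE"), ("bird", "NONE"), ("pilot", "plane"),
    ("airline", "plane"), ("bird", "NONE"), ("pilot", "jet"),
]

def mutual_information(pairs):
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# High MI suggests the two slots should be modeled jointly, not independently.
print(f"I(subj; obj) = {mutual_information(observations):.3f} bits")
```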
