ID: cmp-lg/9604017

Fast Parsing using Pruning and Grammar Specialization

April 26, 1996


Similar papers

Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents

May 3, 2018

86% Match
Anuj Goyal, Angeliki Metallinou, Spyros Matsoukas
Computation and Language

Fast expansion of the natural language functionality of intelligent virtual agents is critical for achieving engaging and informative interactions. However, developing accurate models for new natural language domains is a time- and data-intensive process. We propose efficient deep neural network architectures that maximally reuse available resources through transfer learning. Our methods are applied for expanding the understanding capabilities of a popular commercial agent and ar...
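As a rough illustration of the transfer-learning idea in this abstract, reusing an encoder trained on existing domains and training only a new classification head for the target domain, here is a minimal PyTorch-style sketch. The encoder layout, layer sizes, and toy data are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical shared encoder pretrained on existing NLU domains.
# In practice this would be a recurrent or transformer sentence encoder.
encoder = nn.Sequential(
    nn.Embedding(num_embeddings=5000, embedding_dim=64),  # toy vocabulary
    nn.Flatten(start_dim=1),
    nn.Linear(64 * 10, 128),  # assumes fixed-length (10-token) inputs
    nn.ReLU(),
)

# Freeze the reused parameters so only the new domain's head is trained.
for p in encoder.parameters():
    p.requires_grad = False

# New, randomly initialised intent classifier for the target domain.
new_domain_head = nn.Linear(128, 7)   # e.g. 7 intents in the new domain

model = nn.Sequential(encoder, new_domain_head)
optimizer = torch.optim.Adam(new_domain_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: 4 utterances of 10 token ids each, with intent labels.
tokens = torch.randint(0, 5000, (4, 10))
labels = torch.tensor([0, 3, 2, 6])

for _ in range(5):                     # a few illustrative updates
    optimizer.zero_grad()
    loss = loss_fn(model(tokens), labels)
    loss.backward()
    optimizer.step()
```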


Robust Probabilistic Predictive Syntactic Processing

May 9, 2001

86% Match
Brian Roark
Computation and Language

This thesis presents a broad-coverage probabilistic top-down parser, and its application to the problem of language modeling for speech recognition. The parser builds fully connected derivations incrementally, in a single pass from left-to-right across the string. We argue that the parsing approach that we have adopted is well-motivated from a psycholinguistic perspective, as a model that captures probabilistic dependencies between lexical items, as part of the process of bui...
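The incremental top-down strategy described here can be illustrated with a toy probabilistic grammar. The sketch below uses best-first search over partial derivations for brevity, rather than the beam-pruned single-pass parser of the thesis; the grammar, probabilities, and example sentence are invented.

```python
import heapq
from math import log

# Toy PCFG: LHS -> list of (RHS tuple, probability).  Entirely hypothetical;
# the thesis itself works with broad-coverage treebank-derived grammars.
RULES = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("the", "N"), 0.6), (("dogs",), 0.4)],
    "VP": [(("V", "NP"), 0.7), (("V",), 0.3)],
    "N":  [(("dog",), 0.5), (("cat",), 0.5)],
    "V":  [(("sees",), 1.0)],
}

def best_parse_cost(words):
    """Best-first top-down search over partial derivations.  Each agenda item
    is (neg-logprob, prediction stack, next word index); derivations are built
    left to right, so the word at index i is consumed before index i + 1."""
    agenda = [(0.0, ("S",), 0)]
    while agenda:
        cost, stack, i = heapq.heappop(agenda)
        if not stack:
            if i == len(words):
                return cost          # first completed item is the cheapest parse
            continue
        top, rest = stack[0], stack[1:]
        if top in RULES:             # nonterminal: predict each expansion
            for rhs, p in RULES[top]:
                heapq.heappush(agenda, (cost - log(p), rhs + rest, i))
        elif i < len(words) and words[i] == top:
            heapq.heappush(agenda, (cost, rest, i + 1))   # scan the next word
    return None

print(best_parse_cost("the dog sees dogs".split()))  # neg log-prob of best derivation
```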


Domain Adaptation for Semantic Parsing

June 23, 2020

86% Match
Zechang Li, Yuxuan Lai, ... , Zhao Dongyan
Computation and Language

Recently, semantic parsing has attracted much attention in the community. Although many neural modeling efforts have greatly improved performance, it still suffers from the data scarcity issue. In this paper, we propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain. Our semantic parser benefits from a two-stage coarse-to-fine framework, thus can provide different and accurate treat...
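A minimal sketch of the coarse-to-fine decomposition mentioned above: a first stage predicts a domain-general "sketch" of the logical form, and a second stage fills in the domain-specific details. The rule-based stages, templates, schema, and slot names below are invented placeholders; in the paper both stages are learned neural models.

```python
def coarse_stage(utterance: str) -> str:
    """Stage 1: map the utterance to a coarse sketch with open slots."""
    if utterance.lower().startswith("how many"):
        return "count ( filter ( <TABLE> , <FIELD> = <VALUE> ) )"
    return "list ( <TABLE> )"

def fine_stage(utterance: str, sketch: str, schema: dict) -> str:
    """Stage 2: instantiate the sketch's slots for the target domain."""
    tokens = utterance.lower().rstrip("?").split()
    field = next((f for f in schema["fields"] if f in tokens), "<FIELD>")
    value = next((t for t in tokens if t.isdigit()), "<VALUE>")
    return (sketch.replace("<TABLE>", schema["table"])
                  .replace("<FIELD>", field)
                  .replace("<VALUE>", value))

# Toy target-domain schema and query.
schema = {"table": "flights", "fields": ["airline", "year", "price"]}
utterance = "How many flights have price 450?"
sketch = coarse_stage(utterance)
print(fine_stage(utterance, sketch, schema))
# count ( filter ( flights , price = 450 ) )
```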


A fast partial parse of natural language sentences using a connectionist method

March 22, 1995

86% Match
Caroline Lyon, Bob Dickerson (University of Hertfordshire)
Computation and Language

The pattern matching capabilities of neural networks can be used to locate syntactic constituents of natural language. This paper describes a fully automated hybrid system, using neural nets operating within a grammatical framework. It addresses the representation of language for connectionist processing, and describes methods of constraining the problem size. The function of the network is briefly explained, and results are given.


A New Statistical Parser Based on Bigram Lexical Dependencies

May 6, 1996

86% Match
Michael Collins (University of Pennsylvania)
Computation and Language

This paper describes a new statistical parser which is based on probabilities of dependencies between head-words in the parse tree. Standard bigram probability estimation techniques are extended to calculate probabilities of dependencies between pairs of words. Tests using Wall Street Journal data show that the method performs at least as well as SPATTER (Magerman 95, Jelinek et al 94), which has the best published results for a statistical parser on this task. The simplicity...
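The word-pair dependency probabilities described here can be approximated with simple relative-frequency counts. The sketch below is a toy version of that idea rather than the paper's actual estimation scheme, and the training triples are invented.

```python
from collections import Counter

# Toy (head word, modifier word, relation) dependency triples, as would be
# read off training parse trees.  Purely illustrative data.
training_dependencies = [
    ("bought", "IBM", "subject"),
    ("bought", "Lotus", "object"),
    ("bought", "yesterday", "adjunct"),
    ("sold", "IBM", "subject"),
    ("sold", "shares", "object"),
]

pair_counts = Counter((h, m) for h, m, _ in training_dependencies)
triple_counts = Counter(training_dependencies)

def dependency_prob(head, modifier, relation):
    """Relative-frequency estimate of P(relation | head, modifier) -- a
    bigram-style estimate extended to word pairs.  A real parser would back
    off to part-of-speech tags to cope with unseen pairs."""
    pair = pair_counts[(head, modifier)]
    return triple_counts[(head, modifier, relation)] / pair if pair else 0.0

# A candidate parse tree is scored by the product of the probabilities of the
# head-modifier dependencies it contains.
print(dependency_prob("bought", "IBM", "subject"))    # 1.0 in the toy counts
print(dependency_prob("bought", "shares", "object"))  # 0.0: unseen pair, hence the need for backoff
```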


Learning Unification-Based Natural Language Grammars

February 3, 1995

86% Match
Miles Osborne (Dept. of Computer Science, University of York, York, England)
Computation and Language

When parsing unrestricted language, wide-coverage grammars often undergenerate. Undergeneration can be tackled either by sentence correction, or by grammar correction. This thesis concentrates upon automatic grammar correction (or machine learning of grammar) as a solution to the problem of undergeneration. Broadly speaking, grammar correction approaches can be classified as being either {\it data-driven}, or {\it model-based}. Data-driven learners use data-intensive methods ...


Data-Oriented Language Processing. An Overview

November 14, 1996

86% Match
Rens Bod, Remko Scha (University of Amsterdam)
Computation and Language

During the last few years, a new approach to language processing has started to emerge, which has become known under various labels such as "data-oriented parsing", "corpus-based interpretation", and "tree-bank grammar" (cf. van den Berg et al. 1994; Bod 1992-96; Bod et al. 1996a/b; Bonnema 1996; Charniak 1996a/b; Goodman 1996; Kaplan 1996; Rajman 1995a/b; Scha 1990-92; Sekine & Grishman 1995; Sima'an et al. 1994; Sima'an 1995-96; Tugwell 1995). This approach, which we will c...
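A toy rendering of the data-oriented idea: a derivation is built by composing tree fragments seen in a treebank, and a fragment's probability is its relative frequency among fragments with the same root label. The bracketed fragments and the example derivation below are invented for illustration and are not from the overview.

```python
from collections import Counter

# Hypothetical multiset of tree fragments extracted from a toy treebank,
# written as bracketed strings; "..." marks an open substitution site.
treebank_fragments = [
    "(S (NP she) (VP ...))",
    "(S (NP ...) (VP ...))",
    "(NP she)",
    "(NP the man)",
    "(VP (V saw) (NP ...))",
    "(VP slept)",
]

fragment_counts = Counter(treebank_fragments)
root_counts = Counter(f.split()[0].lstrip("(") for f in treebank_fragments)

def fragment_prob(fragment):
    """Relative frequency of a fragment among fragments sharing its root."""
    root = fragment.split()[0].lstrip("(")
    return fragment_counts[fragment] / root_counts[root]

# One derivation of "she slept": substitute (NP she) and (VP slept) into the
# open sites of the (S (NP ...) (VP ...)) fragment.
derivation = ["(S (NP ...) (VP ...))", "(NP she)", "(VP slept)"]
prob = 1.0
for frag in derivation:
    prob *= fragment_prob(frag)
print(prob)  # product of the fragments' relative frequencies
```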


Supervised Grammar Induction Using Training Data with Limited Constituent Information

May 2, 1999

86% Match
Rebecca Hwa
Computation and Language

Corpus-based grammar induction generally relies on hand-parsed training data to learn the structure of the language. Unfortunately, the cost of building large annotated corpora is prohibitively expensive. This work aims to improve the induction strategy when there are few labels in the training data. We show that the most informative linguistic constituents are the higher nodes in the parse trees, typically denoting complex noun phrases and sentential clauses. They account fo...


A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures

June 19, 2020

86% Match
Meishan Zhang
Computation and Language

Syntactic and semantic parsing has been investigated for decades and remains a primary topic in the natural language processing community. This article aims to give a brief survey of the topic. Parsing encompasses many tasks, which are difficult to cover fully; here we focus on two of the most popular formalizations: constituent parsing and dependency parsing. Constituent parsing mainly targets syntactic analysis, and dependency parsing can han...


Heuristics and Parse Ranking

August 28, 1995

86% Match
B. Srinivas, Christine Doran, Seth Kulick
Computation and Language

There are currently two philosophies for building grammars and parsers: statistically induced grammars and wide-coverage grammars. One way to combine the strengths of both approaches is to have a wide-coverage grammar with a heuristic component that is domain-independent but whose contribution is tuned to particular domains. In this paper, we discuss a three-stage approach to disambiguation in the context of a lexicalized grammar, using a variety of domain-independent heur...
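The heuristic-ranking idea can be sketched as a weighted combination of domain-independent scoring functions whose weights are tuned per domain. The heuristics, weights, and parse features below are invented placeholders, not the ones used in the paper.

```python
# Domain-independent heuristics over candidate parses (represented here as
# plain dicts of toy features extracted from each analysis).
def fewer_nodes(parse):                 # prefer simpler analyses
    return -parse["num_nodes"]

def prefer_right_attachment(parse):     # prefer low/right attachment of modifiers
    return parse["right_attachments"]

def fewer_coordinations(parse):         # penalise spurious coordination readings
    return -parse["coordinations"]

HEURISTICS = [fewer_nodes, prefer_right_attachment, fewer_coordinations]

# The heuristics stay fixed across domains; only these weights would be
# re-tuned on held-out data for each new domain.
DOMAIN_WEIGHTS = [0.5, 1.0, 0.25]

def rank(parses):
    def score(parse):
        return sum(w * h(parse) for w, h in zip(DOMAIN_WEIGHTS, HEURISTICS))
    return sorted(parses, key=score, reverse=True)

candidates = [
    {"id": 1, "num_nodes": 14, "right_attachments": 2, "coordinations": 0},
    {"id": 2, "num_nodes": 12, "right_attachments": 1, "coordinations": 1},
]
print([p["id"] for p in rank(candidates)])  # [1, 2] with the toy weights
```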
