ID: cmp-lg/9708012

Encoding Frequency Information in Lexicalized Grammars

August 19, 1997

Similar papers (page 4)

Formal Aspects of Language Modeling

November 7, 2023

85% Match
Ryan Cotterell, Anej Svete, Clara Meister, ... , Li Du
Computation and Language

Large language models have become one of the most commonly deployed NLP inventions. In the past half-decade, their integration into core natural language processing tools has dramatically increased the performance of such tools, and they have entered the public discourse surrounding artificial intelligence. Consequently, it is important for both developers and researchers alike to understand the mathematical foundations of large language models, as well as how to implement th...

Frequency vs. Association for Constraint Selection in Usage-Based Construction Grammar

April 11, 2019

85% Match
Jonathan Dunn
Computation and Language

A usage-based Construction Grammar (CxG) posits that slot-constraints generalize from common exemplar constructions. But what is the best model of constraint generalization? This paper evaluates competing frequency-based and association-based models across eight languages using a metric derived from the Minimum Description Length paradigm. The experiments show that association-based models produce better generalizations across all languages by a significant margin.
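
A minimal sketch of the contrast, assuming a directional ΔP-style association measure of the kind common in usage-based work; the `delta_p` helper and all counts are illustrative, not the paper's implementation:

```python
# Sketch (not the paper's code): contrast a raw-frequency score with a
# directional association score (Delta P) for a candidate slot-filler.

def delta_p(cooc: int, cue_total: int, filler_total: int, corpus_total: int) -> float:
    """Delta P(filler | cxn): P(filler | construction) - P(filler | not construction)."""
    p_given_cue = cooc / cue_total
    p_given_not_cue = (filler_total - cooc) / (corpus_total - cue_total)
    return p_given_cue - p_given_not_cue

# Hypothetical counts for a verb in a construction's slot.
cooc = 120           # filler tokens inside the construction's slot
cue_total = 400      # total tokens of the construction
filler_total = 900   # total tokens of the filler anywhere in the corpus
corpus_total = 100_000

freq_score = cooc    # a frequency-based constraint score is just the count
assoc_score = delta_p(cooc, cue_total, filler_total, corpus_total)
print(freq_score, round(assoc_score, 4))
```

A frequent filler can still score low on association if it is frequent everywhere; that is the distinction the paper's MDL-based evaluation probes.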

Exploiting auxiliary distributions in stochastic unification-based grammars

August 25, 2000

85% Match
Mark Johnson, Stefan Riezler
Computation and Language

This paper describes a method for estimating conditional probability distributions over the parses of "unification-based" grammars which can utilize auxiliary distributions that are estimated by other means. We show how this can be used to incorporate information about lexical selectional preferences gathered from other sources into Stochastic "Unification-based" Grammars (SUBGs). While we apply this estimator to a Stochastic Lexical-Functional Grammar, the method is gene...
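
A minimal sketch of the general idea, assuming a conditional log-linear model over the parses of a sentence in which an auxiliary distribution's log-probability enters as one more weighted feature; the names and numbers below are hypothetical, not the authors' code:

```python
import math

def parse_score(features: dict[str, float], weights: dict[str, float],
                aux_logprob: float, aux_weight: float) -> float:
    """Unnormalized log score of one parse; the auxiliary log-prob is a feature."""
    s = sum(weights.get(f, 0.0) * v for f, v in features.items())
    return s + aux_weight * aux_logprob

def conditional_probs(parses: list[dict], weights: dict[str, float],
                      aux_weight: float) -> list[float]:
    """Normalize scores over the competing parses of a single sentence."""
    scores = [parse_score(p["features"], weights, p["aux_logprob"], aux_weight)
              for p in parses]
    z = max(scores)
    exps = [math.exp(s - z) for s in scores]  # numerically stabilized softmax
    total = sum(exps)
    return [e / total for e in exps]

# Two competing parses of one sentence; aux_logprob would come from an
# externally estimated lexical selectional-preference model.
parses = [
    {"features": {"rule:VP->V NP": 1.0}, "aux_logprob": -2.1},
    {"features": {"rule:VP->V PP": 1.0}, "aux_logprob": -4.7},
]
weights = {"rule:VP->V NP": 0.3, "rule:VP->V PP": 0.5}
print(conditional_probs(parses, weights, aux_weight=1.0))
```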

Exploiting Diversity in Natural Language Processing: Combining Parsers

June 1, 2000

85% Match
John C. Henderson, Eric Brill
Computation and Language

Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy. Two general approaches are presented and two combination techniques are described for each approach. Both parametric and non-parametric models are explored. The resulting parsers surpass the best previously published performance results for the Penn Treebank.
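
A minimal sketch of one non-parametric combination style in this vein, constituent-level majority voting; the representation and spans are illustrative:

```python
from collections import Counter

# Sketch: keep every labeled span that appears in a strict majority of the
# input parses. Each parse is a set of (label, start, end) constituents.

def combine(parses: list[set[tuple[str, int, int]]]) -> set[tuple[str, int, int]]:
    votes = Counter(c for parse in parses for c in parse)
    threshold = len(parses) / 2
    return {c for c, n in votes.items() if n > threshold}

p1 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
p2 = {("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)}
p3 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
print(sorted(combine([p1, p2, p3])))
# [('NP', 0, 2), ('S', 0, 5), ('VP', 2, 5)]: each appears in at least 2 of 3 parses
```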

Exploiting Syntactic Structure for Language Modeling

November 12, 1998

84% Match
Ciprian Chelba, Frederick Jelinek (CLSP, The Johns Hopkins University)
Computation and Language

The paper presents a language model that develops syntactic structure and uses it to extract meaningful information from the word history, thus enabling the use of long-distance dependencies. The model assigns a probability to every joint sequence of words and binary parse structure with headword annotation, and operates in a left-to-right manner, so it is usable for automatic speech recognition. The model, its probabilistic parameterization, and a set of experiments meant to e...
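
A minimal sketch of how such a model can exploit syntactic heads: the next-word probability is a mixture over surviving partial parses of the history, each contributing its own head-conditioned prediction weighted by its probability. The hypotheses and numbers below are invented for illustration:

```python
def next_word_prob(word: str, hypotheses: list[dict]) -> float:
    """Mix head-conditioned predictions, weighted by normalized parse probability."""
    total = sum(h["prob"] for h in hypotheses)
    return sum(
        (h["prob"] / total) * h["predict"].get(word, 1e-6)
        for h in hypotheses
    )

# Two partial parses of "the dog near the cat ...", exposing different heads
# to the predictor, which an n-gram over raw words could not do.
hypotheses = [
    {"prob": 0.7, "predict": {"barked": 0.30, "meowed": 0.01}},  # exposed head: dog
    {"prob": 0.3, "predict": {"barked": 0.02, "meowed": 0.25}},  # exposed head: cat
]
print(next_word_prob("barked", hypotheses))  # 0.7*0.30 + 0.3*0.02 = 0.216
```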

Estimating Lexical Priors for Low-Frequency Syncretic Forms

April 24, 1995

84% Match
Harald Baayen (Max Planck Institute for Psycholinguistics), Richard Sproat (AT&T Bell Laboratories)
Computation and Language

Given a previously unseen form that is morphologically n-ways ambiguous, what is the best estimator for the lexical prior probabilities for the various functions of the form? We argue that the best estimator is provided by computing the relative frequencies of the various functions among the hapax legomena, the forms that occur exactly once in a corpus. This result has important implications for the development of stochastic morphological taggers, especially when some init...
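
A minimal sketch of the proposed estimator, assuming a tagged corpus; the toy data and the `hapax_priors` helper are illustrative:

```python
from collections import Counter

# Sketch: estimate the prior of each morphological function from its relative
# frequency among the hapax legomena (forms occurring exactly once).

def hapax_priors(tagged_tokens: list[tuple[str, str]]) -> dict[str, float]:
    form_counts = Counter(form for form, _ in tagged_tokens)
    hapax_tags = [tag for form, tag in tagged_tokens if form_counts[form] == 1]
    tag_counts = Counter(hapax_tags)
    total = sum(tag_counts.values())
    return {tag: n / total for tag, n in tag_counts.items()}

corpus = [("walks", "VBZ"), ("walks", "NNS"), ("runs", "VBZ"),
          ("glides", "VBZ"), ("oboes", "NNS")]  # 'runs', 'glides', 'oboes' are hapaxes
print(hapax_priors(corpus))  # {'VBZ': 2/3, 'NNS': 1/3}
```

The intuition is that an unseen form behaves statistically like the rarest observed forms, so the hapaxes, not the whole corpus, give the right reference distribution.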

On Unsupervised Training of Link Grammar Based Language Models

August 27, 2022

84% Match
Nikolay Mikhaylovskiy
Computation and Language
Artificial Intelligence

In this short note we explore what is needed for the unsupervised training of graph language models based on link grammars. First, we introduce the termination tags formalism required to build a language model based on the link grammar formalism of Sleator and Temperley [21] and discuss the influence of context on the unsupervised learning of link grammars. Second, we propose a statistical link grammar formalism, allowing for statistical language generation. Third, based on t...

Grammar Induction for Minimalist Grammars using Variational Bayesian Inference : A Technical Report

October 31, 2017

84% Match
Eva Portelance, Amelia Bruno, Daniel Harasim, ... , Timothy J. O'Donnell
Computation and Language

The following technical report presents a formal approach to probabilistic minimalist grammar parameter estimation. We describe a formalization of a minimalist grammar. We then present an algorithm for the application of variational Bayesian inference to this formalization.

Probabilistic top-down parsing and language modeling

May 8, 2001

84% Match
Brian Roark
Computation and Language

This paper describes the functioning of a broad-coverage probabilistic top-down parser, and its application to the problem of language modeling for speech recognition. The paper first introduces key notions in language modeling and probabilistic parsing, and briefly reviews some previous approaches to using syntactic structure for language modeling. A lexicalized probabilistic top-down parser is then presented, which performs very well, in terms of both the accuracy of return...
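
A minimal sketch of the probabilistic backbone only, assuming a toy PCFG: in a top-down, leftmost derivation the parse probability is the product of the probabilities of the rules used, and a language-model score for the string can be obtained by summing over the analyses kept in a beam. The grammar and derivation are invented for illustration; this is not Roark's parser:

```python
import math

rules = {  # P(rhs | lhs) for a toy PCFG
    ("S",  ("NP", "VP")): 1.0,
    ("NP", ("dogs",)):    0.4,
    ("NP", ("cats",)):    0.6,
    ("VP", ("bark",)):    0.7,
    ("VP", ("meow",)):    0.3,
}

def derivation_logprob(derivation: list[tuple[str, tuple[str, ...]]]) -> float:
    """Sum the log-probabilities of the rules used, in expansion order."""
    return sum(math.log(rules[step]) for step in derivation)

# Leftmost top-down derivation of "dogs bark".
deriv = [("S", ("NP", "VP")), ("NP", ("dogs",)), ("VP", ("bark",))]
print(math.exp(derivation_logprob(deriv)))  # 1.0 * 0.4 * 0.7 = 0.28
```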

Handling Massive N-Gram Datasets Efficiently

June 25, 2018

84% Match
Giulio Ermanno Pibiri, Rossano Venturini
Information Retrieval
Databases

This paper deals with the two fundamental problems concerning the handling of large n-gram language models: indexing, that is compressing the n-gram strings and associated satellite data without compromising their retrieval speed; and estimation, that is computing the probability distribution of the strings from a large textual source. Regarding the problem of indexing, we describe compressed, exact and lossless data structures that achieve, at the same time, high space reduc...
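
A minimal sketch of the estimation problem only, assuming plain hash-table counts and maximum-likelihood probabilities; the paper's contribution is precisely to replace such naive structures with compressed, cache-friendly indexes and scalable estimation:

```python
from collections import Counter

# Sketch: count n-grams in one pass, then derive conditional probabilities
# P(w | context) from n-gram and (n-1)-gram counts.

def ngram_counts(tokens: list[str], n: int) -> Counter:
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def mle_prob(ngram: tuple[str, ...], counts: Counter, ctx_counts: Counter) -> float:
    """Maximum-likelihood estimate: count(ngram) / count(context)."""
    ctx = ngram[:-1]
    return counts[ngram] / ctx_counts[ctx] if ctx_counts[ctx] else 0.0

tokens = "the cat sat on the mat the cat ran".split()
bigrams, unigrams = ngram_counts(tokens, 2), ngram_counts(tokens, 1)
print(mle_prob(("the", "cat"), bigrams, unigrams))  # count(the cat)/count(the) = 2/3
```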
