ID: cmp-lg/9708012

Encoding Frequency Information in Lexicalized Grammars

August 19, 1997

Similar papers

A Lexicalized Tree Adjoining Grammar for English

September 18, 1998

84% Match
Research Group XTAG
Computation and Language

This document describes a sizable grammar of English written in the TAG formalism and implemented for use with the XTAG system. This report and the grammar described herein supersede the TAG grammar described in an earlier 1995 XTAG technical report. The English grammar described in this report is based on the TAG formalism, which has been extended to include lexicalization and unification-based feature structures. The range of syntactic phenomena that can be handled is large...

Tree-gram Parsing: Lexical Dependencies and Structural Relations

November 6, 2000

84% Match
Khalil Sima'an
Computation and Language
Artificial Intelligence
Human-Computer Interaction

This paper explores the kinds of probabilistic relations that are important in syntactic disambiguation. It proposes that two widely used kinds of relations, lexical dependencies and structural relations, have complementary disambiguation capabilities. It presents a new model based on structural relations, the Tree-gram model, and reports experiments showing that structural relations should benefit from enrichment by lexical dependencies.

Supertagging: Introduction, learning, and application

December 19, 2014

84% Match
Taraka Rama K
Computation and Language

Supertagging is an approach originally developed by Bangalore and Joshi (1999) to improve parsing efficiency. In the beginning, researchers used small training datasets and somewhat naïve smoothing techniques to learn the probability distributions of supertags. Since its inception, the applicability of supertags has been explored for the TAG (tree-adjoining grammar) formalism as well as for related yet different formalisms such as CCG. This article will try to summarize...
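
Since the abstract's core idea is scoring supertag sequences with learned probability distributions, here is a minimal sketch of that style of disambiguation, assuming a toy lexicon and invented emission and transition probabilities (not Bangalore and Joshi's actual model): each word's candidate supertags are rescored Viterbi-style with bigram tag transitions.

```python
from collections import defaultdict

# Hypothetical toy data: per-word candidate supertags with invented probabilities.
LEXICON = {
    "John":  {"NP": 0.9, "N": 0.1},
    "loves": {"TV": 0.8, "N": 0.2},   # TV = a transitive-verb supertag
    "Mary":  {"NP": 0.9, "N": 0.1},
}
TRANS = defaultdict(lambda: 1e-4, {   # P(tag_i | tag_{i-1}), with a smoothed floor
    ("<s>", "NP"): 0.6, ("NP", "TV"): 0.7, ("TV", "NP"): 0.8,
})

def viterbi_supertag(words):
    """Return the highest-probability supertag sequence for `words`."""
    # Each chart entry maps a tag to (best score so far, best path ending in it).
    chart = [{"<s>": (1.0, [])}]
    for w in words:
        step = {}
        for tag, p_emit in LEXICON[w].items():
            # Pick the previous tag maximizing previous-score * transition.
            prev_tag, (p_prev, path) = max(
                chart[-1].items(), key=lambda kv: kv[1][0] * TRANS[(kv[0], tag)])
            step[tag] = (p_prev * TRANS[(prev_tag, tag)] * p_emit, path + [tag])
        chart.append(step)
    return max(chart[-1].values())[1]

print(viterbi_supertag(["John", "loves", "Mary"]))  # -> ['NP', 'TV', 'NP']
```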

Unsupervised Language Acquisition

November 12, 1996

84% Match
Carl de Marcken (MIT)
Computation and Language

This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that...

Modeling the Complexity and Descriptive Adequacy of Construction Grammars

April 11, 2019

84% Match
Jonathan Dunn
Computation and Language

This paper uses the Minimum Description Length paradigm to model the complexity of CxGs (operationalized as the encoding size of a grammar) alongside their descriptive adequacy (operationalized as the encoding size of a corpus given a grammar). These two quantities are combined to measure the quality of potential CxGs against unannotated corpora, supporting discovery-device CxGs for English, Spanish, French, German, and Italian. The results show (i) that these grammars provide...
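
The two-part MDL score the abstract describes, grammar encoding size plus corpus encoding size given the grammar, can be written out directly. The sketch below is a generic illustration with placeholder bit-cost functions and a toy "grammar", not the paper's actual CxG encoding.

```python
import math

def grammar_cost_bits(grammar):
    """L(G): a placeholder encoding cost, here ~8 bits per character of each construction."""
    return sum(len(construction) * 8 for construction in grammar)

def data_cost_bits(corpus_tokens, probs):
    """L(D | G): negative log2-probability of the corpus under the grammar's
    unigram construction probabilities (a stand-in for the paper's model)."""
    return -sum(math.log2(probs[t]) for t in corpus_tokens)

def mdl_score(grammar, corpus_tokens, probs):
    # Lower is better: a grammar wins by being both small and descriptive.
    return grammar_cost_bits(grammar) + data_cost_bits(corpus_tokens, probs)

# Toy example: a corpus analyzed as 100 construction tokens from a 2-item grammar.
corpus = ["the-cat", "sat"] * 50
grammar = {"the-cat", "sat"}
probs = {"the-cat": 0.5, "sat": 0.5}
print(f"{mdl_score(grammar, corpus, probs):.1f} bits")  # L(G) + L(D|G) = 80 + 100
```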

Robust Probabilistic Predictive Syntactic Processing

May 9, 2001

84% Match
Brian Roark
Computation and Language

This thesis presents a broad-coverage probabilistic top-down parser, and its application to the problem of language modeling for speech recognition. The parser builds fully connected derivations incrementally, in a single pass from left to right across the string. We argue that the parsing approach that we have adopted is well-motivated from a psycholinguistic perspective, as a model that captures probabilistic dependencies between lexical items, as part of the process of building...

Exploring the Statistical Derivation of Transformational Rule Sequences for Part-of-Speech Tagging

June 3, 1994

84% Match
Lance A. Ramshaw (Univ. of Pennsylvania and Bowdoin College), Mitchell P. Marcus (Univ. of Pennsylvania)
Computation and Language

Eric Brill has recently proposed a simple and powerful corpus-based language modeling approach that can be applied to various tasks including part-of-speech tagging and building phrase structure trees. The method learns a series of symbolic transformational rules, which can then be applied in sequence to a test corpus to produce predictions. The learning process only requires counting matches for a given set of rule templates, allowing the method to survey a very large space...
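
As a concrete picture of the learning process described here, the following is a toy version of transformation-based learning for tagging, assuming a single invented rule template ("retag A to B when the previous tag is C"); Brill's system uses many templates and a more efficient match-counting scheme.

```python
def tbl_learn(gold, current, max_rules=5):
    """Greedily learn 'retag A -> B when previous tag is C' rules
    (a toy version of transformation-based learning, not Brill's system)."""
    rules = []
    for _ in range(max_rules):
        # Score each candidate rule by how many errors it would fix...
        scores = {}
        for i in range(1, len(gold)):
            if current[i] != gold[i]:
                rule = (current[i], gold[i], current[i - 1])  # fixes this site
                scores[rule] = scores.get(rule, 0) + 1
        # ...minus how many correct tags it would break where it also matches.
        for i in range(1, len(gold)):
            if current[i] == gold[i]:
                for (a, b, c) in list(scores):
                    if current[i] == a and current[i - 1] == c:
                        scores[(a, b, c)] -= 1
        if not scores or max(scores.values()) <= 0:
            break  # no rule yields a net improvement
        best = max(scores, key=scores.get)
        rules.append(best)
        a, b, c = best
        # Apply the rule against the pre-application tags (simultaneous update).
        current = [b if (i > 0 and t == a and current[i - 1] == c) else t
                   for i, t in enumerate(current)]
    return rules, current

gold    = ["DT", "NN", "VBZ"]
current = ["DT", "NN", "NN"]   # e.g., a unigram tagger's first guess
print(tbl_learn(gold, current))  # -> ([('NN', 'VBZ', 'NN')], ['DT', 'NN', 'VBZ'])
```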

Part of Speech Based Term Weighting for Information Retrieval

April 5, 2017

84% Match
Christina Lioma, Roi Blanco
Information Retrieval

Automatic language processing tools typically assign so-called weights to terms, corresponding to each term's contribution to information content. Traditionally, term weights are computed from lexical statistics, e.g., term frequencies. We propose a new type of term weight that is computed from part of speech (POS) n-gram statistics. The proposed POS-based term weight represents how informative a term is in general, based on the POS contexts in which it generally occurs in language...
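
The abstract does not give the weighting formula, so the sketch below is only one plausible reading, assuming an invented tagged corpus and an inverse-frequency-style score: a term inherits a weight from how rare the POS n-grams it occurs inside are.

```python
import math
from collections import Counter

# Hypothetical POS-tagged background corpus: (word, POS) pairs per sentence.
TAGGED = [[("the", "DT"), ("model", "NN"), ("ranks", "VBZ"), ("terms", "NNS")],
          [("a", "DT"), ("user", "NN"), ("types", "VBZ"), ("a", "DT"), ("query", "NN")]]

def pos_ngram_counts(corpus, n=3):
    """Count POS n-grams across the corpus."""
    counts = Counter()
    for sent in corpus:
        tags = [pos for _, pos in sent]
        for i in range(len(tags) - n + 1):
            counts[tuple(tags[i:i + n])] += 1
    return counts

def term_weight(term, corpus, n=3):
    """Average -log relative frequency of the POS n-grams the term occurs in
    (an illustrative weight, not the paper's exact formula)."""
    counts = pos_ngram_counts(corpus, n)
    total = sum(counts.values())
    scores = []
    for sent in corpus:
        tags = [pos for _, pos in sent]
        for i in range(len(sent) - n + 1):
            if term in (w for w, _ in sent[i:i + n]):
                scores.append(-math.log(counts[tuple(tags[i:i + n])] / total))
    return sum(scores) / len(scores) if scores else 0.0

print(round(term_weight("query", TAGGED), 3))  # weight from its POS trigram context
```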

Building Probabilistic Models for Natural Language

June 11, 1996

84% Match
Stanley F. Chen (Harvard University)
Computation and Language

In this thesis, we investigate three problems involving the probabilistic modeling of language: smoothing n-gram models, statistical grammar induction, and bilingual sentence alignment. These three problems employ models at three different levels of language; they involve word-based, constituent-based, and sentence-based models, respectively. We describe techniques for improving the modeling of language at each of these levels, and surpass the performance of existing algorithms...
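
To ground the first of the three problems, n-gram smoothing, here is one standard technique from that literature, Jelinek-Mercer linear interpolation, shown purely as an illustration rather than as the thesis's proposed method; the corpus and mixing weight are toy values.

```python
from collections import Counter

def train(tokens):
    """Collect bigram and unigram counts from a token list."""
    return Counter(zip(tokens, tokens[1:])), Counter(tokens)

def interp_prob(w_prev, w, bigrams, unigrams, lam=0.7):
    """Jelinek-Mercer (linear interpolation) smoothing: mix the bigram MLE
    with the unigram MLE so unseen bigrams never get zero probability."""
    total = sum(unigrams.values())
    p_uni = unigrams[w] / total
    p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
    return lam * p_bi + (1 - lam) * p_uni

tokens = "the cat sat on the mat".split()
bi, uni = train(tokens)
print(interp_prob("the", "cat", bi, uni))   # seen bigram: 0.4
print(interp_prob("the", "sat", bi, uni))   # unseen bigram, still > 0: 0.05
```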

Feature Selective Likelihood Ratio Estimator for Low- and Zero-frequency N-grams

November 5, 2021

84% Match
Masato Kikuchi, Mitsuo Yoshida, ..., Tadachika Ozono
Computation and Language

In natural language processing (NLP), the likelihood ratios (LRs) of N-grams are often estimated from frequency information. However, a corpus contains only a fraction of the possible N-grams, and most of them occur infrequently. Hence, we desire an LR estimator for low- and zero-frequency N-grams. One way to achieve this is to decompose the N-grams into discrete values, such as letters and words, and take the product of the LRs for the values. However, because this method...
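
The decomposition baseline the abstract mentions can be made concrete: approximate an N-gram's LR by the product of per-word LRs between two corpora. The sketch below uses invented corpora and add-one smoothing, and implements the naive estimator being improved upon, not the paper's proposed one.

```python
from collections import Counter

def word_lr(w, counts_a, counts_b):
    """Per-word likelihood ratio P(w | A) / P(w | B), with add-one smoothing
    so zero-frequency words still get a finite ratio."""
    vocab = set(counts_a) | set(counts_b)
    p_a = (counts_a[w] + 1) / (sum(counts_a.values()) + len(vocab))
    p_b = (counts_b[w] + 1) / (sum(counts_b.values()) + len(vocab))
    return p_a / p_b

def ngram_lr(ngram, counts_a, counts_b):
    """Naive decomposition: LR of the N-gram ~ product of per-word LRs.
    Usable even when the whole N-gram never occurs, at the cost of
    ignoring dependencies between the words."""
    lr = 1.0
    for w in ngram:
        lr *= word_lr(w, counts_a, counts_b)
    return lr

corpus_a = Counter("the market rallied the market surged".split())
corpus_b = Counter("the cat slept on the mat".split())
print(ngram_lr(("market", "surged"), corpus_a, corpus_b))  # > 1: an A-like bigram
```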
