Similar papers
December 16, 2019
Natural language exhibits statistical dependencies at a wide range of scales. For instance, the mutual information between words in natural language decays as a power law of the temporal lag between them. However, many statistical learning models applied to language impose a sampling scale while extracting statistical structure. For example, Word2Vec constructs a vector embedding that maximizes the prediction between a target word and the context words that appear nearby...
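As a rough, self-contained illustration of the measurement described above, the sketch below estimates the mutual information between words separated by a fixed lag in a token sequence. The toy corpus and the plug-in probability estimates are my own simplifications, not the paper's pipeline.

```python
from collections import Counter
from math import log2


def mutual_information_at_lag(tokens, lag):
    """Plug-in estimate of I(w_t; w_{t+lag}) from empirical co-occurrence counts."""
    pairs = list(zip(tokens, tokens[lag:]))
    joint = Counter(pairs)
    left = Counter(t for t, _ in pairs)
    right = Counter(t for _, t in pairs)
    n = len(pairs)
    mi = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n
        mi += p_xy * log2(p_xy / ((left[x] / n) * (right[y] / n)))
    return mi


# Toy usage: on a real corpus one would plot the estimate against lag on
# log-log axes to look for the power-law decay mentioned above.
tokens = "the cat sat on the mat and the cat slept on the mat".split()
for lag in (1, 2, 4):
    print(lag, mutual_information_at_lag(tokens, lag))
```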
December 7, 2014
Attributes of words and relations between two words are central to numerous tasks in Artificial Intelligence such as knowledge representation, similarity measurement, and analogy detection. Often, when two words share one or more attributes, they are connected by some semantic relation. Conversely, if there are numerous semantic relations between two words, we can expect some of the attributes of one word to be inherited by the other. Motivated by thi...
February 6, 2020
Machine learning is a means to uncover deep patterns from rich sources of data. Here, we find that machine learning can recover the conceptual organization of the human mind when applied to the natural language use of millions of people. Utilizing text from billions of webpages, we recover most of the concepts contained in English, Dutch, and Japanese, as represented in large scale Word Association networks. Our results justify machine learning as a means to probe the human m...
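A minimal sketch, assuming hypothetical toy vectors in place of embeddings trained on billions of webpages, of how word-association-like neighbours can be read off an embedding space via cosine similarity:

```python
import numpy as np

# Hypothetical embeddings; in the study above these would come from models
# trained on web-scale text.
embeddings = {
    "doctor": np.array([0.9, 0.1, 0.0]),
    "nurse": np.array([0.8, 0.2, 0.1]),
    "banana": np.array([0.0, 0.1, 0.9]),
    "apple": np.array([0.1, 0.0, 0.8]),
}


def nearest(word, k=2):
    """Rank the other words by cosine similarity to `word`."""
    v = embeddings[word]
    scores = {
        other: float(np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)))
        for other, u in embeddings.items()
        if other != word
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]


print(nearest("doctor"))  # expected to rank "nurse" first
```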
March 31, 2012
WordNet proved that it is possible to construct a large-scale electronic lexical database on the principles of lexical semantics. It has been accepted and used extensively by computational linguists ever since it was released. Inspired by WordNet's success, we propose as an alternative a similar resource, based on the 1987 Penguin edition of Roget's Thesaurus of English Words and Phrases. Peter Mark Roget published his first Thesaurus over 150 years ago. Countless writers, ...
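For readers unfamiliar with the kind of resource being discussed, here is a small sketch of querying WordNet through NLTK; the word "bank" and the NLTK interface are only illustrative, and the Roget-based resource proposed above has its own structure.

```python
# Requires a one-time `nltk.download('wordnet')`.
from nltk.corpus import wordnet as wn

for synset in wn.synsets("bank")[:3]:
    print(synset.name(), "-", synset.definition())
    print("  lemmas:", synset.lemma_names())
```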
February 18, 2013
The automatic disambiguation of word senses (i.e., identifying which meaning of a word with multiple meanings is used in a given context) is essential for applications such as machine translation and information retrieval, and represents a key step in developing the so-called Semantic Web. Humans disambiguate words in a straightforward fashion, but computers do not. In this paper we address the problem of Word Sense Disambiguation (WSD)...
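As an illustration of the task itself (not of this paper's particular method), the classic Lesk algorithm shipped with NLTK picks the sense whose gloss overlaps most with the surrounding words:

```python
# Requires the WordNet data: `nltk.download('wordnet')`.
from nltk.wsd import lesk

sentence = "I went to the bank to deposit my money".split()
sense = lesk(sentence, "bank")
print(sense, "-", sense.definition() if sense else "no sense found")
```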
October 24, 2018
Language is hierarchically organized: words are built into phrases, sentences, and paragraphs to represent complex ideas. Here we ask whether the organization of language in written text displays the fractal hierarchical architecture common in systems optimized for efficient information transmission. We test the hypothesis that the expositional structure of scientific research articles displays Rentian scaling, and that the exponent of the scaling law changes as the article's...
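A rough sketch of how a topological Rent exponent can be estimated by recursive graph bisection; the partitioning heuristic, the synthetic graph, and the boundary-edge definition are my own choices and only approximate the kind of analysis described above.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection


def rent_points(G, levels=4):
    """Recursively bisect G; record (block size, boundary edge count) per block."""
    points = []
    blocks = [set(G.nodes())]
    for _ in range(levels):
        next_blocks = []
        for block in blocks:
            if len(block) < 4:
                continue
            for part in kernighan_lin_bisection(G.subgraph(block), seed=0):
                n = len(part)
                e = sum(1 for _ in nx.edge_boundary(G, part))
                if n > 1 and e > 0:
                    points.append((n, e))
                next_blocks.append(set(part))
        blocks = next_blocks
    return points


# Synthetic stand-in for a network built from an article's expositional units.
G = nx.powerlaw_cluster_graph(256, 3, 0.1, seed=1)
pts = np.array(rent_points(G))
slope, _ = np.polyfit(np.log(pts[:, 0]), np.log(pts[:, 1]), 1)
print("estimated Rent exponent p ≈", round(slope, 2))
```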
June 8, 2009
We study the global topology of the syntactic and semantic distributional similarity networks for English through the technique of spectral analysis. We observe that while the syntactic network has a hierarchical structure with strong communities and their mixtures, the semantic network has several tightly knit communities along with a large core without any such well-defined community structure.
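A minimal sketch of this kind of spectral analysis, with a toy planted-partition graph standing in for the distributional similarity networks: a gap after the leading adjacency eigenvalues is the usual signature of strong communities, while a gapless bulk suggests a large undifferentiated core.

```python
import numpy as np
import networkx as nx

# Toy stand-in with planted communities.
G = nx.planted_partition_graph(4, 25, p_in=0.3, p_out=0.01, seed=0)
A = nx.to_numpy_array(G)
eigvals, eigvecs = np.linalg.eigh(A)

# Inspect the top of the adjacency spectrum for a gap.
print("largest eigenvalues:", np.round(eigvals[-6:][::-1], 2))
```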
July 29, 2006
Deviations from the average can provide valuable insights about the organization of natural systems. This article extends this important principle to the more systematic identification and analysis of singular local connectivity patterns in complex networks. Four measurements quantifying different and complementary features of the connectivity around each node are calculated, and multivariate statistical methods are then applied to identify outliers. The potential of ...
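A simplified sketch of the approach, assuming four common connectivity measurements and a plain standardized-distance criterion in place of the paper's multivariate methods:

```python
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(300, 2, seed=42)

# Per-node connectivity measurements.
degree = dict(G.degree())
clustering = nx.clustering(G)
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)

features = np.array(
    [[degree[n], clustering[n], betweenness[n], closeness[n]] for n in G]
)

# Standardize each measurement and score nodes by distance from the average.
z = (features - features.mean(axis=0)) / features.std(axis=0)
score = np.linalg.norm(z, axis=1)

outliers = [n for n, s in zip(G, score) if s > score.mean() + 3 * score.std()]
print("singular nodes:", outliers)
```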
February 28, 2008
Natural languages are described in this paper in terms of networks of synonyms: a word is identified with a node, and synonyms are connected by undirected links. Our statistical analysis of the network of synonyms in the Polish language showed that it is scale-free, similar to what is known for English. The statistical properties of the networks are also similar. Thus, the statistical aspects of the networks are good candidates for culture-independent elements of human language. We hy...
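A small sketch of the construction described above, with a placeholder synonym list standing in for a full thesaurus; on real data it is the straight log-log tail of the degree distribution that supports the scale-free claim.

```python
import numpy as np
import networkx as nx

# Placeholder synonym pairs; in the study these come from a thesaurus.
synonym_pairs = [
    ("big", "large"), ("big", "huge"), ("huge", "enormous"),
    ("small", "little"), ("small", "tiny"), ("happy", "glad"),
]
G = nx.Graph(synonym_pairs)

# Complementary cumulative degree distribution; a straight line on log-log
# axes with slope -(gamma - 1) is consistent with a scale-free network.
degrees = np.array(sorted((d for _, d in G.degree()), reverse=True))
ccdf = np.arange(1, len(degrees) + 1) / len(degrees)
slope, _ = np.polyfit(np.log(degrees), np.log(ccdf), 1)

# The estimate is only meaningful on a full synonym network, not this toy list.
print("estimated exponent gamma ≈", round(1 - slope, 2))
```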
August 14, 2019
Knowledge is a network of interconnected concepts. Yet, precisely how the topological structure of knowledge constrains its acquisition remains unknown, hampering the development of learning enhancement strategies. Here we study the topological structure of semantic networks reflecting mathematical concepts and their relations in college-level linear algebra texts. We hypothesize that these networks will exhibit structural order, reflecting the logical sequence of topics that...
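A minimal sketch, with hypothetical concept-relation edges and my own choice of metrics, of how structural order in such a concept network might be probed by comparing it with a size-matched random graph:

```python
import networkx as nx

# Hypothetical concept-relation edges standing in for those extracted
# from a linear algebra text.
edges = [
    ("vector", "vector space"), ("vector space", "basis"),
    ("basis", "dimension"), ("matrix", "determinant"),
    ("matrix", "eigenvalue"), ("eigenvalue", "eigenvector"),
    ("vector", "matrix"), ("basis", "eigenvector"),
]
G = nx.Graph(edges)

# Crude null model: random graph with the same number of nodes and edges.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
R_cc = R.subgraph(max(nx.connected_components(R), key=len))

print("clustering (concept net vs. random):",
      round(nx.average_clustering(G), 3), round(nx.average_clustering(R), 3))
print("avg. shortest path (concept net vs. random, largest component):",
      round(nx.average_shortest_path_length(G), 3),
      round(nx.average_shortest_path_length(R_cc), 3))
```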