Linguistic categories include lexical categories (parts of speech, such as noun and preposition), syntactic categories (a similar concept that can also cover phrasal categories), and grammatical categories (grammatical features such as tense and gender).
The definition of linguistic categories is a major concern of linguistic theory, and thus the definition and naming of categories vary across theoretical frameworks and across the grammatical traditions of different languages. The operationalization of linguistic categories in lexicography, computational linguistics, natural language processing, corpus linguistics, and terminology management typically requires resource-, problem- or application-specific definitions of linguistic categories. In cognitive linguistics it has been argued that linguistic categories have a prototype structure like that of the categories of common words in a language.[1]
To facilitate interoperability between lexical resources, linguistic annotations and annotation tools, and to handle linguistic categories systematically across different theoretical frameworks, a number of inventories of linguistic categories have been developed and are in use; examples are given below. The practical objective of such inventories is to enable quantitative evaluation (for language-specific inventories), to train NLP tools, or to facilitate cross-linguistic evaluation, querying or annotation of language data. At a theoretical level, the existence of universal categories in human language has been postulated, e.g., in universal grammar, but also heavily criticized.
Schools commonly teach that there are nine parts of speech in English: noun, verb, article, adjective, preposition, pronoun, adverb, conjunction, and interjection. However, there are clearly many more categories and sub-categories. For nouns, the plural, possessive, and singular forms can be distinguished. In many languages, words are also marked for their case (role as subject, object, etc.), grammatical gender, and so on, while verbs are marked for tense, aspect, and other features. In some tagging systems, different inflections of the same root word receive different parts of speech, resulting in a large number of tags: for example, NN for singular common nouns, NNS for plural common nouns, and NP for singular proper nouns (see the POS tags used in the Brown Corpus). Other tagging systems use a smaller number of tags and ignore fine differences or model them as features somewhat independent of part of speech.[2]
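The tagged version of the Brown Corpus can be inspected directly, for instance with the NLTK library. The following is a minimal sketch, assuming NLTK is installed and its copy of the corpus has been downloaded:

```python
# A minimal sketch: inspecting Brown Corpus POS tags with NLTK
# (assumes: pip install nltk, plus the one-off corpus download below).
import nltk

nltk.download("brown")  # fetch the tagged Brown Corpus once
from nltk.corpus import brown

# Print the first few (word, tag) pairs, e.g. ('The', 'AT'),
# ('Fulton', 'NP-TL'), ('County', 'NN-TL'), ...
for word, tag in brown.tagged_words()[:10]:
    print(word, tag)
```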
In part-of-speech tagging by computer, it is typical to distinguish from 50 to 150 separate parts of speech for English. POS tagging work has been done in a variety of languages, and the set of POS tags used varies greatly with language. Tags are usually designed to include overt morphological distinctions, although this leads to inconsistencies such as case-marking for pronouns but not nouns in English, and to much larger cross-language differences. The tag sets for heavily inflected languages such as Greek and Latin can be very large; tagging words in agglutinative languages such as the Inuit languages may be virtually impossible. Work on stochastic methods for tagging Koine Greek (DeRose 1990) used over 1,000 parts of speech and found that about as many words were ambiguous in that language as in English. For morphologically rich languages, a morphosyntactic descriptor is commonly expressed using very short mnemonics, such as Ncmsan for Category=Noun, Type=common, Gender=masculine, Number=singular, Case=accusative, Animate=no.
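Such positional descriptors are straightforward to decode mechanically. The following is a hedged sketch in the style of MULTEXT-East noun tags; the attribute order and value codes are illustrative, not a complete specification:

```python
# A hedged sketch: decoding a positional morphosyntactic descriptor
# such as "Ncmsan". Attribute order and value codes are illustrative.
NOUN_POSITIONS = ["Type", "Gender", "Number", "Case", "Animate"]
NOUN_VALUES = {
    "Type":    {"c": "common", "p": "proper"},
    "Gender":  {"m": "masculine", "f": "feminine", "n": "neuter"},
    "Number":  {"s": "singular", "p": "plural"},
    "Case":    {"n": "nominative", "a": "accusative", "d": "dative"},
    "Animate": {"y": "yes", "n": "no"},
}

def decode_msd(tag: str) -> dict:
    """Expand a positional tag like 'Ncmsan' into feature-value pairs."""
    if not tag.startswith("N"):
        raise ValueError("this sketch only handles noun descriptors")
    features = {"Category": "Noun"}
    for attribute, code in zip(NOUN_POSITIONS, tag[1:]):
        features[attribute] = NOUN_VALUES[attribute].get(code, code)
    return features

print(decode_msd("Ncmsan"))
# {'Category': 'Noun', 'Type': 'common', 'Gender': 'masculine',
#  'Number': 'singular', 'Case': 'accusative', 'Animate': 'no'}
```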
The most popular "tag set" for POS tagging for American English is probably the Penn tag set, developed in the Penn Treebank project.
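NLTK's default English tagger, for instance, emits Penn Treebank-style tags. A minimal sketch follows (resource names may differ slightly across NLTK versions):

```python
# A minimal sketch: Penn Treebank-style POS tagging with NLTK's default
# English tagger (assumes: pip install nltk; model/resource names may
# vary across NLTK versions).
import nltk

nltk.download("punkt")                       # tokenizer model
nltk.download("averaged_perceptron_tagger")  # Penn-style tagger model

tokens = nltk.word_tokenize("The quick brown fox jumps over the lazy dog.")
print(nltk.pos_tag(tokens))
# Expected shape: [('The', 'DT'), ('quick', 'JJ'), ..., ('dog', 'NN'), ('.', '.')]
```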
For Western European languages, cross-linguistically applicable annotation schemes for parts of speech, morphosyntax and syntax have been developed within the EAGLES Guidelines. The "Expert Advisory Group on Language Engineering Standards" (EAGLES) was an initiative of the European Commission that ran within the DG XIII Linguistic Research and Engineering programme from 1994 to 1998, coordinated by Consorzio Pisa Ricerche, Pisa, Italy. The EAGLES guidelines provide guidance for markup to be used with text corpora, particularly for identifying features relevant to computational linguistics and lexicography. Numerous companies, research centres, universities and professional bodies across the European Union collaborated to produce the EAGLES Guidelines, which set out recommendations for de facto standards and rules of best practice.[3]
The EAGLES guidelines have also inspired subsequent work for other regions, e.g., Eastern Europe.[4]
A generation later, a similar effort was initiated by the research community under the umbrella of Universal Dependencies. Petrov et al.[5][6] proposed a "universal", but highly reductionist, tag set with 12 categories (for example, no subtypes of nouns, verbs or punctuation, and no distinction of "to" as an infinitive marker vs. a preposition, hardly a "universal" coincidence). Subsequently, this was complemented with cross-lingual specifications for dependency syntax (Stanford Dependencies)[7] and morphosyntax (the Interset interlingua,[8] partially building on the Multext-East/EAGLES tradition) in the context of Universal Dependencies (UD), an international cooperative project to create treebanks of the world's languages with cross-linguistically applicable ("universal") annotations for parts of speech, dependency syntax, and (optionally) morphosyntactic (morphological) features. Core applications are automated text processing in the field of natural language processing (NLP) and research into natural language syntax and grammar, especially within linguistic typology. The annotation scheme has its roots in three related projects: the Stanford Dependencies, the Google universal part-of-speech tags, and the Interset interlingua for morphosyntactic tag sets. The UD annotation scheme represents sentences as dependency trees rather than phrase structure trees. As of February 2019, just over 100 treebanks of more than 70 languages were available in the UD inventory.[9] The project's primary aim is to achieve cross-linguistic consistency of annotation. However, language-specific extensions are permitted for morphological features (individual languages or resources can introduce additional features). In a more restricted form, dependency relations can be extended with a secondary label that accompanies the UD label, e.g., aux:pass for an auxiliary (UD aux) used to mark passive voice.[10]
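UD treebanks are distributed in the CoNLL-U format, one token per line with tab-separated columns. The sketch below parses a single annotated sentence using the third-party conllu package; the sentence and its annotation are illustrative:

```python
# A minimal sketch: reading one UD-annotated sentence in CoNLL-U format
# with the third-party "conllu" package (pip install conllu).
import conllu

rows = [
    # ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC
    ["1", "The",   "the", "DET",   "DT",  "Definite=Def|PronType=Art",           "2", "det",        "_", "_"],
    ["2", "bug",   "bug", "NOUN",  "NN",  "Number=Sing",                         "4", "nsubj:pass", "_", "_"],
    ["3", "was",   "be",  "AUX",   "VBD", "Mood=Ind|Tense=Past|VerbForm=Fin",    "4", "aux:pass",   "_", "_"],
    ["4", "fixed", "fix", "VERB",  "VBN", "Tense=Past|VerbForm=Part|Voice=Pass", "0", "root",       "_", "_"],
    ["5", ".",     ".",   "PUNCT", ".",   "_",                                   "4", "punct",      "_", "_"],
]
data = "# text = The bug was fixed.\n" + "\n".join("\t".join(r) for r in rows) + "\n"

# Each token carries a universal POS tag (UPOS), morphological features,
# and a dependency relation; aux:pass is a language-specific subtype of aux.
sentence = conllu.parse(data)[0]
for token in sentence:
    print(token["form"], token["upos"], token["deprel"])
```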
The Universal Dependencies have inspired similar efforts for the areas of inflectional morphology,[11] frame semantics[12] and coreference.[13] For phrase structure syntax, a comparable effort does not seem to exist, but the specifications of the Penn Treebank have been applied to (and extended for) a broad range of languages,[14] e.g., Icelandic,[15] Old English,[16] Middle English,[17] Middle Low German,[18] Early Modern High German,[19] Yiddish,[20] Portuguese,[21] Japanese,[22] Arabic[23] and Chinese.[24]
In linguistics, an interlinear gloss is a gloss (series of brief explanations, such as definitions or pronunciations) placed between lines (inter- + linear), such as between a line of original text and its translation into another language. When glossed, each line of the original text acquires one or more lines of transcription known as an interlinear text or interlinear glossed text (IGT)—interlinear for short. Such glosses help the reader follow the relationship between the source text and its translation, and the structure of the original language. There is no standard inventory for glosses, but common labels are collected in the Leipzig Glossing Rules.[25] Wikipedia also provides a List of glossing abbreviations that draws on this and other sources.
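As an example, the two-line interlinear below follows the conventions of the Leipzig Glossing Rules, with each morpheme of the German source aligned with a gloss label; the small alignment helper is illustrative:

```python
# A minimal sketch: printing a Leipzig-style interlinear gloss with the
# source and gloss lines aligned column by column (helper is illustrative).
source = ["unser-n", "Väter-n"]              # German, morpheme breaks as "-"
gloss  = ["our-DAT.PL", "father.PL-DAT.PL"]  # Leipzig Glossing Rules labels
translation = "'to our fathers'"

widths = [max(len(s), len(g)) for s, g in zip(source, gloss)]
print("  ".join(s.ljust(w) for s, w in zip(source, widths)))
print("  ".join(g.ljust(w) for g, w in zip(gloss, widths)))
print(translation)
```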
GOLD ("General Ontology for Linguistic Description") is an ontology for descriptive linguistics. It gives a formalized account of the most basic categories and relations used in the scientific description of human language, e.g., as a formalization of interlinear glosses. GOLD was first introduced by Farrar and Langendoen (2003).[26] Originally, it was envisioned as a solution to the problem of resolving disparate markup schemes for linguistic data, in particular data from endangered languages. However, GOLD is much more general and can be applied to all languages. In this function, GOLD overlaps with the ISO 12620 Data Category Registry (ISOcat); it is, however, more stringently structured.
GOLD was maintained by the LINGUIST List and others from 2007 to 2010.[27] The RELISH project created a mirror of the 2010 edition of GOLD as a Data Category Selection within ISOcat. As of 2018, GOLD data remains an important terminology hub in the context of the Linguistic Linked Open Data cloud, but as it is no longer actively maintained, its function is increasingly taken over by OLiA (for linguistic annotation, building on GOLD and ISOcat) and lexinfo.net (for dictionary metadata, building on ISOcat).
ISO 12620 is a standard from ISO/TC 37 that defines a Data Category Registry, a registry for linguistic terms used in various fields of translation, computational linguistics and natural language processing, and that defines mappings both between different terms and between the same terms used in different systems.[28][29][30]
An earlier implementation of this standard, ISOcat, provides persistent identifiers and URIs for linguistic categories, including the inventory of the GOLD ontology (see above). The goal of the registry is that new systems can reuse existing terminology, or at least be easily mapped to existing terminology, to aid interoperability.[31] The standard is used by other standards such as the Lexical Markup Framework (ISO 24613:2008), and a number of terminologies have been added to the registry, including the EAGLES guidelines, the National Corpus of Polish, and the TermBase eXchange format from the Localization Industry Standards Association.
However, the current edition, ISO 12620:2019,[32] no longer provides a registry of terms for language technology and terminology; it is now restricted to terminology resources, hence the revised title "Management of terminology resources — Data category specifications". Accordingly, ISOcat is no longer actively developed.[33] As of May 2020, its successor systems, the CLARIN Concept Registry[34] and DatCatInfo,[35] were only emerging.
For linguistic categories relevant to lexical resources, the lexinfo vocabulary represents an established community standard,[36] in particular in connection with the OntoLex vocabulary and machine-readable dictionaries in the context of Linguistic Linked Open Data technologies. Just as the OntoLex vocabulary builds on the Lexical Markup Framework (LMF), lexinfo builds on (the LMF section of) ISOcat.[37] Unlike ISOcat, however, lexinfo is actively maintained and is currently (as of May 2020) being extended in a community effort.[38]
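The sketch below builds a minimal OntoLex lexical entry whose part of speech is stated with lexinfo, using the rdflib library; the entry URI is hypothetical, while the ontolex and lexinfo namespaces are the published ones:

```python
# A hedged sketch: a minimal OntoLex lexical entry typed with the
# lexinfo vocabulary, built with rdflib (pip install rdflib).
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
LEXINFO = Namespace("http://www.lexinfo.net/ontology/2.0/lexinfo#")

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("lexinfo", LEXINFO)

entry = URIRef("http://example.org/lexicon#cat")  # hypothetical entry URI
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, LEXINFO.partOfSpeech, LEXINFO.noun))

print(g.serialize(format="turtle"))
```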
Similar in spirit to GOLD, the Ontologies of Linguistic Annotation (OLiA) provide a reference inventory of linguistic categories for syntactic, morphological and semantic phenomena relevant to linguistic annotation and linguistic corpora, in the form of an ontology. They also provide machine-readable annotation schemes for more than 100 languages, linked with the OLiA reference model.[39] The OLiA ontologies represent a major hub of annotation terminology in the (Linguistic) Linked Open Data cloud, with applications in search, retrieval and machine learning over heterogeneously annotated language resources.[37]
In addition to annotation schemes, the OLiA Reference Model is also linked with the EAGLES Guidelines,[40] GOLD,[40] ISOcat,[41] the CLARIN Concept Registry,[42] Universal Dependencies,[43] lexinfo,[43] etc., thus enabling interoperability between these vocabularies. OLiA is developed as a community project on GitHub.[44]
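Because the reference model is a published OWL ontology, it can be loaded and queried with standard Semantic Web tooling. A minimal sketch with rdflib (assumes network access; the URL is OLiA's published namespace):

```python
# A minimal sketch: loading the OLiA reference model with rdflib and
# listing a few of its OWL classes (pip install rdflib; needs network access).
from rdflib import Graph
from rdflib.namespace import OWL, RDF

g = Graph()
g.parse("http://purl.org/olia/olia.owl", format="xml")

for cls in list(g.subjects(RDF.type, OWL.Class))[:10]:
    print(cls)
```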