
SMART Information Retrieval System


The SMART (System for the Mechanical Analysis and Retrieval of Text) Information Retrieval System is an information retrieval system developed at Cornell University in the 1960s.[1] Many important concepts in information retrieval were developed as part of research on the SMART system, including the vector space model, relevance feedback, and Rocchio classification. Gerard Salton led the group that developed SMART. Other contributors included Mike Lesk.

The SMART system also provides a set of test collections (corpora, queries, and reference rankings) drawn from different subjects, notably:

  • ADI: publications from information science journals
  • CACM: computer science
  • CISI: library science
  • Cranfield collection: publications from aeronautics journals
  • MEDLARS collection: publications from medical journals
  • Time (magazine) collection: archives of the general-interest magazine Time from 1963

Part of the legacy of the SMART system is the so-called SMART triple notation, a mnemonic scheme for denoting tf-idf weighting variants in the vector space model. The mnemonic for a combination of weights takes the form ddd.qqq, where the first three letters represent the term weighting applied to the collection document vectors and the last three letters represent the term weighting applied to the query vector. For example, ltc.lnn denotes ltc weighting applied to a collection document and lnn weighting applied to a query.
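To make the mnemonic concrete, here is a minimal Python sketch that splits a ddd.qqq string such as ltc.lnn into its document-side and query-side components. It is an illustration only; the function name parse_smart_scheme is hypothetical and not part of SMART.

def parse_smart_scheme(scheme: str) -> dict:
    """Split a 'ddd.qqq' SMART weighting string into labeled components."""
    doc_part, query_part = scheme.split(".")
    if len(doc_part) != 3 or len(query_part) != 3:
        raise ValueError("expected a ddd.qqq string such as 'ltc.lnn'")
    keys = ("term_frequency", "document_frequency", "normalization")
    return {"document": dict(zip(keys, doc_part)),
            "query": dict(zip(keys, query_part))}

# Example: parse_smart_scheme("ltc.lnn") returns
# {'document': {'term_frequency': 'l', 'document_frequency': 't', 'normalization': 'c'},
#  'query': {'term_frequency': 'l', 'document_frequency': 'n', 'normalization': 'n'}}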

The SMART notation is defined as follows:[2]

Symbols and notation

  • [math]\displaystyle{ D_i = \{w_{i_1}, w_{i_2}, \ldots, w_{i_t}\} }[/math]: a document vector, where [math]\displaystyle{ w_{i_k} }[/math] is the weight of term [math]\displaystyle{ T_k }[/math] in [math]\displaystyle{ D_i }[/math] and [math]\displaystyle{ t }[/math] is the number of unique terms in [math]\displaystyle{ D_i }[/math]. Positive weights characterize terms that are present in a document; a weight of zero is used for terms that are absent from a document.
  • [math]\displaystyle{ f_{i_k} }[/math]: occurrence frequency of term [math]\displaystyle{ T_k }[/math] in document [math]\displaystyle{ D_i }[/math]
  • [math]\displaystyle{ \max(f_{i_k}) }[/math]: occurrence frequency of the most common term in document [math]\displaystyle{ D_i }[/math]
  • [math]\displaystyle{ \operatorname{avg}(f_{i_k}) }[/math]: average occurrence frequency of a term in document [math]\displaystyle{ D_i }[/math]
  • [math]\displaystyle{ u_i }[/math]: number of unique terms in document [math]\displaystyle{ D_i }[/math]
  • [math]\displaystyle{ \operatorname{avg}(u) }[/math]: average number of unique terms in a document
  • [math]\displaystyle{ b_i }[/math]: number of characters in document [math]\displaystyle{ D_i }[/math]
  • [math]\displaystyle{ \operatorname{avg}(b) }[/math]: average number of characters in a document
  • [math]\displaystyle{ N }[/math]: number of documents in the collection
  • [math]\displaystyle{ n_k }[/math]: number of documents in which term [math]\displaystyle{ T_k }[/math] is present
  • [math]\displaystyle{ G }[/math]: global collection statistics
  • [math]\displaystyle{ s }[/math]: the slope in the context of pivoted document length normalization[3]
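For concreteness, the per-document and collection statistics above can be computed from a small tokenized toy collection as in the following Python sketch. This is an illustration only, not SMART's own code; the toy documents and variable names are invented to mirror the symbols.

from collections import Counter

# Toy tokenized collection (invented for illustration).
docs = [
    ["information", "retrieval", "system"],
    ["vector", "space", "model", "retrieval"],
    ["relevance", "feedback", "retrieval", "retrieval"],
]

N = len(docs)                                    # number of collection documents
tf = [Counter(d) for d in docs]                  # f_ik: term frequencies per document
n_k = Counter(t for c in tf for t in c)          # n_k: number of documents containing term T_k
u = [len(c) for c in tf]                         # u_i: unique terms per document
avg_u = sum(u) / N                               # avg(u)
b = [sum(len(t) for t in d) for d in docs]       # b_i: characters per document (one possible convention)
avg_b = sum(b) / N                               # avg(b)
max_f = [max(c.values()) for c in tf]            # max(f_ik) per document
avg_f = [sum(c.values()) / len(c) for c in tf]   # avg(f_ik) per document

print(N, dict(n_k), u, avg_u)
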
SMART term-weighting triple notation

Term frequency [math]\displaystyle{ \text{tf}(f_{i_k}) }[/math]:

  • b: [math]\displaystyle{ 1 }[/math] (binary weight)
  • t (n): [math]\displaystyle{ f_{i_k} }[/math] (raw term frequency)
  • a: [math]\displaystyle{ 0.5 + 0.5\frac{f_{i_k}}{\max(f_{i_k})} }[/math] (augmented normalized term frequency)
  • l: [math]\displaystyle{ 1+\log_2 f_{i_k} }[/math] (logarithm)
  • L: [math]\displaystyle{ \frac{1+\log_2(f_{i_k})}{1 + \log_2(\operatorname{avg}(f_{i_k}))} }[/math] (average-term-frequency-based normalization[3])
  • d: [math]\displaystyle{ 1+\log_2(1+\log_2(f_{i_k})) }[/math] (double logarithm)

Document frequency [math]\displaystyle{ \text{df}(N, n_k) }[/math]:

  • x (n): [math]\displaystyle{ 1 }[/math] (disregards the collection frequency)
  • f: [math]\displaystyle{ \log_2\left(\frac{N}{n_k}\right) }[/math] (inverse collection frequency)
  • t: [math]\displaystyle{ \log_2\left(\frac{N+1}{n_k}\right) }[/math] (inverse collection frequency)
  • p: [math]\displaystyle{ \log_2\left(\frac{N-n_k}{n_k}\right) }[/math] (probabilistic inverse collection frequency)

Document length normalization [math]\displaystyle{ g(G, D_i) }[/math]:

  • x (n): [math]\displaystyle{ 1 }[/math] (no document length normalization)
  • c: [math]\displaystyle{ \sqrt{\sum_{k=1}^t w_{i_k}^2} }[/math] (cosine normalization)
  • u: [math]\displaystyle{ 1-s+s\frac{u_i}{\operatorname{avg}(u)} }[/math] (pivoted unique normalization[3])
  • b: [math]\displaystyle{ 1-s+s\frac{b_i}{\operatorname{avg}(b)} }[/math] (pivoted character length normalization[3])

Where two letters are given for the same weighting component, the first is the symbol used by Salton and Buckley in their 1988 paper,[4] and the letter in parentheses is the symbol used in experiments reported thereafter.
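Putting the components together, the following Python sketch applies the ltc weighting (logarithmic term frequency, inverse collection frequency, cosine normalization, using the formulas above) to a document and the lnn weighting to a query, then scores the pair by an inner product. It is a minimal sketch under these assumptions, not the original SMART implementation; the toy collection and function names are invented.

import math
from collections import Counter

def ltc_weights(doc_tokens, n_k, N):
    """l: 1 + log2(f_ik); t: log2((N + 1) / n_k); c: cosine normalization."""
    tf = Counter(doc_tokens)
    w = {t: (1 + math.log2(f)) * math.log2((N + 1) / n_k[t]) for t, f in tf.items()}
    norm = math.sqrt(sum(x * x for x in w.values()))
    return {t: x / norm for t, x in w.items()}

def lnn_weights(query_tokens):
    """l: 1 + log2(f_ik); n: collection frequency disregarded; n: no normalization."""
    tf = Counter(query_tokens)
    return {t: 1 + math.log2(f) for t, f in tf.items()}

# Toy collection statistics: N documents, n_k = number of documents containing each term.
docs = [["vector", "space", "model"],
        ["relevance", "feedback"],
        ["vector", "model", "model"]]
N = len(docs)
n_k = Counter(t for d in docs for t in set(d))

doc_w = ltc_weights(docs[2], n_k, N)
query_w = lnn_weights(["vector", "model"])
score = sum(query_w[t] * doc_w.get(t, 0.0) for t in query_w)
print(round(score, 4))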

References

  1. Salton, G.; Lesk, M.E. (June 1965). "The SMART automatic document retrieval systems—an illustration". Communications of the ACM 8 (6): 391–398. doi:10.1145/364955.364990.
  2. Palchowdhury, Sauparna (2016). "On The Provenance of tf-idf". http://sauparna.sdf.org/Information_Retrieval/.ontfidf.
  3. Singhal, A.; Buckley, C.; Mitra, M. (1996). "Pivoted Document Length Normalization". SIGIR Forum 51: 176–184.
  4. Salton, G.; Buckley, C. (1988). "Term-Weighting Approaches in Automatic Text Retrieval". Information Processing & Management 24: 513–523.

Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Software:SMART_Information_Retrieval_System