Term discrimination is a way to rank keywords (index terms) according to how useful they are for information retrieval.
The method is similar to tf-idf, but it is concerned with identifying which keywords are suitable for information retrieval and which are not. It builds on the Vector Space Model, so see that article first.
The method uses the concept of Vector Space Density: the less dense an occurrence matrix is, the better an information retrieval query will perform.
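The article does not pin down how density is measured. One common concrete choice, in the spirit of Salton's space density, is the average pairwise cosine similarity between the document (row) vectors of the occurrence matrix; the sketch below assumes that choice, and the helper name density and the use of NumPy are illustrative rather than prescribed.

```python
import numpy as np

def density(A: np.ndarray) -> float:
    """Vector-space density of a document-by-term occurrence matrix.

    Assumed here to be the average pairwise cosine similarity between
    document vectors (rows of A): the more alike the documents look,
    the denser the space.
    """
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # guard against all-zero documents
    unit = A / norms                 # unit-length document vectors
    sims = unit @ unit.T             # cosine similarity of every document pair
    n = A.shape[0]
    # Average over distinct pairs only (exclude each document's self-similarity).
    return (sims.sum() - np.trace(sims)) / (n * (n - 1))
```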
An optimal index term is one that can distinguish two dissimilar documents from each other while still relating two similar documents. A sub-optimal index term, on the other hand, cannot tell two dissimilar documents apart from two similar ones.
The discrimination value of an index term is the difference between the vector-space density of the occurrence matrix with that term removed and the density of the original occurrence matrix.
Let [math]\displaystyle{ A }[/math] be the occurrence matrix, [math]\displaystyle{ A_k }[/math] be the occurrence matrix without the index term [math]\displaystyle{ k }[/math], and [math]\displaystyle{ Q(A) }[/math] be the vector-space density of [math]\displaystyle{ A }[/math]. The discrimination value of the index term [math]\displaystyle{ k }[/math] is then:

[math]\displaystyle{ DV_k = Q(A_k) - Q(A) }[/math]
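A minimal sketch of the discrimination value itself, reusing the density helper from the sketch above; forming [math]\displaystyle{ A_k }[/math] by dropping column k with np.delete is just one convenient way to do it.

```python
def discrimination_value(A: np.ndarray, k: int) -> float:
    """Discrimination value DV_k = Q(A_k) - Q(A) of the term in column k.

    A positive value means that removing the term makes the document
    space denser, i.e. the term helps keep documents apart and is
    therefore a useful index term.
    """
    A_k = np.delete(A, k, axis=1)    # occurrence matrix without index term k
    return density(A_k) - density(A)
```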
Given an occurrence matrix [math]\displaystyle{ A }[/math] and a keyword [math]\displaystyle{ k }[/math], a higher discrimination value is better, because including the keyword leads to better information retrieval.
Keywords that are sparse (occurring in very few documents) are expected to be poor discriminators because they lead to poor recall, whereas keywords that are very frequent are expected to be poor discriminators because they lead to poor precision.
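As a hypothetical illustration (the data is made up, and the helpers from the sketches above are reused), ranking the terms of a small occurrence matrix by discrimination value shows the expected pattern: the term that occurs in every document ends up at the bottom of the ranking.

```python
# Rows are documents, columns are index terms; term 3 occurs in every document.
A = np.array([
    [2, 0, 1, 1],
    [0, 3, 1, 1],
    [1, 0, 0, 1],
    [0, 2, 1, 1],
], dtype=float)

scores = [(k, discrimination_value(A, k)) for k in range(A.shape[1])]
for k, dv in sorted(scores, key=lambda kv: kv[1], reverse=True):
    print(f"term {k}: DV = {dv:+.3f}")
# The ubiquitous term 3 receives the lowest (negative) value, matching the
# intuition that very frequent keywords are poor discriminators.
```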