A diversity index is a method of measuring how many different types (e.g. species) there are in a dataset (e.g. a community). Some more sophisticated indices also account for the phylogenetic relatedness among the types.[1] Diversity indices are statistical representations of different aspects of biodiversity (e.g. richness, evenness, and dominance), which are useful simplifications for comparing different communities or sites.
When diversity indices are used in ecology, the types of interest are usually species, but they can also be other categories, such as genera, families, functional types, or haplotypes. The entities of interest are usually individual organisms (e.g. plants or animals), and the measure of abundance can be, for example, number of individuals, biomass or coverage. In demography, the entities of interest can be people, and the types of interest various demographic groups. In information science, the entities can be characters and the types of the different letters of the alphabet. The most commonly used diversity indices are simple transformations of the effective number of types (also known as 'true diversity'), but each diversity index can also be interpreted in its own right as a measure corresponding to some real phenomenon (but a different one for each diversity index).[2][3][4][5]
Many indices account only for categorical diversity between subjects or entities. Such indices, however, do not capture the total variation (diversity) between subjects or entities, which is obtained only when both categorical and qualitative diversity are considered.
True diversity, or the effective number of types, refers to the number of equally abundant types needed for the average proportional abundance of the types to equal that observed in the dataset of interest (where all types may not be equally abundant). The true diversity in a dataset is calculated by first taking the weighted generalized mean Mq−1 of the proportional abundances of the types in the dataset, and then taking the reciprocal of this. The equation is:[4][5]

$$ {}^qD = \frac{1}{M_{q-1}} = \frac{1}{\sqrt[q-1]{\sum_{i=1}^{R} p_i p_i^{q-1}}} $$
The denominator Mq−1 equals the average proportional abundance of the types in the dataset as calculated with the weighted generalized mean with exponent q − 1. In the equation, R is richness (the total number of types in the dataset), and the proportional abundance of the ith type is pi. The proportional abundances themselves are used as the nominal weights. The numbers are called Hill numbers of order q or effective number of species.[6]
When q = 1, the above equation is undefined. However, the mathematical limit as q approaches 1 is well defined and the corresponding diversity is calculated with the following equation:

$$ {}^1D = \frac{1}{M_0} = \exp\left(-\sum_{i=1}^{R} p_i \ln p_i\right) $$
which is the exponential of the Shannon entropy calculated with natural logarithms (see above). In other domains, this statistic is also known as the perplexity.
The general equation of diversity is often written in the form[2][3]

$$ {}^qD = \left(\sum_{i=1}^{R} p_i^q\right)^{1/(1-q)} $$
and the term inside the parentheses is called the basic sum. Some popular diversity indices correspond to the basic sum as calculated with different values of q.[3]
The value of q is often referred to as the order of the diversity. It defines the sensitivity of the true diversity to rare vs. abundant species by modifying how the weighted mean of the species' proportional abundances is calculated. With some values of the parameter q, the value of the generalized mean Mq−1 assumes familiar kinds of weighted means as special cases. In particular, q = 0 corresponds to the weighted harmonic mean, q = 1 to the weighted geometric mean, and q = 2 to the weighted arithmetic mean. As q approaches infinity, the weighted generalized mean with exponent q − 1 approaches the maximum pi value.
Generally, increasing the value of q increases the effective weight given to the most abundant species. This leads to obtaining a larger Mq−1 value and a smaller true diversity (qD) value with increasing q.
When q = 1, the weighted geometric mean of the pi values is used, and each species is weighted exactly by its proportional abundance (in the weighted geometric mean, the weights are the exponents). When q > 1, the weight given to abundant species is exaggerated, and when q < 1, the weight given to rare species is exaggerated. At q = 0, the species weights exactly cancel out the species proportional abundances, such that the weighted mean of the pi values equals 1 / R even when all species are not equally abundant. At q = 0, the effective number of species, 0D, hence equals the actual number of species R. In the context of diversity, q is generally limited to non-negative values, because negative values of q would give rare species so much more weight than abundant ones that qD would exceed R.[4][5]
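These special cases can be checked numerically. The sketch below (with hypothetical abundances) implements the Hill number qD, handling the q = 1 case via the exponential of the Shannon entropy, since the general formula is undefined there:

```python
import math

def hill_number(p, q):
    """True diversity (effective number of species) of order q.

    p: proportional abundances summing to 1 (illustrative example data).
    The q = 1 case uses the limit: the exponential of the Shannon entropy.
    """
    if q == 1:
        return math.exp(-sum(pi * math.log(pi) for pi in p if pi > 0))
    return sum(pi ** q for pi in p if pi > 0) ** (1 / (1 - q))

# Four species, one dominant: richness R is recovered at q = 0, and
# diversity decreases as q gives more weight to the abundant species.
p = [0.7, 0.1, 0.1, 0.1]
print(hill_number(p, 0))   # 4.0, the actual number of species
print(hill_number(p, 1))   # exponential of the Shannon entropy
print(hill_number(p, 2))   # inverse Simpson index
```

With equal abundances, every order q returns the same value R, which is the sense in which qD counts "equally abundant types".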
Richness R simply quantifies how many different types the dataset of interest contains. For example, species richness (usually noted S) is simply the number of species, e.g. at a particular site. Richness is a simple measure, so it has been a popular diversity index in ecology, where abundance data are often not available.[7] If true diversity is calculated with q = 0, the effective number of types (0D) equals the actual number of types, which is identical to richness (R).[3][5]
The Shannon index has been a popular diversity index in the ecological literature, where it is also known as Shannon's diversity index, Shannon–Wiener index, and (erroneously) Shannon–Weaver index.[8] The measure was originally proposed by Claude Shannon in 1948 to quantify the entropy (hence Shannon entropy, related to Shannon information content) in strings of text.[9] The idea is that the more letters there are, and the closer their proportional abundances in the string of interest, the more difficult it is to correctly predict which letter will be the next one in the string. The Shannon entropy quantifies the uncertainty (entropy or degree of surprise) associated with this prediction. It is most often calculated as follows:

$$ H' = -\sum_{i=1}^{R} p_i \ln p_i $$
where pi is the proportion of characters belonging to the ith type of letter in the string of interest. In ecology, pi is often the proportion of individuals belonging to the ith species in the dataset of interest. Then the Shannon entropy quantifies the uncertainty in predicting the species identity of an individual that is taken at random from the dataset.
Although the equation is here written with natural logarithms, the base of the logarithm used when calculating the Shannon entropy can be chosen freely. Shannon himself discussed logarithm bases 2, 10 and e, and these have since become the most popular bases in applications that use the Shannon entropy. Each log base corresponds to a different measurement unit, which has been called binary digits (bits), decimal digits (decits), and natural digits (nats) for the bases 2, 10 and e, respectively. Comparing Shannon entropy values that were originally calculated with different log bases requires converting them to the same log base: change from the base a to base b is obtained with multiplication by logba.[9]
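The base conversion can be verified with a short sketch (the distribution below is illustrative): an entropy computed in bits is turned into nats by multiplying by ln 2, i.e. by log_e(2).

```python
import math

# Converting a Shannon entropy from base a to base b: multiply by log_b(a).
# Here an entropy computed in bits (base 2) is converted to nats (base e).
p = [0.5, 0.25, 0.25]                              # illustrative distribution
H_bits = -sum(pi * math.log2(pi) for pi in p)      # 1.5 bits
H_nats = H_bits * math.log(2)                      # multiply by log_e(2)

# The converted value matches the entropy computed directly in base e.
print(H_bits, H_nats, -sum(pi * math.log(pi) for pi in p))
```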
The Shannon index (H') is related to the weighted geometric mean of the proportional abundances of the types. Specifically, it equals the logarithm of true diversity as calculated with q = 1:[4]

$$ H' = \ln({}^1D) $$
This can also be written

$$ H' = \ln\left(\frac{1}{p_1^{p_1} p_2^{p_2} \cdots p_R^{p_R}}\right) $$
which equals

$$ H' = \ln\left(\frac{1}{\prod_{i=1}^{R} p_i^{p_i}}\right) $$
Since the sum of the pi values equals 1 by definition, the denominator equals the weighted geometric mean of the pi values, with the pi values themselves being used as the weights (exponents in the equation). The term within the parentheses hence equals true diversity 1D, and H' equals ln(1D).[2][4][5]
When all types in the dataset of interest are equally common, all pi values equal 1 / R, and the Shannon index hence takes the value ln(R). The more unequal the abundances of the types, the larger the weighted geometric mean of the pi values, and the smaller the corresponding Shannon entropy. If practically all abundance is concentrated to one type, and the other types are very rare (even if there are many of them), Shannon entropy approaches zero. When there is only one type in the dataset, Shannon entropy exactly equals zero (there is no uncertainty in predicting the type of the next randomly chosen entity).
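These boundary properties are easy to confirm numerically; the helper below is a minimal sketch with made-up abundances:

```python
import math

def shannon(p):
    """Shannon entropy with natural logarithms; zero abundances contribute nothing."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

print(shannon([0.2] * 5))                   # equals ln(5), the maximum for 5 types
print(shannon([1.0]))                       # 0.0: a single type, no uncertainty
print(shannon([0.97, 0.01, 0.01, 0.01]))    # close to zero: strongly dominated
```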
In machine learning, the Shannon index is also known as information gain.
The Rényi entropy is a generalization of the Shannon entropy to other values of q than 1. It can be expressed:

$$ {}^qH = \frac{1}{1-q} \ln\left(\sum_{i=1}^{R} p_i^q\right) $$
which equals

$$ {}^qH = \ln\left({}^qD\right) $$
This means that taking the logarithm of true diversity based on any value of q gives the Rényi entropy corresponding to the same value of q.
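The identity between the Rényi entropy and the logarithm of true diversity can be checked numerically; both functions below are minimal sketches for q ≠ 1, and the abundances are illustrative:

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy of order q (q != 1); its q -> 1 limit is the Shannon entropy."""
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1 - q)

def hill_number(p, q):
    """True diversity ^qD of order q (q != 1)."""
    return sum(pi ** q for pi in p if pi > 0) ** (1 / (1 - q))

p = [0.5, 0.3, 0.2]
for q in (0, 0.5, 2, 3):
    # The two columns agree for every order q.
    print(q, renyi_entropy(p, q), math.log(hill_number(p, q)))
```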
The Simpson index was introduced in 1949 by Edward H. Simpson to measure the degree of concentration when individuals are classified into types.[10] The same index was rediscovered by Orris C. Herfindahl in 1950.[11] The square root of the index had already been introduced in 1945 by the economist Albert O. Hirschman.[12] As a result, the same measure is usually known as the Simpson index in ecology, and as the Herfindahl index or the Herfindahl–Hirschman index (HHI) in economics.
The measure equals the probability that two entities taken at random from the dataset of interest represent the same type.[10] It equals:

$$ \lambda = \sum_{i=1}^{R} p_i^2 $$
where R is richness (the total number of types in the dataset). This equation also equals the weighted arithmetic mean of the proportional abundances pi of the types of interest, with the proportional abundances themselves being used as the weights.[2] Proportional abundances are by definition constrained to values between zero and one, and because λ is their weighted arithmetic mean, λ ≥ 1/R, with equality when all types are equally abundant.
By comparing the equation used to calculate λ with the equations used to calculate true diversity, it can be seen that 1/λ equals 2D, i.e., true diversity as calculated with q = 2. The original Simpson's index hence equals the corresponding basic sum.[3]
The interpretation of λ as the probability that two entities taken at random from the dataset of interest represent the same type assumes that the first entity is returned to the dataset before taking the second entity. If the dataset is very large, sampling without replacement gives approximately the same result, but in small datasets, the difference can be substantial. If the dataset is small, and sampling without replacement is assumed, the probability of obtaining the same type with both random draws is:

$$ \ell = \frac{\sum_{i=1}^{R} n_i (n_i - 1)}{N (N - 1)} $$
where ni is the number of entities belonging to the ith type and N is the total number of entities in the dataset.[10] This form of the Simpson index is also known as the Hunter–Gaston index in microbiology.[13]
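The difference between the two sampling assumptions can be illustrated with hypothetical count data: in a five-entity dataset the two forms diverge noticeably, while for thousands of entities they nearly coincide.

```python
def simpson_with_replacement(counts):
    """lambda: probability that two random draws (with replacement) match in type."""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts)

def simpson_without_replacement(counts):
    """Hunter-Gaston form: same probability under sampling without replacement."""
    n = sum(counts)
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))

small = [3, 2]          # 5 entities: the difference is substantial
large = [3000, 2000]    # same proportions, large N: the forms nearly agree
print(simpson_with_replacement(small), simpson_without_replacement(small))
print(simpson_with_replacement(large), simpson_without_replacement(large))
```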
Since the mean proportional abundance of the types increases with decreasing number of types and increasing abundance of the most abundant type, λ obtains small values in datasets of high diversity and large values in datasets of low diversity. This is counterintuitive behavior for a diversity index, so transformations of λ that increase with increasing diversity have often been used instead. The most popular of these are the inverse Simpson index (1/λ) and the Gini–Simpson index (1 − λ).[2][3] Both have also been called the Simpson index in the ecological literature, so care is needed to avoid accidentally comparing different indices as if they were the same.
The inverse Simpson index equals:

$$ \frac{1}{\lambda} = \frac{1}{\sum_{i=1}^{R} p_i^2} = {}^2D $$
This simply equals true diversity of order 2, i.e. the effective number of types that is obtained when the weighted arithmetic mean is used to quantify average proportional abundance of types in the dataset of interest.
The index is also used as a measure of the effective number of parties.
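As an illustration of that use, the sketch below applies the inverse Simpson index to vote shares; the shares are made up for the example, not real election data.

```python
# Effective number of parties: the inverse Simpson index applied to vote
# shares. The shares below are illustrative, not real election results.
def effective_number_of_parties(shares):
    return 1 / sum(s ** 2 for s in shares)

print(effective_number_of_parties([0.42, 0.35, 0.15, 0.08]))  # about 3.05
print(effective_number_of_parties([0.25] * 4))                # exactly 4.0
```

Four equally sized parties count as exactly four; unequal shares shrink the effective number toward the count of the dominant parties.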
The Gini–Simpson index is also called Gini impurity, or Gini's diversity index,[14] in the field of machine learning. The original Simpson index λ equals the probability that two entities taken at random from the dataset of interest (with replacement) represent the same type. Its transformation 1 − λ therefore equals the probability that the two entities represent different types. This measure is also known in ecology as the probability of interspecific encounter (PIE)[15] and the Gini–Simpson index.[3] It can be expressed as a transformation of the true diversity of order 2:

$$ 1 - \lambda = 1 - \sum_{i=1}^{R} p_i^2 = 1 - \frac{1}{{}^2D} $$
The Gibbs–Martin index of sociology, psychology, and management studies,[16] which is also known as the Blau index, is the same measure as the Gini–Simpson index.
The quantity is also known as the expected heterozygosity in population genetics.
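A short sketch with illustrative frequencies confirms that 1 − λ and the transformation of the order-2 true diversity agree; read the pi values as allele frequencies and 1 − λ is the expected heterozygosity.

```python
# Gini-Simpson index (1 - lambda): probability that two entities drawn at
# random with replacement represent different types. The frequencies below
# are illustrative.
p = [0.5, 0.3, 0.2]
lam = sum(pi ** 2 for pi in p)        # Simpson's lambda = 0.38
gini_simpson = 1 - lam                # 0.62
inverse_simpson = 1 / lam             # true diversity of order 2
print(gini_simpson, 1 - 1 / inverse_simpson)  # the two expressions agree
```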
The Berger–Parker index, named after Wolfgang H. Berger and Frances Lawrence Parker,[17] equals the maximum pi value in the dataset, i.e., the proportional abundance of the most abundant type. This corresponds to the weighted generalized mean of the pi values when q approaches infinity, and hence equals the inverse of the true diversity of order infinity (1/∞D).
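The limiting behaviour can be checked numerically with illustrative abundances: a large finite q already brings the generalized mean close to the maximum pi, so 1/qD approaches the Berger–Parker index.

```python
# Berger-Parker index: the proportional abundance of the most abundant type,
# i.e. the q -> infinity limit where 1/^(inf)D = max(p_i). Abundances are
# illustrative.
p = [0.6, 0.25, 0.1, 0.05]
berger_parker = max(p)

# A large finite q approximates the limit of the true-diversity formula.
q = 200
approx = sum(pi ** q for pi in p) ** (1 / (1 - q))   # close to 1 / 0.6
print(berger_parker, 1 / approx)
```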