In computer science, an implicit data structure or space-efficient data structure is a data structure that stores very little information other than the main or required data: a data structure that requires low overhead. They are called "implicit" because the position of the elements carries meaning and encodes the relationships between elements; this contrasts with the use of pointers to give an explicit relationship between elements. Definitions of "low overhead" vary, but they generally mean constant overhead; in big O notation, O(1) overhead. A less restrictive definition is a succinct data structure, which allows greater overhead.
An implicit data structure is one with constant O(1) space overhead (above the information-theoretic lower bound).
Historically, Munro & Suwanda (1980) defined an implicit data structure (and algorithms acting on one) as one "in which structural information is implicit in the way data are stored, rather than explicit in pointers." Their definition is somewhat vague: most strictly, an implicit data structure is a single array with only its size retained (a single number of overhead),[1] and more loosely it is a data structure with constant overhead, O(1).[2] The latter definition is today more standard, and the still-looser notion of a data structure with non-constant but small o(n) overhead is today known as a succinct data structure, as defined by Jacobson (1988); it was referred to as semi-implicit by Munro & Suwanda (1980).[3]
A fundamental distinction is between static data structures (read-only) and dynamic data structures (which can be modified). Simple implicit data structures, such as representing a sorted list as an array, may be very efficient as static data structures but inefficient as dynamic data structures, because modification operations (such as insertion into a sorted list) are slow.
A trivial example of an implicit data structure is the array data structure, which is an implicit data structure for a list and requires only the constant overhead of its length; by contrast, a linked list associates a pointer with each data element, explicitly giving the relationship from one element to the next. Similarly, a null-terminated string is an implicit data structure for a string (list of characters). These are considered very simple because they are static data structures (read-only) and only admit the simple operation of iteration over the elements.
Similarly simple is representing a multi-dimensional array as a single 1-dimensional array, together with its dimensions. For example, an m × n array can be represented as a single list of length m·n, together with the numbers m and n (instead of as a 1-dimensional array of pointers to each 1-dimensional subarray). The elements need not be of the same type, and a table of data (a list of records) may similarly be represented implicitly as a flat (1-dimensional) list, together with the length of each field, so long as each field has uniform size (so a single size can be used per field, not per record).
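A minimal sketch of this flat, row-major representation (the class and method names below are illustrative, not part of any standard library):

```python
class Matrix:
    """Sketch of an implicit 2-D array: the only overhead beyond the
    m*n elements is the pair of dimensions (m, n)."""

    def __init__(self, m, n, fill=0):
        self.m, self.n = m, n          # the O(1) overhead: two integers
        self.data = [fill] * (m * n)   # one flat list of length m*n

    def get(self, i, j):
        # Row-major indexing: an element's position alone encodes which
        # row and column it belongs to; no per-row pointers are stored.
        return self.data[i * self.n + j]

    def set(self, i, j, value):
        self.data[i * self.n + j] = value
```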
A less trivial example is representing a sorted list by a sorted array, which allows search in logarithmic time by binary search. Contrast with a search tree, specifically a binary search tree, which also allows logarithmic-time search, but requires pointers. A sorted array is only efficient as a static data structure, as modifying the list is slow – unlike a binary search tree – but does not require the space overhead of a tree.
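A sketch of such pointer-free, logarithmic-time search on a sorted array (Python's standard bisect module provides the same operation on a plain list):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.

    The structure is implicit: the sorted order of the array itself
    guides the search, and no pointers are stored anywhere.
    """
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```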
An important example of an implicit data structure is representing a perfect binary tree as a list, in increasing order of depth, so root, first left child, first right child, first left child of first left child, etc. Such a tree occurs notably for an ancestry chart to a given depth, and the implicit representation is known as an Ahnentafel (ancestor table).
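A sketch of the Ahnentafel layout, using the standard 1-based numbering in which the father of entry k is entry 2k and the mother is entry 2k + 1 (the sample names and helper functions are purely illustrative):

```python
# The tree structure is implicit in the 1-based index of each entry;
# no parent or child links are stored.
ancestors = [None,                       # index 0 unused (1-based numbering)
             "subject",
             "father", "mother",
             "father's father", "father's mother",
             "mother's father", "mother's mother"]

def father(k):
    return ancestors[2 * k]

def mother(k):
    return ancestors[2 * k + 1]

def child(k):
    # Integer division recovers the entry whose ancestor entry k is.
    return ancestors[k // 2]
```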
This can be generalized to a complete binary tree (where the last level may be incomplete), which yields the best-known example of an implicit data structure, namely the binary heap, which is an implicit data structure for a priority queue. This is more sophisticated than earlier examples because it allows multiple operations, and is an efficient dynamic data structure (it allows efficient modification of the data): not only top, but also insert and pop.
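A rough sketch of the implicit layout behind a binary min-heap (0-based indices: the parent of index i is (i − 1)//2 and its children are 2i + 1 and 2i + 2; the helper names are illustrative, and Python's standard heapq module offers equivalent operations on a plain list):

```python
def heap_push(heap, item):
    """Insert item, restoring the heap property by sifting up."""
    heap.append(item)
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] > heap[i]:
        heap[i], heap[(i - 1) // 2] = heap[(i - 1) // 2], heap[i]
        i = (i - 1) // 2

def heap_pop(heap):
    """Remove and return the minimum, restoring the heap by sifting down."""
    top, last = heap[0], heap.pop()
    if heap:
        heap[0] = last
        i = 0
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < len(heap) and heap[child] < heap[smallest]:
                    smallest = child
            if smallest == i:
                break
            heap[i], heap[smallest] = heap[smallest], heap[i]
            i = smallest
    return top
```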
More sophisticated implicit data structures include the beap (bi-parental heap).
The trivial examples of lists or tables of values date to prehistory, while historically non-trivial implicit data structures date at least to the Ahnentafel, which was introduced by Michaël Eytzinger in 1590 for use in genealogy. In formal computer science, the first implicit data structure is generally considered to be the sorted list, used for binary search, which was introduced by John Mauchly in 1946, in the Moore School Lectures, the first ever set of lectures regarding any computer-related topic.[4][5] The binary heap was introduced in Williams (1964) to implement heapsort.[5] The notion of an implicit data structure was formalized in Munro & Suwanda (1980), as part of introducing and analyzing the beap.[5]
See publications of Hervé Brönnimann, J. Ian Munro, and Greg Frederickson.