The term “seizure detection” generally refers to the use of an automated algorithm (a “seizure detection algorithm” or “SDA”) to recognize that a seizure is occurring (or has occurred) through analysis of biologic signals recorded from a patient with epilepsy. Essentially, the goal is to receive and analyze a set of signals and transform the information they contain into an output signal or indicator of whether or not the patient is in a state of seizure (Figure 1-Figure 2). Important objectives are to perform this transformation as quickly, efficiently and accurately as possible. For most applications, “real-time” seizure detection is needed, requiring “online” analysis of signal data points as soon as they are available (e.g., received into a processor), with detection decisions being made with negligible delay and without use of “future information” (i.e., only using information that is available at the time the detection is made).
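To make the “online” constraint concrete, the following minimal sketch (in Python, with an illustrative mean-squared-amplitude measure and an arbitrary threshold, neither drawn from any published SDA) processes one sample at a time and bases each decision only on samples already received:

```python
from collections import deque

class OnlineEnergyDetector:
    """Toy online detector: each call to update() uses only the samples
    received so far (no "future information"), mimicking real-time use."""

    def __init__(self, window_len=256, threshold=5.0):
        self.window = deque(maxlen=window_len)  # keeps only recent samples
        self.threshold = threshold              # illustrative value only

    def update(self, sample):
        """Ingest one new sample; return True if a detection is declared
        at this instant."""
        self.window.append(sample)
        if len(self.window) < self.window.maxlen:
            return False                        # buffer still filling
        energy = sum(x * x for x in self.window) / len(self.window)
        return energy > self.threshold
```

A real SDA would replace the toy energy measure with the quantities discussed below, but the control flow (decide now, never look ahead) is the defining feature of online detection.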
The term “seizure detection” differs from “seizure prediction” (see, e.g., Mormann 2008). In the case of detection, the SDA is not expected to identify the presence of a seizure until after it has begun “electrographically” (i.e., characteristics of the seizure have appeared in the biologic signal(s) being monitored). In the case of prediction, the algorithm is expected to forecast impending seizures before they start. Of course, this distinction can become muddled when there is uncertainty about when the seizure starts and when the prediction algorithm is based on detecting a signal pattern that is consistently followed by a seizure. The latter may indicate that the seizure has actually already begun by the time the detected pattern appears, in which case the “prediction” method may actually just be “detecting” the seizure before its (subjectively scored) onset.
Approximately 1-2% of the population suffers from seizures. The unpredictability of seizure occurrence is a primary reason for the disability associated with epilepsy, and this uncertainty dramatically impacts the quality of life of patients and their caregivers (Fisher et al. 2000). Although most persons with epilepsy show no effects of the disorder except during and immediately following seizures, not knowing when these events may happen can prevent them from driving, swimming, cooking, etc. Removing this uncertainty, through sufficiently early and accurate seizure detection and immediate warning, may thus significantly improve quality of life. Accurate detection and logging of seizures can also improve the diagnostic yield of patient monitoring during epilepsy surgery evaluation and advance understanding of epilepsy as a dynamical disease. Additionally, accurate automatic seizure detection offers the potential for automated “closed-loop” therapy, in which a therapy such as electrical stimulation, drug infusion, cooling, or biofeedback is delivered in response to a seizure detection (Osorio et al. 2001, Theodore and Fisher 2004, Osorio et al. 2005, Morrell 2006, Anderson et al. 2008, Stacey and Litt 2008, Rothman 2008, Osorio and Frei 2009). Closed-loop control of seizures has major advantages over therapy that does not utilize immediate “feedback” of the patient state. In particular, closed-loop therapy can be precisely timed (e.g., delivered immediately upon detection of a seizure) and even adaptively dosed (e.g., delivering different dose levels and using different therapy modalities and delivery sites depending upon measured seizure characteristics and patient states). In other words, the therapy can be tailored to be administered only when and where needed. Coupling quantitative monitoring algorithms with therapy also enables objective assessment of therapeutic efficacy, by enabling characterization of therapy dose-response and correlation of administration with changes in seizure severity and frequency of occurrence over time.
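As a purely hypothetical illustration of the closed-loop idea (the detector and therapy interfaces below are stand-ins, not any actual device API), therapy is gated in time by detections and could be dosed according to measured seizure characteristics:

```python
def closed_loop(detector, deliver_therapy, samples):
    """Sketch of closed-loop control: `detector` is any online detector
    (e.g., the OnlineEnergyDetector sketched above) and `deliver_therapy`
    is a hypothetical stand-in for a stimulation/infusion interface."""
    for x in samples:
        if detector.update(x):         # therapy only when a seizure is detected
            deliver_therapy(dose=1.0)  # fixed dose here; an adaptive scheme
                                       # could scale dose to seizure severity
```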
There are numerous seizure detection algorithms described in the literature. The most prominent early attempt at automated seizure detection was made by Gotman (Gotman 1982), building upon earlier work by him and others to quantify EEG transients/spikes (see, e.g., Gotman and Gloor 1976, Ives et al. 1976) and nonstationarities (Lopes da Silva 1975). Today most algorithms for seizure detection are based upon moving-window analysis of electrical signals recorded from the scalp (the electroencephalogram or “EEG”) or directly from the brain (electrocorticogram or “ECoG” if from the cortex, otherwise intracranial EEG or “iEEG”) (see, e.g., Osorio et al. 1998 and references therein), though many other signals may also be used, including cardiac-based (Marshall et al. 1983, O’Donovan et al. 1996, Frei et al. 1996), chemical (Crick et al. 2007) and motion-related (Nijsen et al. 2005) signals. In each window, one or more quantifying measures are computed from the data and changes in their values are analyzed as a function of time. Commonly used measures for EEG quantification include (i) amplitude and/or signal power, often restricted to a particular frequency band (or weighted as a function of frequency) via application of a filter to the signals, (ii) frequency changes in the signal, (iii) phase variable changes, (iv) rhythmicity changes, and (v) a measure of distance between the signal segment and a template signal with known morphology. These quantities may also be combined to derive other measures, such as signal arc-length or line-length measures (Esteller et al. 2001), or measures of similarity between the power spectral densities obtained from different signal epochs (Murro et al. 1991, Alarcon et al. 1995, Gabor and Seyal 1996). Depending upon whether a measure utilizes one, two, or several input signals to produce its output, it is referred to as a univariate, bivariate or multivariate measure, respectively. Measure values in the most recent moving window(s) are typically compared to reference or background values to identify statistically significant changes associated with seizure activity. Often the ratio between the recent moving window (“foreground”) and past non-seizure values (“background”) is compared to a threshold in order to detect significant changes (Osorio et al. 1998).
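The foreground/background ratio idea can be sketched as follows (a simplified illustration, not the algorithm of Osorio et al. 1998; the line-length measure follows Esteller et al. 2001, while the window lengths and threshold are arbitrary placeholders):

```python
import numpy as np

def line_length(x):
    """Line-length measure: sum of absolute first differences, which grows
    with both the amplitude and frequency of the signal."""
    return np.sum(np.abs(np.diff(x)))

def ratio_detector(signal, fs, fg_sec=2.0, bg_sec=30.0, threshold=3.0):
    """Compare a short "foreground" window against the median line-length
    of the immediately preceding "background" epoch; flag times where the
    ratio exceeds a threshold. Returns detection times in seconds."""
    fg_n, bg_n = int(fg_sec * fs), int(bg_sec * fs)
    detections = []
    for t in range(bg_n + fg_n, len(signal), fg_n):
        fg = line_length(signal[t - fg_n:t])
        bg = np.median([line_length(signal[s:s + fg_n])
                        for s in range(t - fg_n - bg_n, t - fg_n, fg_n)])
        if bg > 0 and fg / bg > threshold:   # significant change vs. background
            detections.append(t / fs)
    return detections
```

The median makes the background reference robust to brief transients; a practical implementation would typically also update the background slowly and exclude detected seizure segments from it, so that the reference remains representative of the non-seizure state.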
There are several difficult challenges associated with SDA development, including:
Seizures are a symptom associated with abnormal electrical activity in the brain, sometimes described as an electrical storm or an earthquake in the brain (Osorio et al. 2010). Unfortunately, answers to the questions "what is a seizure?" and "when to detect it?" remain elusive, since there is currently no consensus or objective definition of what constitutes a seizure. This significantly hinders progress in seizure detection, as it is difficult to develop an algorithm that detects events with perfect precision when the events themselves are not clearly defined.
Even when the EEG signal contains an unequivocal seizure, expert reviewers sometimes differ significantly (by tens of seconds or more) in their respective markings of the seizure’s electrographic onset (EO) and electrographic end (EE) times. This inter-rater variability in EO and EE markings complicates SDA development and performance assessment, since most applications desire rapid detection of the signal changes that immediately follow EO. Depending upon when the EO is marked, the signal change immediately following might, for example, comprise a train of spikes, a rhythmic high frequency oscillation, or some other type of signal change (see, e.g., Figure 3). Thus one can appreciate the importance of an adaptive SDA that can be tailored to detect virtually any type of relevant signal change, depending on what the electroencephalographer/epileptologist/user desires to detect or what is needed for the particular application.
A significant limitation in comparing expert visual analysis (EVA) scores with the results of an SDA is that EVA is typically retrospective: the electroencephalographer identifies that a seizure has occurred and then pages forward and backward through the signal display/printout to set the times at which the event is believed to start and end. An SDA that operates in real time, on the other hand, is required to issue a decision at a particular time using only signal information available up to that point. Some SDAs operate retrospectively, first determining a seizure’s presence and then backing up to determine when it started (Chan et al. 2008); while these approaches have some utility for offline processing, the associated detection delay limits their usefulness for warning or closed-loop control applications.
Another difficulty in determining “what to detect and when” arises because in certain applications the user may not want to detect all seizures. In seizure warning applications, for example, it may be desirable to only detect, or at least only warn of, the seizures that have clinical/behavioral manifestations or some loss of normal function. Classifying seizures as either “clinical” or “subclinical,” depending upon whether behavioral manifestations are present or absent, is an important objective but can be difficult to accomplish with a good degree of accuracy: patients are often unaware that they have had a seizure, an expert observer is not always present, and cognitive/functional testing during seizures to assess potential impairment is rarely administered (Osorio and Frei 2010). Given these factors, a more proper terminology for seizure classification divides events into: (i) known clinical seizures (“KCSz”) and (ii) seizures not known to be clinical (“NKCSz”).
Practical strategies for SDA development, in the face of the aforementioned uncertainties, generally fall into two groups: (1) develop an SDA that detects certain specific signal characteristics that a group of seizures have in common (e.g., the earliest signal pattern common to all KCSz), or (2) develop multi-faceted SDAs that attempt to detect a variety of abnormal epileptiform activity present in brain signals, including not just unequivocal seizures but other relevant activity such as brief seizures, bursts, spike trains, and even single spikes, and then correlate these detections with clinical events of interest, in hopes of providing the most complete information possible about the brain dynamics underlying that patient’s seizures. The first approach has limitations in that seizures lacking the target signal characteristics may go undetected, while the second approach tends to detect both artifacts and other paroxysmal events (both epileptic and non-epileptic) that may not generally be considered to be seizures.
A challenge in SDA development is to detect the earliest signal change that will evolve with high likelihood into a clinical seizure. However, since onset patterns of “subclinical seizures” (i.e., electrographic seizures that are not known to be clinical) are often very similar to those of clinical seizures, the task involves a speed vs. accuracy trade-off: how far should the SDA let a seizure evolve before making a detection decision? The relative frequency of clinical vs. subclinical events, the negative consequences of improper classification (which may depend, e.g., upon the severity of clinical symptoms or the side-effects of a detection-triggered therapy) and the patient-specific signal patterns are all important factors in deciding when to detect. Figure 3-Figure 5 provide an illustrative example. Figure 9-Figure 11 at the end of this article give another such example for the interested reader.
In seizure warning and automated therapy delivery applications, it is desirable for any SDA to detect seizures as early as possible. This goal, however, conflicts with the desire for highly accurate detections: the more signal the algorithm can “see,” the more information it has at its disposal for improving detection accuracy, but the later the resulting detection will be. The two examples in Figure 3-Figure 11 illustrate this challenge. Speed and accuracy may both be improved by improving signal acquisition, such as by placing electrodes inside the brain nearer to the location where seizures start, or by utilizing multiple sensors to help separate signal changes of interest from artifacts. However, such improvements may come at the expense of increased invasiveness of monitoring.
The EEG, especially when recorded from the scalp, is prone to many different artifacts that obstruct the view of underlying brain activity. Figure 6 shows some of these common artifacts. Most SDAs are based, at least in part, on detecting changes in the distribution of signal energy as a function of time and frequency (time-frequency-energy or “TFE” analysis). One difficulty posed by artifacts/noise is that they often overlap in the frequency domain with the signals of interest (seizures). The artifacts also come and go with time, as do seizures, making the degree of overlap at any given moment somewhat unpredictable. The underlying brain signals and their characteristics also change with time and with the state of the patient (i.e., they are non-stationary), which can pose a problem for certain popular conventional analysis techniques. Fourier analysis is perhaps the most common TFE analysis technique, but it carries an underlying assumption that the signals being analyzed are stationary, which limits its utility for this application. Over the past two decades, wavelet-based TFE analysis and other related methods have become popular tools for use with EEG and other non-stationary signals, in part because their multi-resolution approach deals with the inherent non-stationarity of the signal more successfully (Schiff 1994, Osorio et al. 1998, Jouny et al. 2010). Newer methods for TFE analysis of non-stationary signals, such as Intrinsic Timescale Decomposition (ITD) (Frei and Osorio 2007), provide additional tools to decompose these and other complex signals into components of interest, enabling separation of seizure/epileptiform signal components from background/normal components.
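As a small illustration of the multi-resolution approach (a sketch using the PyWavelets package; the choice of the "db4" wavelet and five decomposition levels is arbitrary and not taken from any published SDA):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_band_energies(x, wavelet="db4", level=5):
    """Decompose a signal epoch into wavelet detail coefficients and return
    the energy captured at each scale. Seizure-related rhythms typically
    appear as energy shifts in particular detail levels, which can then be
    tracked over successive epochs without assuming stationarity."""
    x = np.asarray(x, dtype=float)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # coeffs = [approx_L, detail_L, ..., detail_1]; detail_1 is the finest scale
    return {f"D{level - i + 1}": float(np.sum(c ** 2))
            for i, c in enumerate(coeffs[1:], start=1)}
```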
The processors approved for use in today's implantable medical devices are much more limited in their processing capability than desktop computers or those used in other non-implanted applications. Capabilities for implantable devices, including those currently under development, are 4-5 orders of magnitude lower than present-day computers, functioning on a level roughly equivalent to the IBM 286-class PCs of the mid-1980s (Werder 2007). Reasons for this include the cost of gaining regulatory approval for state-of-the-art processors and the manufacturing constraints imposed by regulatory agencies. This imposes significant limitations upon the computational complexity of algorithms that may be used in seizure warning and closed-loop therapy devices. Despite the fact that the processing power required by an SDA typically increases at least linearly with the sampling rate and with the number of signals being monitored, current trends toward very high sampling rates (e.g., 10 kHz or more) and micro-electrodes, which may number well over a hundred to cover even small regions of the brain, continue in hopes that these approaches will provide useful information that cannot otherwise be obtained. Other research has focused on developing SDAs that retain desirable characteristics of the most promising existing algorithms but require less computational power (Bhavaraju 2006, Raghunathan et al. 2009, Raghunathan et al. 2010).
There is a great deal of inter-subject variability in seizure characteristics, which makes it difficult for any prospective algorithm to detect all seizures across subjects. Thus far, there is no “one size fits all” SDA that will detect every seizure for every subject with any reasonable degree of accuracy. However, the frequent lack of a priori information about a subject’s seizures and the need for a reasonably broadly applicable approach to seizure detection have led to the development of generic SDAs, which may be group-optimized in their parameter settings to detect as large a set of seizures as possible (Osorio et al. 1998). Yet the stereotypic nature of seizures within a given subject affords significant improvement when generic SDAs are tailored or adapted to make use of an individual’s or group’s seizure fingerprint(s). Flexible SDAs, along with strategies for automating individualized adaptation of their parameters in a way that allows efficient physician supervision (e.g., providing relevant patient-specific information, including samples of seizures one wishes to detect, less intense seizures one wishes to avoid detecting, and typical variations observed in non-seizure data), are likely to outperform a generic approach (Frei et al. 2002, Qu and Gotman 1993, Shoeb et al. 2004, Qu and Gotman 1995, Wilson 2006, Haas et al. 2007).
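One simple way to individualize a generic SDA, sketched below under the assumption that per-epoch detector scores and expert seizure/non-seizure labels are available for the patient, is to re-tune the detection threshold on that patient's own data (the use of Youden's J as the selection criterion is an illustrative choice, not a standard from the literature):

```python
import numpy as np

def tune_threshold(scores, is_seizure, candidate_thresholds):
    """Pick the patient-specific threshold maximizing Youden's J
    (sensitivity + specificity - 1) over labeled training epochs.
    Assumes both seizure and non-seizure epochs are present."""
    scores = np.asarray(scores, dtype=float)
    is_seizure = np.asarray(is_seizure, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in candidate_thresholds:
        detected = scores > t
        sensitivity = detected[is_seizure].mean()      # seizure epochs caught
        specificity = (~detected[~is_seizure]).mean()  # non-seizure left alone
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```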
SDA performance is conventionally assessed using sensitivity, specificity and speed of detection, in comparison to the “gold standard” of expert visual analysis of EEG and patient video by a trained electroencephalographer/epileptologist. The inter-relationship between sensitivity and specificity is often analyzed using Receiver Operating Characteristic (ROC) curves. However, the results obtained using each of these standard statistics must be critically and carefully interpreted to be meaningful, and are far from complete in describing SDA performance; this may be illustrated with a few simple examples (see Figure 7-Figure 8). Extensive prospective validation of any SDA is required to properly assess its performance (see, e.g., Osorio et al. 2001).
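For concreteness, the following sketch computes event-based sensitivity, false positive rate per hour, and mean detection latency from lists of expert-marked onsets and SDA detection times; the matching rule (a detection counts as a true positive if it falls within a fixed tolerance after an unmatched onset) is one common convention, not a universal standard:

```python
def score_sda(onsets, detections, tolerance_sec=60.0, record_hours=24.0):
    """Event-based performance summary. `onsets` are expert-marked
    electrographic onset times and `detections` are SDA output times,
    both in seconds from the start of the recording."""
    matched, latencies, false_pos = set(), [], 0
    for d in sorted(detections):
        hit = next((o for o in onsets
                    if 0.0 <= d - o <= tolerance_sec and o not in matched),
                   None)
        if hit is None:
            false_pos += 1             # no onset explains this detection
        else:
            matched.add(hit)           # each onset matched at most once
            latencies.append(d - hit)  # delay relative to marked onset
    return {
        "sensitivity": len(matched) / len(onsets) if onsets else float("nan"),
        "false_positives_per_hour": false_pos / record_hours,
        "mean_latency_sec": (sum(latencies) / len(latencies)
                             if latencies else float("nan")),
    }
```

Note how the inter-rater variability in onset marking discussed earlier propagates directly into the latency and even the sensitivity figures, which is one reason these standard statistics require careful interpretation.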
Accurate automated seizure detection remains an important challenge and a critical first step in (1) removing the uncertainty associated with when epileptic seizures will occur, (2) furthering our understanding of seizures and what may cause them and (3) enabling the development of systems for automated “closed-loop” therapy intended to terminate seizures as they begin or even prevent their occurrence altogether.
Publicly Available Databases: Recently, several databases of biological signal data (e.g., EEG, ECoG or EKG) containing seizures have been made publicly available. The use of common datasets is essential for comparing performance between different detection algorithms and for properly assessing prospective performance on data that were not used in developing the algorithm parameters. Some such databases can be found at:
Some Existing Commercially-Available Seizure Detection Algorithms: