Earthquake forecasting is a branch of the science of seismology concerned with the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades.[1] While forecasting is usually considered to be a type of prediction, earthquake forecasting is often differentiated from earthquake prediction, whose goal is the specification of the time, location, and magnitude of future earthquakes with sufficient precision that a warning can be issued.[2][3] Both forecasting and prediction of earthquakes are distinguished from earthquake warning systems, which, upon detection of an earthquake, provide a real-time warning to regions that might be affected.
In the 1970s, scientists were optimistic that a practical method for predicting earthquakes would soon be found, but by the 1990s continuing failure led many to question whether it was even possible.[4] Demonstrably successful predictions of large earthquakes have not occurred, and the few claims of success are controversial.[5] Consequently, many scientific and government resources have been used for probabilistic seismic hazard estimates rather than prediction of individual earthquakes. Such estimates are used to establish building codes, insurance rate structures, awareness and preparedness programs, and public policy related to seismic events.[6] In addition to regional earthquake forecasts, such seismic hazard calculations can take factors such as local geological conditions into account. Anticipated ground motion can then be used to guide building design criteria.[citation needed]
Methods for earthquake forecasting generally look for trends or patterns that lead to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; for this reason, such approaches are sometimes called statistical methods. They tend to operate over relatively long time periods, which suits them to forecasting rather than to predicting individual earthquakes.
Even the stiffest of rock is not perfectly rigid. Given a large force (such as that between two immense tectonic plates moving past each other), the Earth's crust will bend or deform. According to the elastic rebound theory of Reid (1910), eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state. In the process, energy is released in various forms, including seismic waves.[7] The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake),[8] the demonstrated existence of large strike-slip displacements of hundreds of miles shows the existence of a long-running earthquake cycle.[9]
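This accumulate-and-release cycle lends itself to a toy stick-slip computation. The sketch below is purely illustrative: the loading rate and failure threshold are arbitrary assumed values, not measurements of any real fault.

```python
# Toy stick-slip model of the elastic rebound cycle. All numbers are
# illustrative assumptions, not measured values for any real fault.
LOADING_RATE_CM_PER_YR = 3   # assumed steady slip-deficit accumulation (~3 cm/yr)
FAILURE_SLIP_CM = 600        # assumed deficit at which the fault segment breaks

def rupture_years(n_years: int) -> list[int]:
    """Accumulate strain year by year; record a 'rupture' when it hits the threshold."""
    deficit = 0
    events = []
    for year in range(n_years):
        deficit += LOADING_RATE_CM_PER_YR
        if deficit >= FAILURE_SLIP_CM:
            events.append(year)  # sudden rebound: stored elastic strain is released
            deficit = 0
    return events

print(rupture_years(1000))  # [199, 399, 599, 799, 999]: a quasi-periodic cycle
```

With these assumed numbers the model ruptures about every 200 years, mimicking the quasi-periodic cycle described above; real faults are far less regular.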
The most studied earthquake faults (such as the Nankai megathrust, the Wasatch fault, and the San Andreas Fault) appear to have distinct segments. The characteristic earthquake model postulates that earthquakes are generally constrained within these segments.[10] As the lengths and other properties[11] of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture), and the amount of accumulated strain needed to rupture the fault segment. Since continuous plate motions cause the strain to accumulate steadily, seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals.[12] For a given fault segment, identifying these characteristic earthquakes and estimating their recurrence rate (or, conversely, their return period) should therefore inform us about the next rupture; this is the approach generally used in forecasting seismic hazard. Return periods are also used for forecasting other rare events, such as cyclones and floods, and assume that future frequency will be similar to observed frequency to date.
Extrapolation from the Parkfield earthquakes of 1857, 1881, 1901, 1922, 1934, and 1966 led to a forecast of an earthquake around 1988, or before 1993 at the latest (at the 95% confidence level), based on the characteristic earthquake model.[13] Instrumentation was put in place in hopes of detecting precursors of the anticipated earthquake. However, the forecasted earthquake did not occur until 2004. The failure of the Parkfield prediction experiment has raised doubt as to the validity of the characteristic earthquake model itself.[14]
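The extrapolation behind that forecast is simple interval arithmetic. In the sketch below, the event years come from the text; averaging the intervals with a simple mean is an illustrative assumption about the method, which in practice involved a more careful statistical treatment.

```python
# Recurrence-interval arithmetic behind the Parkfield forecast.
# Event years are from the text; the simple-mean extrapolation is illustrative.
events = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(events, events[1:])]  # [24, 20, 21, 12, 32]
mean_interval = sum(intervals) / len(intervals)          # 21.8 years
print(f"forecast: ~{events[-1] + mean_interval:.0f}")    # forecast: ~1988
```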
At the contact where two tectonic plates slip past each other, every section must eventually slip, as (in the long term) none get left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model, the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest.[15] This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific (Pacific Rim) forecasts in 1979 and 1989–1991.[16]
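Computationally, the gap model amounts to ranking fault segments by elapsed time since their last rupture. The minimal sketch below uses hypothetical segment names and dates purely for illustration.

```python
# Minimal sketch of the seismic gap idea: the longest-quiet segment is flagged
# as the likeliest site of the next rupture. Names and years are hypothetical.
last_rupture = {"segment A": 1906, "segment B": 1857, "segment C": 1989}
now = 2024
for name, year in sorted(last_rupture.items(), key=lambda kv: kv[1]):
    print(f"{name}: {now - year} years of unrelieved strain accumulation")
# segment B (167 quiet years) is flagged first; the tests discussed below
# found that this core assumption performs poorly in practice.
```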
However, some underlying assumptions about seismic gaps are now known to be incorrect. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region";[17] statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well".[18] Another study concluded that a long quiet period did not increase earthquake potential.[19]
The 2015 Uniform California Earthquake Rupture Forecast, Version 3, or UCERF3, is the latest official earthquake rupture forecast (ERF) for the state of California, superseding UCERF2. It provides authoritative estimates of the likelihood and severity of potentially damaging earthquake ruptures over the near and long term. Combining this with ground motion models produces estimates of the severity of ground shaking that can be expected during a given period (seismic hazard), and of the threat to the built environment (seismic risk). This information is used to inform engineering design and building codes, planning for disaster, and evaluating whether earthquake insurance premiums are sufficient for the prospective losses.[20] A variety of hazard metrics[21] can be calculated with UCERF3; a typical metric is the likelihood of a magnitude[22] M 6.7 earthquake (the size of the 1994 Northridge earthquake) in the 30 years (typical life of a mortgage) since 2014.
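One common way a 30-year likelihood can be derived from a long-term rupture rate is a time-independent (Poisson) conversion, sketched below. The rate used is a made-up placeholder, not a UCERF3 value, and UCERF3's actual models are considerably more elaborate.

```python
import math

# Converting a long-term rupture rate into a 30-year probability.
# The rate is a hypothetical placeholder, not taken from UCERF3.
annual_rate = 1 / 150  # assume one M >= 6.7 event per 150 years on average
years = 30             # the mortgage-length window mentioned in the text
p = 1 - math.exp(-annual_rate * years)  # time-independent Poisson occurrence
print(f"P(at least one M>=6.7 in {years} yr) = {p:.1%}")  # 18.1%
```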
UCERF3 was prepared by the Working Group on California Earthquake Probabilities (WGCEP), a collaboration between the United States Geological Survey (USGS), the California Geological Survey (CGS), and the Southern California Earthquake Center (SCEC), with significant funding from the California Earthquake Authority (CEA).[23]
Geller, Robert J. (December 1997). "Earthquake prediction: a critical review". Geophysical Journal International. 131 (3): 425–450. Bibcode:1997GeoJI.131..425G. doi:10.1111/j.1365-246X.1997.tb06588.x. http://gji.oxfordjournals.org/content/131/3/425.full.pdf
Kagan, Yan Y.; Jackson, David D. (27 May 1996). "Statistical tests of VAN earthquake predictions: comments and reflections". Geophysical Research Letters. 23 (11): 1433–1436. Bibcode:1996GeoRL..23.1433K. doi:10.1029/95GL03786. ftp://minotaur.ess.ucla.edu/pub/kagan/save/kjvan.pdf
Original source: https://en.wikipedia.org/wiki/Earthquake_forecasting