Didier Sornette
Born: June 25, 1957, Paris, France
Nationality: French
Alma mater: École Normale Supérieure (1977–1981); University of Nice (1980–1985)
Known for: Prediction of crises and extreme events in complex systems, physical modeling of earthquakes, physics of complex systems and pattern formation in spatio-temporal structures
Awards: Science et Défense French National Award; 2000 Research McDonnell award; Risques-Les Echos prize 2002 for predictability of catastrophic events
Scientific career
Fields: Physics, geophysics, complex systems, economics, finance
Institutions: Swiss Federal Institute of Technology Zurich, Swiss Finance Institute, UCLA, CNRS
Didier Sornette (born June 25, 1957, in Paris) is a French researcher studying subjects including complex systems and risk management. He is Professor on the Chair of Entrepreneurial Risks at the Swiss Federal Institute of Technology Zurich (ETH Zurich) and is also a professor of the Swiss Finance Institute. He was previously Professor of Geophysics at the University of California, Los Angeles (UCLA) (1996–2006) and a Research Professor at the French National Centre for Scientific Research (CNRS) (1981–2006).
With his long-time collaborator Guy Ouillon, Sornette has led a research group on the "Physics of earthquakes" for more than 25 years. The group is active in the modelling of earthquakes, landslides and other natural hazards, combining concepts and tools from statistical physics, statistics, tectonics, seismology and other fields. First located at the Laboratory of Condensed Matter Physics (University of Nice, France), then at the Earth and Space Department (UCLA, USA), the group has been at ETH Zurich (Switzerland) since March 2006.
The group has tackled the problem of earthquake and rupture prediction since the mid-1990s within the broader physical concept of critical phenomena.[1] Viewing rupture as a second-order phase transition predicts that, as rupture approaches, the spatial correlation length of stress and damage increases.[2] This in turn leads to a power-law acceleration of moment and strain release, up to the macroscopic failure time of the sample (i.e. a large earthquake in nature). This prediction has been checked on a variety of natural and industrial/laboratory data, over a wide range of scales (laboratory samples, mines, the California earthquake catalog) and under different loading conditions of the system (constant stress rate, constant strain rate). The most puzzling observation is that the critical power-law acceleration is decorated by log-periodic oscillations, suggesting a universal scaling ratio close to 2.2. The existence of such oscillations stems from interactions between seismogenic structures (see below for the case of faults and fractures), but also offers a better constraint to identify areas within which a large event may occur. The concept of critical piezo-electricity in polycrystals[3][4][5] has been applied to the Earth's crust.[6]
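The flavour of the time-to-failure analysis described above can be illustrated with a minimal sketch: fit a power-law acceleration of cumulative strain release and read off the estimated critical time. The data, function names and parameter values below are illustrative assumptions, not taken from the group's publications.

```python
# Minimal sketch: fit a time-to-failure power law to synthetic cumulative
# strain release, epsilon(t) = A - B * (tc - t)**m  (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def cumulative_strain(t, A, B, tc, m):
    return A - B * np.clip(tc - t, 1e-9, None) ** m

# Synthetic "observations" approaching a failure time tc_true = 10.0
tc_true, A_true, B_true, m_true = 10.0, 5.0, 2.0, 0.3
t = np.linspace(0.0, 9.5, 200)
eps = cumulative_strain(t, A_true, B_true, tc_true, m_true)
eps += rng.normal(scale=0.02, size=t.size)          # measurement noise

# Fit A, B, tc, m; tc is the quantity of interest (predicted failure time)
p0 = [eps[-1], 1.0, 9.8, 0.5]
popt, _ = curve_fit(cumulative_strain, t, eps, p0=p0, maxfev=20000)
print("estimated critical time tc = %.2f (true %.2f)" % (popt[2], tc_true))
```

In the log-periodic refinement mentioned above, the pure power law would additionally be modulated by oscillations in the logarithm of the time to failure.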
Earthquake forecasting differs from prediction in the sense that no alarm is issued; instead, a time-dependent probability of earthquake occurrence is estimated. Sornette's group has contributed significantly to the theoretical development and the study of the properties of the now standard Epidemic Type Aftershock Sequence (ETAS) model.[7] In a nutshell, this model states that each event triggers its own direct aftershocks, which themselves trigger their own aftershocks, and so on. The consequence is that events can no longer be labeled as foreshocks, mainshocks or aftershocks, as they can be all of these at the same time (with different levels of probability). In this model, the probability for an event to trigger another one depends primarily on the space and time distances separating them, as well as on the magnitude of the triggering event, so that seismicity is governed by a set of seven parameters. Sornette's group is currently pushing the model to its limits by allowing space and time variations of its parameters.[8] Despite the fact that this new model achieves better forecasting scores than any competing model, it is not sufficient for systematic reliable predictions. The main reason is that the model predicts future seismicity rates quite accurately, but fails to put constraints on the magnitudes (which are assumed to be distributed according to the Gutenberg-Richter law, and to be independent of each other). Other seismic or non-seismic precursors are thus required to further improve those forecasts. According to the ETAS model, the rate of triggered activity around a given event behaves isotropically. This over-simplified assumption has recently been relaxed by coupling the statistics of ETAS with genuine mechanical information. This is done by modelling the stress perturbation due to a given event on its surroundings, and correlating it with the space-time rate of subsequent activity as a function of the transferred stress amplitude and sign. This suggests that the triggering of aftershocks stems from a combination of dynamic (seismic waves) and elasto-static processes. Another clear result of this work is that the Earth's crust in Southern California has quite a short memory of past stress fluctuations, lasting only about 3 to 4 months.[9] This may put more constraints on the time window within which one may look for both seismic and non-seismic precursors.
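The branching logic of the ETAS model summarised above (each event triggers its own aftershocks, which trigger theirs) can be sketched in a few lines. The sketch below is temporal only (the spatial kernel is omitted), and the parameter values mu, K, alpha, c, p and b are illustrative placeholders, not calibrated values.

```python
# Minimal temporal ETAS sketch: each event of magnitude m triggers a Poisson
# number of direct aftershocks with Omori-distributed delays; those trigger
# their own, and so on.  Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
mu, T = 0.2, 1000.0          # background rate (events/day), catalog length
m0, b = 3.0, 1.0             # magnitude cutoff and Gutenberg-Richter b-value
K, alpha, c, p = 0.02, 0.8, 0.01, 1.2   # productivity and Omori parameters

def gr_magnitude(n):
    # Gutenberg-Richter: exponential distribution of m - m0 with rate b*ln(10)
    return m0 + rng.exponential(1.0 / (b * np.log(10.0)), size=n)

def omori_delays(n):
    # Inverse-CDF sampling of the normalised Omori kernel ~ (t + c)^(-p), p > 1
    u = rng.random(n)
    return c * ((1.0 - u) ** (1.0 / (1.0 - p)) - 1.0)

# Background (immigrant) events: homogeneous Poisson process in time
n_bg = rng.poisson(mu * T)
events = list(zip(rng.uniform(0, T, n_bg), gr_magnitude(n_bg)))

queue = list(events)
while queue:
    t_par, m_par = queue.pop()
    n_kids = rng.poisson(K * 10.0 ** (alpha * (m_par - m0)))
    for dt, m in zip(omori_delays(n_kids), gr_magnitude(n_kids)):
        t_child = t_par + dt
        if t_child < T:
            events.append((t_child, m))
            queue.append((t_child, m))

print(f"{n_bg} background events triggered a catalog of {len(events)} events")
```

In this cascade every event is simultaneously a potential mainshock and a potential aftershock, which is exactly why the foreshock/mainshock/aftershock labels lose their absolute meaning.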
Ouillon and Sornette have developed a pure statistical physics model of earthquake interaction and triggering, aiming to give a physical underpinning to the purely empirical, linear ETAS model. The basic assumption of this "multifractal stress activated" (MSA) model[10][11] is that, at any place and time, the local failure rate depends exponentially on the applied stress. The second key ingredient is to recognize that, in the Earth's crust, the local stress field is the sum of the large-scale, far-field stress due to plate motion and of all the stress fluctuations due to past earthquakes. As elastic stresses add up, the exponentiation makes this model nonlinear. Solving it analytically allowed them to predict that each event triggers aftershocks with a rate decaying in time according to the Omori law, i.e. as 1/t^p, but with a special twist that had not been recognized before: the unique prediction of the MSA model is that the exponent p is not constant (close to 1) but increases linearly with the magnitude of the mainshock. Statistical analyses of various catalogs (California, Japan, Taiwan, Harvard CMT) have been carried out to test this prediction, and confirmed it using different statistical techniques (stacking to improve the signal-to-noise ratio, wavelets specifically devised for multiscale analysis, extreme magnitude distributions, etc.).[12][13] This result shows that small events may trigger fewer aftershocks than large ones, but that their cumulative effect may be longer-lasting in the Earth's crust. A new technique, the barycentric fixed mass method, has also recently been introduced to considerably improve the estimation of the multifractal structures of spatio-temporal seismicity expected from the MSA model.[14]
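The central MSA prediction quoted above, an Omori exponent p that grows linearly with mainshock magnitude, can be illustrated numerically. The linear law p(M) = p0 + p1·M and its coefficients below are hypothetical placeholders, not fitted values from the cited catalog studies.

```python
# Illustration of the MSA prediction that the Omori exponent p grows linearly
# with mainshock magnitude: rate(t) = K / (t + c)**p(M).
# Coefficients are placeholders, not values fitted to any real catalog.
import numpy as np

c = 0.01                       # days
p0_coef, p1_coef = 0.3, 0.12   # hypothetical linear law p(M) = p0 + p1 * M

def p_of_M(M):
    return p0_coef + p1_coef * M

def omori_rate(t, M, K=1.0):
    return K / (t + c) ** p_of_M(M)

t = np.logspace(-2, 3, 6)      # 0.01 day to about 3 years
for M in (3.0, 5.0, 7.0):
    rates = omori_rate(t, M) / omori_rate(t[0], M)   # normalised decay
    print(f"M={M}: p={p_of_M(M):.2f}, relative rate at t={t[-1]:.0f} d "
          f"= {rates[-1]:.2e}")
# Smaller mainshocks have smaller p: fewer aftershocks overall, but a slower
# decay, hence a longer-lasting cumulative contribution to seismicity.
```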
A significant part of the activity of Sornette's group has also been devoted to the statistical physics modelling of fractures and faults at different scales, as well as to their properties. These features are important because they may control various transport properties of the crust and because they represent the loci of earthquake nucleation.
Sornette and Sornette (1989)[15] suggested viewing earthquakes and global plate tectonics as self-organized critical phenomena. Since fault networks are self-organized critical systems in the sense that earthquakes occur on faults and faults grow because of earthquakes,[16][17][18] resulting in hierarchical properties, the study of their statistics should also bring information about the seismic process itself.[19] Davy, Sornette and Sornette[20][21][16][22] introduced a model of the growth pattern formation of faulting and showed that the existence of un-faulted areas is a natural consequence of the fractal organization of faulting. Cowie et al. (1993; 1995)[23][24] developed the first theoretical model encompassing both the long-range and time organization of complex fractal fault patterns and the short-time dynamics of earthquake sequences. A generic result of the model is the existence of fault competition with intermittent activity on different faults. The geometrical and dynamical complexity of faults and earthquakes is shown to result from the interplay between spatio-temporal chaos and an initially featureless quenched heterogeneity. Miltenberger et al.[25] and Sornette et al. (1994)[26] showed that self-organized criticality in earthquakes and tectonic deformations is related to the synchronization of threshold relaxation oscillators. Lee et al. (1999)[27] demonstrated the intrinsically intermittent nature of seismic activity on faults, which results from their competition to accommodate the tectonic deformation. Sornette and Pisarenko (2003) performed a rigorous statistical analysis of the distribution of plate sizes participating in plate tectonics and demonstrated the fractal nature of plate tectonics.[28]
Using a collection of maps centered at the same location but covering different scales in Saudi Arabia (from meters to hundreds of kilometers, i.e. slightly more than five decades), it was shown that joint and fault patterns display distinct spatial scaling properties within distinct ranges of scales.[29][30][31] These transition scales (which quantify the horizontal distribution of brittle structures) correlate well with the vertical mechanical layering of the host medium (the Earth's crust). In particular, fracture patterns can be shown to be rather uniform at scales smaller than the thickness of the sedimentary basin, and to become heterogeneous and multifractal at larger scales. These different regimes were discovered by designing new multifractal analysis techniques (able to take into account the small size of the datasets as well as irregular geometrical boundary conditions), and by introducing a new technique based on 2D anisotropic wavelet analysis. By mapping joints within the crystalline basement in the same area, it was found that their spatial organization (spacing distribution) displays discrete scale invariance over more than four decades.[32] Using another dataset and a theoretical model, Huang et al. also showed that, because of interactions between parallel structures, the length distribution of joints displays discrete scale invariance.[33]
Motivated by earthquake prediction and forecasting, Sornette's group has also contributed to the problem of 3D fault mapping. Given an earthquake catalog with a large number of events, the main idea is to invert for the set of planar segments that best fits this dataset.[34][35] More recently, Ouillon and Sornette developed techniques that model the spatial distribution of events using a mixture of anisotropic Gaussian kernels.[36] These approaches allow one to identify a large number of faults that are not mapped by more traditional geological techniques because they leave no signature at the surface. The reconstructed 3D fault networks correlate well with focal mechanisms, and also provide a significant gain when used as proxies for earthquake locations in forecasting experiments. As catalogs can be very large (up to half a million events for Southern California), a catalog condensation technique has been introduced, which makes it possible to detect probable repeating events and remove this redundancy.[37]
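A minimal sketch of the kernel-mixture idea behind this kind of fault reconstruction is shown below: fit anisotropic 3D Gaussians to hypocentre locations and read each component's flattest principal direction as an approximate fault-plane normal. The synthetic catalogue and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the published implementation.

```python
# Sketch: approximate fault planes from hypocentres with a mixture of
# anisotropic 3D Gaussians (illustrative, not the published algorithm).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic catalogue: two planar "faults" with location scatter (km)
def synthetic_fault(n, strike_vec, dip_vec, origin, thickness=0.3):
    u = rng.uniform(-10, 10, n)[:, None] * strike_vec
    v = rng.uniform(-5, 5, n)[:, None] * dip_vec
    noise = rng.normal(scale=thickness, size=(n, 3))
    return origin + u + v + noise

f1 = synthetic_fault(400, np.array([1, 0, 0.]), np.array([0, 0, 1.]),
                     np.array([0, 0, -5.]))
f2 = synthetic_fault(400, np.array([0.7, 0.7, 0.]), np.array([0, 0, 1.]),
                     np.array([15, 5, -7.]))
hypocentres = np.vstack([f1, f2])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(hypocentres)

for k, cov in enumerate(gmm.covariances_):
    eigval, eigvec = np.linalg.eigh(cov)
    normal = eigvec[:, 0]          # direction of smallest variance
    print(f"component {k}: estimated fault normal = {np.round(normal, 2)}")
```

In practice the number of components is unknown and events are noisy and redundant, which is where the inversion strategies and the condensation technique mentioned above come in.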
In 2016, in collaboration with Friedemann Freund (with John Scoville) at NASA Ames and GeoCosmo, Sornette (with Guy Ouillon) launched the Global Earthquake Forecasting Project (GEFS) to advance the field of earthquake prediction. The project is rooted in the rigorous theoretical and experimental solid-state physics of Freund,[38][39] whose theory can explain the whole spectrum of electromagnetic-type phenomena that have been reported before large earthquakes for decades, if not centuries: when rocks are subjected to significant stresses, electrons and positive holes are activated; the latter flow to less stressed domains of the material, generating large-scale electric currents. These in turn induce local geoelectric and geomagnetic anomalies, stimulated infrared emission, air ionization, and increased levels of ozone and carbon monoxide. All of these fluctuations are currently measured using ground stations or remote sensing technologies. There are innumerable reports of heterogeneous types of precursory phenomena, ranging from the emission of electromagnetic waves from ultralow frequency (ULF) to visible (VIS) and near-infrared (NIR) light, electric and magnetic field anomalies of various kinds (see below), all the way to unusual animal behavior.
Space and ground anomalies preceding and/or contemporaneous to earthquakes include:

Satellite component:
1. Thermal infrared (TIR) anomalies
2. Total electron content (TEC) anomalies
3. Ionospheric tomography
4. Ionospheric electric field turbulences
5. Atmospheric gravity waves (AGW)
6. CO release from the ground
7. Ozone formation at ground level
8. VLF detection of air ionization
9. Mesospheric lightning
10. Lineaments in the VIS-NIR

Ground station component:
1. Magnetic field variations
2. ULF emission from within the Earth's crust
3. Tree potentials and ground potentials
4. Soil conductivity changes
5. Groundwater chemistry changes
6. Trace gas release from the ground
7. Radon emanation from the ground
8. Air ionization at the ground surface
9. Sub-ionospheric VLF/ELF propagation
10. Nightglow
These precursory signals are intermittent and do not seem to occur systematically before every major earthquake. Researchers have not yet been able to explain and exploit them satisfactorily, in particular because they have never been analysed together. There is no worldwide repository for such data, and existing databases are most often under-exploited, analysed with overly simplistic methods or without regard to cross-correlations among them (most often because the data are acquired and owned by distinct and competing institutions). The GEFS initiative pursues the following goals: (i) initiate collaborations with many data centers across the world to unify competences; (ii) propose a collaborative platform (InnovWiki, developed at ETH Zürich) to develop a mega repository of data and analysis tools; (iii) develop and rigorously test real-time, high-dimensional multivariate algorithms to predict earthquakes (location, time and magnitude) using all available data.
In 2004, Sornette used Amazon.com sales data to create a mathematical model for predicting bestseller potential based on very early sales results.[40][41][42] This was further developed to characterise the dynamics of the success of YouTube videos.[43] This provides a general framework for analysing the precursory and aftershock properties of shocks and ruptures in finance, material rupture, earthquakes and Amazon.com sales: this work has documented ubiquitous power laws similar to the Omori law in seismology, which allow one to distinguish between external shocks and endogenous self-organization.[44]
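Since the text states that such Omori-like relaxation exponents discriminate external shocks from endogenous self-organization, a minimal sketch of the underlying measurement is shown below: fit the decay of activity after a peak as a power law and compare exponents across shocks. The data and exponent values are synthetic placeholders, not results from the cited studies.

```python
# Sketch: estimate the power-law relaxation exponent after an activity peak,
# activity(t) ~ (t - t_peak)**(-theta).  Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def synthetic_relaxation(theta, n=200):
    t = np.arange(1, n + 1, dtype=float)          # days after the peak
    return t, 1000.0 * t ** (-theta) * rng.lognormal(sigma=0.1, size=n)

def fit_exponent(t, activity):
    # Linear regression in log-log coordinates
    slope, _ = np.polyfit(np.log(t), np.log(activity), 1)
    return -slope

for label, theta in (("shock type A", 0.4), ("shock type B", 0.7)):
    t, a = synthetic_relaxation(theta)
    print(f"{label}: true exponent {theta}, fitted {fit_exponent(t, a):.2f}")
```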
With collaborators, Sornette has contributed extensively to the application and generalisation of the logistic function (and equation). Applications include tests of chaos of the discrete logistic map,[45][46] an endo-exo approach to the classification of diseases,[47][48] the introduction of delayed feedback of the population on the carrying capacity to capture punctuated evolution,[49][50] symbiosis,[51][52][53] deterministic dynamical models of regime switching between conventions and business cycles in economic systems,[54][55] the modelling of periodically collapsing bubbles,[56] and interactions between several species through mutual dependences on their carrying capacities.[57]
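Since the applications listed above include tests of chaos of the discrete logistic map, a short sketch computing its Lyapunov exponent is shown below; the parameter values are standard textbook choices, not taken from the cited papers.

```python
# Lyapunov exponent of the logistic map x_{n+1} = r * x_n * (1 - x_n):
# positive values signal chaos, negative values a stable periodic regime.
import numpy as np

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=10000):
    x = x0
    for _ in range(n_transient):           # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
    return acc / n_iter

for r in (2.8, 3.5, 3.9):
    print(f"r = {r}: lambda = {lyapunov_logistic(r):+.3f}")
# r = 2.8 (fixed point) and r = 3.5 (period 4) give lambda < 0;
# r = 3.9 gives lambda > 0, the signature of chaos.
```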
Another application is a methodology to determine the fundamental value of firms in the social networking sector, such as Facebook, Groupon, LinkedIn, Pandora Media, Twitter and Zynga, and more recently to address the question of what justifies the skyrocketing valuations of unicorn (finance) companies. The key idea proposed by Cauwels and Sornette[58] is that the revenues and profits of a social-networking firm are inherently linked to its user base through a direct channel that has no equivalent in other sectors; the growth of the number of users can be calibrated with standard logistic growth models and allows for reliable extrapolations of the size of the business at long time horizons. With their PhD student, they applied this methodology to the valuation of Zynga before its IPO and demonstrated its value by presenting ex-ante forecasts leading to a successful trading strategy.[59] A recent application to the boom of so-called "unicorns", the name given to start-ups valued at over $1 billion, such as Spotify and Snapchat, can be found in this master's thesis.[60]
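A minimal sketch of the logistic-growth calibration underlying this valuation approach is given below: fit a standard logistic curve to a user-count series and extrapolate its plateau, which then feeds into a value-per-user estimate. The user-count series and the value-per-user figure are invented for illustration, not the data or multipliers used by Cauwels and Sornette.

```python
# Sketch: calibrate logistic growth on a user-count series and extrapolate
# the plateau K (carrying capacity).  Data and numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical quarterly active users (millions)
t_obs = np.arange(12)
users = np.array([2, 3, 5, 8, 13, 20, 29, 38, 46, 52, 56, 59], dtype=float)

popt, _ = curve_fit(logistic, t_obs, users, p0=[80.0, 0.5, 6.0])
K, r, t0 = popt
print(f"estimated plateau: ~{K:.0f} M users (growth rate {r:.2f}/quarter)")

# Toy valuation: plateau users times an assumed lifetime value per user
value_per_user = 20.0          # USD, purely hypothetical
print(f"implied long-run value = ${K * value_per_user:.0f} M")
```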
He has contributed theoretical models, empirical tests and operational implementations of the detection and forecasting of financial bubbles.[61][62][63][64]
By combining (i) the economic theory of rational expectation bubbles, (ii) behavioral finance on imitation and herding of investors and traders, and (iii) the mathematical and statistical physics of bifurcations and phase transitions, he has pioneered the log-periodic power law singularity (LPPLS) model of financial bubbles. The LPPLS model considers the faster-than-exponential (power law with finite-time singularity) increase in asset prices decorated by accelerating oscillations as the main diagnostic of bubbles.[65] It embodies the effect of positive feedback loops of higher return anticipations competing with negative feedback spirals of crash expectations. The LPPLS model was first proposed in 1995 to predict the failure of critical pressure tanks on board the European Ariane rocket[66] and as a theoretical formulation of accelerating moment release to predict earthquakes.[67] The LPPLS model was then proposed as a model of financial bubbles and their bursts by Sornette, Johansen and Bouchaud[68] and independently by Feigenbaum and Freund.[69] The formal analogy between mechanical ruptures, earthquakes and financial crashes was further refined within the rational expectation bubble framework of Blanchard and Watson[70] by Johansen, Ledoit and Sornette.[71][72] This approach is now referred to in the literature as the JLS model. Recently, Sornette added the S to the LPPL acronym of "log-periodic power law" to make clear that the "power law" part should not be confused with power law distributions: indeed, the "power law" refers to the hyperbolic singularity of the form $\ln[p(t)] = A + B(t_c - t)^m$, where $\ln[p(t)]$ is the logarithm of the price at time $t$ and $t_c$ is the critical time of the end of the bubble.
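The LPPLS price trajectory described above can be written out explicitly. The sketch below evaluates a common parametrization, ln p(t) = A + B(tc − t)^m [1 + C cos(ω ln(tc − t) + φ)], for illustrative parameter values that are not taken from any calibrated bubble case study.

```python
# LPPLS log-price trajectory:
#   ln p(t) = A + B*(tc - t)**m * (1 + C*cos(omega*ln(tc - t) + phi))
# Faster-than-exponential growth decorated by accelerating log-periodic
# oscillations as t approaches tc.  Parameter values are illustrative.
import numpy as np

def lppls_log_price(t, tc, A, B, m, C, omega, phi):
    dt = np.clip(tc - t, 1e-8, None)
    return A + B * dt ** m * (1.0 + C * np.cos(omega * np.log(dt) + phi))

t = np.linspace(0.0, 0.99, 500)           # time in years; bubble ends at tc = 1
log_p = lppls_log_price(t, tc=1.0, A=7.0, B=-1.0, m=0.5,
                        C=0.1, omega=8.0, phi=0.0)
price = np.exp(log_p)
print(f"price grows from {price[0]:.0f} to {price[-1]:.0f} "
      "with accelerating oscillations as t approaches tc")
```

With B negative and 0 < m < 1, the log-price rises ever faster toward the critical time tc while the oscillations bunch up, which is the joint signature the model uses as a bubble diagnostic.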
In August 2008, in reaction to the then pervasive claim that the financial crisis could not have been foreseen, a view that he has combatted vigorously,[73] he set up the Financial Crisis Observatory.[74] The Financial Crisis Observatory (FCO) is a scientific platform aimed at testing and quantifying rigorously, in a systematic way and on a large scale, the hypothesis that financial markets exhibit a degree of inefficiency and a potential for predictability, especially during regimes when bubbles develop. The FCO has evolved from ex-post analyses of many historical bubbles and crashes to continuing ex-ante predictions of the risks of bubbles before their actual occurrence (including the US real estate bubble ending in mid-2006,[75] the oil bubble bursting in July 2008,[76] and the Chinese stock market bubbles[77][78]).
The FCO also launched the "financial bubble experiments", a design for ex-ante reports of bubbles in which the digital authentication key of a document containing the forecasts was published on the internet. The content of the document was only made public after the predicted event had passed, to avoid any possible impact of the publication of the ex-ante prediction on the final outcome. Additionally, there was full transparency through a single communication channel.[79][80][81]
Since October 2014, he has published each month with his team a Global Bubble Status Report, the FCO Cockpit, which discusses the historical evolution of bubbles in and between different asset classes and geographies. It is the result of an extensive analysis of the historical time series of approximately 430 systemic assets and 835 single stocks worldwide. The systemic assets are bond, equity and commodity indices and a selection of currency pairs. The single stocks are mainly US and European equities. The monthly FCO Cockpit reports are usually divided into two parts: the first part presents the state of the world, based on the analysis of the systemic assets, including stock and bond indices, currencies and commodities; the second part zooms in on the bubble behavior of single stocks by calculating bubble warning indicators as well as two financial strength indicators, which indicate the fundamental value of the stock and its growth capability, respectively. The stocks are the constituents of the Stoxx Europe 600, the S&P 500 and the Nasdaq 100 indices. These indicators provide a classification of the stocks into four quadrants: Quadrant 1, stocks with a strong positive bubble score and a strong value score; Quadrant 2, stocks with a strong positive bubble score and a weak value score; Quadrant 3, stocks with a strong negative bubble score and a weak value score; Quadrant 4, stocks with a strong negative bubble score and a strong financial strength score. These four quadrants are used to construct four benchmark portfolios each month, which are followed to test their performance. The goal is to establish a long track record with which to continue testing the FCO's hypotheses.
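A minimal sketch of the two-indicator classification described above is given below: given a bubble score and a value (financial-strength) score per stock, assign one of the four quadrants. The scores, threshold and sign conventions are placeholders, not the published FCO indicators.

```python
# Sketch of an FCO-cockpit-style quadrant assignment from two indicators.
# Scores and thresholds are placeholders, not the published indicators.
def quadrant(bubble_score: float, value_score: float,
             strong: float = 0.5) -> int:
    """Return 1-4 following the classification quoted in the text, 0 otherwise."""
    if bubble_score >= strong and value_score >= strong:
        return 1    # strong positive bubble, strong value
    if bubble_score >= strong and value_score < strong:
        return 2    # strong positive bubble, weak value
    if bubble_score <= -strong and value_score < strong:
        return 3    # strong negative bubble, weak value
    if bubble_score <= -strong and value_score >= strong:
        return 4    # strong negative bubble, strong financial strength
    return 0        # no strong bubble signal: unclassified

stocks = {"AAA": (0.8, 0.9), "BBB": (0.7, 0.1),
          "CCC": (-0.6, 0.2), "DDD": (-0.9, 0.8), "EEE": (0.1, 0.5)}
for name, (b, v) in stocks.items():
    print(name, "-> quadrant", quadrant(b, v))
```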
Inspired by the research of Ernst Fehr and his collaborators, Darcet and Sornette proposed that the paradox of human cooperation and altruism (in the absence of kinship, direct or indirect reciprocity) emerges naturally from an evolutionary feedback selection mechanism.[82] The corresponding generalised cost-benefit accounting equation has been tested and supported by simulations of an agent-based model mimicking the evolutionary selection pressure acting on our ancestors:[83][84] starting with a population of agents with no propensity for cooperation and altruistic punishment, simple rules of selection by survival in interacting groups lead to the emergence of a level of cooperation and altruistic punishment in agreement with experimental findings.[85]
Stimulated by Roy Baumeister's book Is There Anything Good About Men?: How Cultures Flourish by Exploiting Men (Oxford University Press, 2010), Sornette, with his PhD student M. Favre, developed a very simple agent-based model linking together quantitatively several seemingly unrelated pieces of data, such as differences between men and women, the time to our most recent common ancestors, and gender differences in the proportions of ancestors of the present human population. The question of whether men and women are innately different has occupied the attention of psychologists for over a century. Most researchers assume that evolution contributed to shaping any innate differences, presumably by means of reproductive success. Therefore, insofar as the reproductive contingencies were different for men and women, the psychological consequences and adaptations stemming from natural selection would differ by gender. For that reason, new information about gender differences in reproductive success in our biological past is valuable. Favre and Sornette showed that the highly asymmetric investment cost of reproduction between males and females, the special role of females as sole child bearers, and a high heterogeneity of males' fitnesses driven by females' selection pressure are sufficient to explain quantitatively the fact that the present human population of Earth is descended from more females than males, at a ratio of about 2:1,[86] with, however, a broad distribution of possible values (the ratio 2:1 being the median in the ensemble of populations simulated by Favre and Sornette).
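A one-generation toy simulation of the mechanism invoked here is sketched below: if fathers are drawn with heavily skewed weights (heterogeneous male fitness under female choice) while mothers are drawn nearly uniformly, the children have fewer distinct male than female ancestors. This is an illustrative sketch only, with invented population sizes and a Pareto fitness distribution; it is not the Favre–Sornette agent-based model.

```python
# Toy illustration: skewed male reproductive success yields more distinct
# female than male ancestors.  One generation only; not the published model.
import numpy as np

rng = np.random.default_rng(4)
n_children, n_mothers, n_fathers = 20000, 10000, 10000

# Mothers: nearly uniform reproductive success (sole child bearers)
mother_ids = rng.integers(0, n_mothers, size=n_children)

# Fathers: heavy-tailed "fitness" weights (female selection pressure)
weights = rng.pareto(a=1.2, size=n_fathers) + 1.0
father_ids = rng.choice(n_fathers, size=n_children, p=weights / weights.sum())

n_female_ancestors = np.unique(mother_ids).size
n_male_ancestors = np.unique(father_ids).size
print(f"distinct mothers: {n_female_ancestors}, distinct fathers: "
      f"{n_male_ancestors}, ratio = {n_female_ancestors / n_male_ancestors:.2f}")
```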
To describe the inherent sociability of Homo sapiens, the UCLA professor of anthropology Alan Fiske has theorised that all human interactions can be decomposed into just four "relational models", or elementary forms of human relations: communal sharing, authority ranking, equality matching and market pricing (to which are added the limiting cases of asocial and null interactions, whereby people do not coordinate with reference to any shared principle).[87] With M. Favre, Sornette introduced the simplest model of dyadic social interactions and established its correspondence with Fiske's relational models theory (RMT).[88] Their model is rooted in the observation that each individual in a dyadic interaction can do either the same thing as the other individual, a different thing, or nothing at all. The relationships generated by this representation aggregate into six exhaustive and disjoint categories: four of them match Fiske's relational models, while the remaining two correspond to the asocial and null interactions defined in RMT. The model can be generalised to the presence of N social actions. This mapping allows one to infer that the four relational models form an exhaustive set of all possible dyadic relationships based on social coordination, thus explaining why there could exist just four relational models.
He has developed the dragon king theory of extreme events.[89][90] The term "dragon-king" (DK) embodies a double metaphor implying that an event is both extremely large (a "king"[91]) and born of unique origins (a "dragon") relative to its peers. The hypothesis advanced in [92] is that DK events are generated by distinct mechanisms that intermittently amplify extreme events, leading to runaway disasters as well as extraordinary opportunities on the upside. He formulated the hypothesis that DKs could be detected in advance through the observation of associated precursory signs.[93][94]
Together with Monika Gisler, he introduced the social bubble hypothesis[95] in a form that can be scrutinized methodically:[96][97][98][99] strong social interactions between enthusiastic supporters of an idea/concept/project weave a network of positive feedbacks, leading to widespread endorsement and extraordinary commitment by those involved in the project, beyond what would be rationalized by a standard cost-benefit analysis.[100] The social bubble hypothesis does not imply any value judgment, notwithstanding the use of the term "bubble", which is often associated with a negative outcome. Rather, it identifies the types of dynamics that shape scientific or technological endeavors. According to the social bubble hypothesis, major projects generally proceed via a social bubble mechanism; in other words, it is claimed that most disruptive innovations go through such a social bubble dynamic.
The social bubble hypothesis is related to Schumpeter's famous notion of creative destruction and to the "techno-economic paradigm shifts" of the social economist Carlota Perez,[101][102] who studies bubbles as antecedents of such shifts. Drawing on his professional experience as a venture capitalist, William H. Janeway similarly stresses the positive role of asset bubbles in financing technological innovations.[103]
With his Russian colleague V.I. Yukalov, he has introduced "quantum decision theory",[104] with the goal of establishing a holistic theoretical framework of decision making. Based on the mathematics of Hilbert spaces, it embraces uncertainty and relies on non-additive probability for the resolution of complex choice situations with interference effects. The use of Hilbert spaces constitutes the simplest generalisation of the probability theory axiomatised by Kolmogorov[105] from real-valued probabilities to probabilities derived from complex-valued amplitudes. By its mathematical structure, quantum decision theory aims to encompass the superposition processes occurring down to the neuronal level. Numerous behavioral patterns, including those causing paradoxes within other theoretical approaches, are coherently explained by quantum decision theory.[104]
The version of quantum decision theory (QDT) developed by Yukalov and Sornette principally differs from other approaches in two respects. First, QDT is based on a self-consistent mathematical foundation that is common to both quantum measurement theory and quantum decision theory. Starting from the von Neumann (1955) theory of quantum measurements,[106] Yukalov and Sornette have generalized it to the case of uncertain or inconclusive events, making it possible to characterize uncertain measurements and uncertain prospects. Second, the main formulas of QDT are derived from general principles, giving the possibility of general quantitative predictions.
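A minimal numerical illustration of the Hilbert-space ingredient mentioned above: when the probability of a prospect is obtained as the squared modulus of a sum of complex amplitudes, an interference term appears and probabilities become non-additive. The amplitudes psi1 and psi2 below are arbitrary; this is a generic illustration of quantum-style probability, not the full Yukalov–Sornette formalism.

```python
# Illustration of interference in Hilbert-space probabilities: the probability
# of a composite prospect is |psi1 + psi2|^2, which differs from
# |psi1|^2 + |psi2|^2 by an interference term 2*Re(psi1 * conj(psi2)).
# Generic illustration only, not the full quantum decision theory formalism.
import numpy as np

psi1 = 0.6 * np.exp(1j * 0.0)      # amplitude of the prospect in context 1
psi2 = 0.5 * np.exp(1j * 2.2)      # amplitude of the prospect in context 2

p_additive = abs(psi1) ** 2 + abs(psi2) ** 2
p_quantum = abs(psi1 + psi2) ** 2
interference = 2.0 * np.real(psi1 * np.conj(psi2))

print(f"additive probability     : {p_additive:.3f}")
print(f"with interference term   : {p_quantum:.3f}")
print(f"interference contribution: {interference:+.3f}")
```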
With Wei-Xing Zhou, he has introduced the "thermal optimal path" (TOP) method to quantify the dynamical evolution of lead-lag structures between two time series. The method consists of constructing a distance matrix based on the matching of all sample data pairs between the two time series, as in recurrence plots. The lag-lead structure is then searched for as the optimal path in the distance matrix landscape that minimizes the total mismatch between the two time series while obeying a one-to-one causal matching condition. The problem is solved mathematically by transfer matrix techniques, mapping the TOP method onto the problem of random directed polymers interacting with random substrates. Applications include the study of the relationships between inflation, inflation change, GDP growth rate and unemployment rate,[107][108] the volatilities of the US inflation rate versus economic growth rates,[109] the US stock market versus the Federal funds rate and Treasury bond yields,[110] and UK and US real estate versus monetary policies.[111]
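A minimal sketch of the thermal-averaging step behind this method is shown below: build the pairwise mismatch matrix between two series, accumulate Boltzmann-weighted path weights with a transfer-matrix-style recursion, and read off a weighted lead-lag estimate. The recursion, the "temperature" T and all other parameter choices are a simplified illustration under stated assumptions, not the exact published algorithm.

```python
# Simplified TOP-style sketch: build the mismatch matrix between two series,
# accumulate Boltzmann-weighted path weights with a transfer-matrix recursion,
# and read off a lead-lag estimate.  Illustrative, not the published algorithm.
import numpy as np

rng = np.random.default_rng(5)
n, true_lag, T = 100, 5, 0.5

# y lags x by true_lag time steps (plus observation noise)
x_full = np.cumsum(rng.normal(size=n + true_lag))
x = x_full[true_lag:]
y = x_full[:n] + rng.normal(scale=0.2, size=n)

dist = np.abs(x[:, None] - y[None, :])          # mismatch e(i, j)

G = np.zeros((n, n))                            # accumulated path weights
for i in range(n):
    for j in range(n):
        inflow = 1.0 if i == 0 and j == 0 else 0.0
        if i > 0:
            inflow += G[i - 1, j]
        if j > 0:
            inflow += G[i, j - 1]
        if i > 0 and j > 0:
            inflow += G[i - 1, j - 1]
        G[i, j] = inflow * np.exp(-dist[i, j] / T)

# Local lag <j - i> on each anti-diagonal i + j = s, then a global average
i_idx, j_idx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
lags = []
for s in range(n // 2, 3 * n // 2):             # central anti-diagonals
    mask = (i_idx + j_idx) == s
    w = G[mask]
    if w.sum() > 0:
        lags.append(np.sum(w * (j_idx[mask] - i_idx[mask])) / w.sum())
print(f"estimated lag = {np.mean(lags):.1f} steps (true lag {true_lag})")
```

The "temperature" T controls how sharply the weights concentrate on the single best path; at low T the procedure approaches a deterministic optimal-path search, while higher T averages over nearby paths for robustness to noise.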
A recent improvement of TOP, called TOPS (symmetric thermal optimal path), has been introduced;[111] it complements TOP by imposing that the lead-lag relationship should be invariant with respect to a time reversal of the time series after a change of sign. This means that, if 'X comes before Y', this transforms into 'Y comes before X' under a time reversal. The TOPS approach stresses the importance of accounting for changes of regime, so that similar pieces of information or policies may have drastically different impacts and developments, conditional on the economic, financial and geopolitical conditions.
In 2015, in reaction to the extraordinary pressure on the Swiss franc and the widespread view that a strong Swiss franc is a problem for Switzerland, he introduced the contrarian proposition that a strong Swiss franc is an extraordinary opportunity for Switzerland. He argues that the strong Swiss franc is the emergence (in the sense of complex adaptive systems) of the aggregate qualities of Switzerland: its political system, its infrastructure, its work organisation and ethics, its culture and much more. He proposes to "mine" Swiss francs to stabilise the exchange rate against the euro at an economically and politically agreed level (which could be around 1.20–1.25 CHF per euro) and to buy as many euros and dollars as necessary for this. The proceeds would be reinvested in a Swiss sovereign fund, which could reach a size of one trillion euros or more, following the strategies used by the Norwegian sovereign fund, the Singaporean sovereign funds and university endowment funds such as those of Harvard or Stanford. A full English version and a presentation can be found at [1]. A summary of the arguments has been presented in the German-speaking media.[112][2]