WorldWideScience

Sample records for twelve large earthquakes

  1. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce fault-surface rupture areas and peak ground shaking similar to those of their locked-fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  2. Foreshock occurrence before large earthquakes

    Science.gov (United States)

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ∼20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ∼2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia.
The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  3. Extreme value statistics and thermodynamics of earthquakes. Large earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.

  4. Extreme value statistics and thermodynamics of earthquakes: large earthquakes

    Directory of Open Access Journals (Sweden)

    B. H. Lavenda

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
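As a toy illustration of the kind of tail fitting described above, a Hill estimator can recover the Pareto tail index from the largest sample exceedances. This is a generic sketch on synthetic data, not the authors' estimation procedure, and the sample size and choice of k are invented for the example.

```python
import numpy as np

def hill_estimator(samples, k):
    """Hill estimator of the Pareto tail index from the k largest samples.

    A standard estimator for the shape of a heavy (Pareto-like) tail,
    of the sort used when fitting sample exceedance distributions."""
    x = np.sort(np.asarray(samples, dtype=float))[::-1]  # descending order
    logs = np.log(x[:k]) - np.log(x[k])  # log-ratios to the (k+1)-th largest
    return k / np.sum(logs)  # tail index alpha

# Synthetic Pareto(alpha = 1.5) "energies" via inverse-CDF sampling;
# the estimate should land close to 1.5.
rng = np.random.default_rng(0)
u = rng.random(50_000)
energies = u ** (-1.0 / 1.5)
alpha_hat = hill_estimator(energies, k=2_000)
```

The estimator is only consistent on the tail, so k must be large enough to stabilize the average but small enough to stay in the Pareto regime.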

  5. Foreshock occurrence rates before large earthquakes worldwide

    Science.gov (United States)

    Reasenberg, P.A.

    1999-01-01

    Global rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured, using earthquakes listed in the Harvard CMT catalog for the period 1978-1996. These rates are similar to those measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering, which is based on patterns of small and moderate aftershocks in California, and were found to exceed the California model by a factor of approximately 2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events a large majority, composed of events located in shallow subduction zones, registered a high foreshock rate, while a minority, located in continental thrust belts, measured a low rate. These differences may explain why previous surveys have revealed low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich.
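A foreshock occurrence rate of this kind is, at its core, the fraction of mainshocks preceded by a qualifying smaller event within a space-time window. The sketch below uses an illustrative 30-day/75-km window, not the study's actual selection criteria, and a tiny invented catalog.

```python
from math import radians, sin, cos, asin, sqrt

def _dist_km(a, b):
    """Great-circle (haversine) distance between (lat, lon) pairs, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def foreshock_rate(mainshocks, catalog, window_days=30, radius_km=75, m_fore=5.0):
    """Fraction of mainshocks preceded by at least one M >= m_fore event
    within the time window and epicentral radius. Events are (time_days,
    (lat, lon), magnitude) tuples; window and radius are illustrative."""
    hits = 0
    for t0, loc0, _ in mainshocks:
        if any(m >= m_fore and 0 < t0 - t <= window_days
               and _dist_km(loc0, loc) <= radius_km
               for t, loc, m in catalog):
            hits += 1
    return hits / len(mainshocks)

# Invented example: one mainshock with a nearby M 5.5 five days earlier,
# one without any candidate foreshock.
mainshocks = [(100.0, (35.0, 139.0), 7.0), (200.0, (0.0, 0.0), 7.2)]
catalog = [(95.0, (35.1, 139.0), 5.5)]
rate = foreshock_rate(mainshocks, catalog)  # 1 of 2 mainshocks -> 0.5
```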

  6. Surface slip during large Owens Valley earthquakes

    KAUST Repository

    Haddon, E. K.; Amos, C. B.; Zielke, Olaf; Jayko, A. S.; Burgmann, R.

    2016-01-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.

  7. Surface slip during large Owens Valley earthquakes

    KAUST Repository

    Haddon, E. K.

    2016-01-10

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.

  8. Surface slip during large Owens Valley earthquakes

    Science.gov (United States)

    Haddon, E.K.; Amos, C.B.; Zielke, O.; Jayko, Angela S.; Burgmann, R.

    2016-01-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
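The COPD construction described above amounts to summing each measurement's offset PDF on a common grid and reading off the peaks. The sketch below assumes Gaussian per-measurement PDFs and uses invented offset values clustered near hypothetical single- and two-event displacements; the study's actual PDFs are "uniquely shaped" per measurement, not Gaussian.

```python
import numpy as np

def stack_copd(offsets, sigmas, grid):
    """Stack per-measurement Gaussian PDFs into a cumulative offset
    probability distribution (COPD); peaks mark commonly repeated
    slip values along strike."""
    copd = np.zeros_like(grid)
    for mu, sd in zip(offsets, sigmas):
        copd += np.exp(-0.5 * ((grid - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return copd

# Invented measurements: four near a ~3.3 m single-event offset,
# two near a ~7.1 m cumulative (two-event) offset.
offsets = [3.1, 3.3, 3.5, 3.2, 7.0, 7.2]
sigmas = [0.3] * len(offsets)
grid = np.linspace(0.0, 10.0, 1001)
copd = stack_copd(offsets, sigmas, grid)
peak = grid[np.argmax(copd)]  # dominant (single-event) offset, in m
```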

  9. Rapid Estimates of Rupture Extent for Large Earthquakes Using Aftershocks

    Science.gov (United States)

    Polet, J.; Thio, H. K.; Kremer, M.

    2009-12-01

    The spatial distribution of aftershocks is closely linked to the rupture extent of the mainshock that preceded them, and a rapid analysis of aftershock patterns therefore has potential for use in near real-time estimates of earthquake impact. The correlation between aftershocks and slip distribution has frequently been used to estimate the fault dimensions of large historic earthquakes for which no, or insufficient, waveform data are available. With the advent of earthquake inversions that use seismic waveforms and geodetic data to constrain the slip distribution, the study of aftershocks has recently been largely focused on enhancing our understanding of the underlying mechanisms in a broader earthquake mechanics/dynamics framework. However, in a near real-time earthquake monitoring environment, in which aftershocks of large earthquakes are routinely detected and located, these data may also be effective in determining a fast estimate of the mainshock rupture area, which would aid in the rapid assessment of the impact of the earthquake. We have analyzed a considerable number of large recent earthquakes and their aftershock sequences and have developed an effective algorithm that determines the rupture extent of a mainshock from its aftershock distribution, in a fully automatic manner. The algorithm automatically removes outliers by spatial binning, and subsequently determines the best fitting “strike” of the rupture and its length by projecting the aftershock epicenters onto a set of lines that cross the mainshock epicenter with incremental azimuths. For strike-slip or large dip-slip events, for which the surface projection of the rupture is rectilinear, the calculated strike correlates well with the strike of the fault and the corresponding length, determined from the distribution of aftershocks projected onto the line, agrees well with the rupture length. In the case of a smaller dip-slip rupture with an aspect ratio closer to 1, the procedure gives a measure
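The projection step the abstract describes can be sketched as follows: sweep trial azimuths through the mainshock, project aftershock epicenters onto each trial line, and take the azimuth maximizing the projected spread as the strike, with that spread as the length. This is a simplified sketch under stated assumptions (planar km coordinates, no outlier binning), not the authors' full algorithm.

```python
import numpy as np

def strike_and_length(epicenters, mainshock, n_azimuths=36):
    """Estimate rupture strike (degrees from north) and length (km) by
    projecting aftershock epicenters onto lines of incremental azimuth
    through the mainshock epicenter. Coordinates are (east_km, north_km);
    the spatial-binning outlier removal of the full method is omitted."""
    rel = np.asarray(epicenters, float) - np.asarray(mainshock, float)
    best_strike, best_spread = 0.0, -1.0
    for az_deg in np.arange(0.0, 180.0, 180.0 / n_azimuths):
        az = np.deg2rad(az_deg)
        u = np.array([np.sin(az), np.cos(az)])  # unit vector along trial strike
        s = rel @ u                             # projected positions on the line
        spread = s.max() - s.min()
        if spread > best_spread:
            best_strike, best_spread = az_deg, spread
    return best_strike, best_spread

# Synthetic aftershocks along a 45-degree strike, 80 km end to end.
t = np.linspace(-40.0, 40.0, 41)
eq = np.column_stack([t * np.sin(np.deg2rad(45.0)), t * np.cos(np.deg2rad(45.0))])
strike, length = strike_and_length(eq, (0.0, 0.0))
```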

  10. Large earthquake rates from geologic, geodetic, and seismological perspectives

    Science.gov (United States)

    Jackson, D. D.

    2017-12-01

    Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology. Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so that integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include temporal behavior of seismic and tectonic moment rates; shape of the earthquake magnitude distribution; upper magnitude limit; scaling between rupture length, width, and displacement; depth dependence of stress coupling; value of crustal rigidity; and relation between faults at depth and their surface fault traces, to name just a few. In this report I'll examine the quantitative implications of these assumptions for estimating large earthquake rates.
Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes

  11. Deeper penetration of large earthquakes on seismically quiescent faults.

    Science.gov (United States)

    Jiang, Junle; Lapusta, Nadia

    2016-06-10

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard. Copyright © 2016, American Association for the Advancement of Science.

  12. Assigning probability gain for precursors of four large Chinese earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Cao, T.; Aki, K.

    1983-03-10

    We extend the concept of probability gain associated with a precursor (Aki, 1981) to a set of precursors which may be mutually dependent. Making use of a new formula, we derive a criterion for selecting precursors from a given data set in order to calculate the probability gain. The probabilities per unit time immediately before four large Chinese earthquakes are calculated. They are approximately 0.09, 0.09, 0.07 and 0.08 per day for 1975 Haicheng (M = 7.3), 1976 Tangshan (M = 7.8), 1976 Longling (M = 7.6), and Songpan (M = 7.2) earthquakes, respectively. These results are encouraging because they suggest that the investigated precursory phenomena may have included the complete information for earthquake prediction, at least for the above earthquakes. With this method, the step-by-step approach to prediction used in China may be quantified in terms of the probability of earthquake occurrence. The ln P versus t curve (where P is the probability of earthquake occurrence at time t) shows that ln P does not increase with t linearly but more rapidly as the time of earthquake approaches.
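In the original (Aki, 1981) framing, each independent precursor multiplies the background occurrence rate by its probability gain. The sketch below implements only that independent-precursor baseline with invented numbers; the paper's actual contribution, a formula handling mutually dependent precursors, requires joint statistics not reproduced here.

```python
def combined_probability(base_rate, gains):
    """Probability of earthquake occurrence per unit time given several
    precursors, assuming (for this sketch) mutually independent precursors:
    each one multiplies the background rate by its probability gain."""
    p = base_rate
    for g in gains:
        p *= g
    return p

# Hypothetical numbers: background rate 1e-4 per day, three observed
# precursors with gains 30, 10, and 3.
p = combined_probability(1e-4, [30, 10, 3])  # 0.09 per day
```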

  13. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    Science.gov (United States)

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a regular basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  14. Global variations of large megathrust earthquake rupture characteristics

    Science.gov (United States)

    Kanamori, Hiroo

    2018-01-01

    Despite the surge of great earthquakes along subduction zones over the last decade and advances in observations and analysis techniques, it remains unclear whether earthquake complexity is primarily controlled by persistent fault properties or by dynamics of the failure process. We introduce the radiated energy enhancement factor (REEF), given by the ratio of an event’s directly measured radiated energy to the calculated minimum radiated energy for a source with the same seismic moment and duration, to quantify the rupture complexity. The REEF measurements for 119 large [moment magnitude (Mw) 7.0 to 9.2] megathrust earthquakes distributed globally show marked systematic regional patterns, suggesting that the rupture complexity is strongly influenced by persistent geological factors. We characterize this as the existence of smooth and rough rupture patches with varying interpatch separation, along with failure dynamics producing triggering interactions that augment the regional influences on large events. We present an improved asperity scenario incorporating both effects and categorize global subduction zones and great earthquakes based on their REEF values and slip patterns. Giant earthquakes rupturing over several hundred kilometers can occur in regions with low-REEF patches and small interpatch spacing, such as for the 1960 Chile, 1964 Alaska, and 2011 Tohoku earthquakes, or in regions with high-REEF patches and large interpatch spacing as in the case for the 2004 Sumatra and 1906 Ecuador-Colombia earthquakes. Thus, combining seismic magnitude Mw and REEF, we provide a quantitative framework to better represent the span of rupture characteristics of great earthquakes and to understand global seismicity. PMID:29750186

  15. Magnitude Estimation for Large Earthquakes from Borehole Recordings

    Science.gov (United States)

    Eshaghi, A.; Tiampo, K. F.; Ghofrani, H.; Atkinson, G.

    2012-12-01

    We present a simple and fast magnitude determination technique for earthquake and tsunami early warning systems, based on ground motion prediction equations (GMPEs) for Japan. This method incorporates borehole strong motion records provided by the Kiban Kyoshin network (KiK-net) stations. We analyzed strong ground motion data from large magnitude earthquakes (5.0 ≤ M ≤ 8.1) with focal depths < 50 km and epicentral distances of up to 400 km from 1996 to 2010. Using both peak ground acceleration (PGA) and peak ground velocity (PGV), we derived GMPEs for Japan. These GMPEs are used as the basis for regional magnitude determination. Predicted magnitudes from PGA values (Mpga) and from PGV values (Mpgv) were defined. Mpga and Mpgv correlate strongly with the moment magnitude of the event, provided sufficient records for each event are available. The results show that Mpgv has a smaller standard deviation than Mpga when compared with the estimated magnitudes, and it provides a more accurate early assessment of earthquake magnitude. We test this new method by estimating the magnitude of the 2011 Tohoku earthquake and present the results of this estimation. PGA and PGV from borehole recordings allow us to estimate the magnitude of this event 156 s and 105 s after the earthquake onset, respectively. We demonstrate that the incorporation of borehole strong ground-motion records immediately available after the occurrence of large earthquakes significantly increases the accuracy of earthquake magnitude estimation and the associated improvement in earthquake and tsunami early warning system performance. Moment magnitude versus predicted magnitude (Mpga and Mpgv).
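Magnitude determination from a GMPE is an inversion: given a station's observed peak motion and its distance, solve the regression for M, then average across stations. The functional form and coefficients below are illustrative placeholders, not the values regressed from the KiK-net data in the abstract.

```python
import numpy as np

def magnitude_from_pgv(pgv_cm_s, dist_km, a=0.58, b=-1.36):
    """Invert a simple illustrative GMPE of the form
        log10(PGV) = a*M + b - log10(R)
    for magnitude M, given one station's PGV and hypocentral distance R.
    Coefficients a, b are placeholders, not the study's regression values."""
    return (np.log10(pgv_cm_s) + np.log10(dist_km) - b) / a

def event_magnitude(observations):
    """Average per-station estimates (an Mpgv-style event magnitude)."""
    return float(np.mean([magnitude_from_pgv(p, r) for p, r in observations]))

# Forward-model synthetic PGVs for an M 7.0 event at three distances,
# then check that the inversion recovers the input magnitude.
obs = [(10 ** (0.58 * 7.0 - 1.36 - np.log10(r)), r) for r in (50.0, 100.0, 200.0)]
m = event_magnitude(obs)
```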

  16. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    Science.gov (United States)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.

  17. Automated Determination of Magnitude and Source Length of Large Earthquakes

    Science.gov (United States)

    Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.

    2017-12-01

    Rapid determination of earthquake magnitude is of importance for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating magnitude for great earthquakes within minutes of origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave trains including P, S, and surface phases, which take tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for fast estimation of earthquake size. Besides these methods that involve Green's functions and inversions, there are other approaches that use empirical relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach that originated from Hara [2007], estimating magnitude by considering P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. Firstly, the source duration can be accurately determined by the seismic array. Secondly, the results can be more rapidly calculated, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information of large shallow seismic events based on real time data of a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of magnitudes and rupture extensions of large earthquakes in 6 to 13 min (plus
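The Hara-style estimate combines maximum P-wave displacement, epicentral distance, and source duration in a log-linear empirical relation. The coefficients below are placeholders for illustration; any real application would use the calibrated regression constants from the published study, and the duration would come from a method such as the back-projection the abstract describes.

```python
import numpy as np

def magnitude_hara_style(p_disp_m, dist_km, duration_s,
                         a=0.79, b=0.83, c=0.69, d=6.47):
    """Empirical magnitude in the spirit of Hara [2007]:
        M = a*log10(Pd) + b*log10(t) + c*log10(R) + d
    where Pd is maximum P-wave displacement (m), t is source duration (s),
    and R is epicentral distance (km). Coefficients here are illustrative
    placeholders, not the calibrated regression values."""
    return (a * np.log10(p_disp_m) + b * np.log10(duration_s)
            + c * np.log10(dist_km) + d)
```

Because a and b are positive, the estimate grows with both displacement amplitude and duration, which is what lets a long-duration great earthquake avoid the saturation that afflicts short-window magnitudes.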

  18. Geotechnical hazards from large earthquakes and heavy rainfalls

    CERN Document Server

    Kazama, Motoki; Lee, Wei

    2017-01-01

    This book is a collection of papers presented at the International Workshop on Geotechnical Natural Hazards held July 12–15, 2014, in Kitakyushu, Japan. The workshop was the sixth in the series of Japan–Taiwan Joint Workshops on Geotechnical Hazards from Large Earthquakes and Heavy Rainfalls, held under the auspices of the Asian Technical Committee No. 3 on Geotechnology for Natural Hazards of the International Society for Soil Mechanics and Geotechnical Engineering. It was co-organized by the Japanese Geotechnical Society and the Taiwanese Geotechnical Society. The contents of this book focus on geotechnical and natural hazard-related issues in Asia such as earthquakes, tsunami, rainfall-induced debris flows, slope failures, and landslides. The book contains the latest information and mitigation technology on earthquake- and rainfall-induced geotechnical natural hazards. By dissemination of the latest state-of-the-art research in the area, the information contained in this book will help researchers, des...

  19. The Quanzhou large earthquake: environment impact and deep process

    Science.gov (United States)

    WANG, Y.; Gao*, R.; Ye, Z.; Wang, C.

    2017-12-01

    The Quanzhou earthquake is the largest historical earthquake along China's southeast coast. The ancient city of Quanzhou and its adjacent areas suffered serious damage. Analysis of the impact of the Quanzhou earthquake on human activities, the ecological environment, and social development will provide an example for research on environment-human interaction. According to historical records, on the night of December 29, 1604, a Ms 8.0 earthquake occurred in the sea area east of Quanzhou (25.0°N, 119.5°E) with a focal depth of 25 kilometers. It affected areas up to a maximum distance of 220 kilometers from the epicenter and caused serious damage. Quanzhou, known as one of the world's largest trade ports during the Song and Yuan periods, was heavily damaged by this earthquake. The destruction of the ancient city was severe and widespread. City walls collapsed in Putian, Nanan, Tongan, and other places. The East and West Towers of Kaiyuan Temple, famous for their magnificent architecture, were seriously damaged. An enormous earthquake can therefore exert devastating effects on human activities and social development. It is estimated that an earthquake of more than Ms 5.0 in the economically developed coastal areas of China can directly cause economic losses of more than one hundred million yuan. This devastating large earthquake that severely destroyed the city of Quanzhou was triggered under a tectonic-extensional circumstance. In this coastal area of Fujian Province, the crust gradually thins eastward from inland to coast (less than 29 km thick beneath the coast), the lithosphere is also rather thin (60-70 km), and the Poisson's ratio of the crust here appears relatively high. The historical Quanzhou earthquake was probably correlated with the NE-striking Littoral Fault Zone, which is characterized by right-lateral slip and exhibits the most active seismicity in the coastal area of Fujian. Meanwhile, tectonic

  20. The seismic cycles of large Romanian earthquake: The physical foundation, and the next large earthquake in Vrancea

    International Nuclear Information System (INIS)

    Purcaru, G.

    2002-01-01

    The occurrence patterns of large/great earthquakes at subduction zone interfaces and in-slab are complex in their space-time dynamics, making even long-term forecasts very difficult. For some favourable cases where a predictive (empirical) law was found, successful predictions were possible (e.g. Aleutians, Kuriles). For the large Romanian events (M > 6.7), occurring in the Vrancea seismic slab below 60 km, Purcaru (1974) first found the law of occurrence time and magnitude: the law of 'quasicycles' and 'supercycles', for large and largest events (M > 7.25), respectively. Purcaru's quantitative model with these seismic cycles has three time-bands (periods of large earthquakes) per century, discovered using the (however incomplete) earthquake history (1100-1973) of large Vrancea earthquakes for which M was initially estimated (Purcaru, 1974, 1979). Our long-term prediction model is essentially quasideterministic; it predicts uniquely the time and magnitude, but since it is not strictly deterministic, the forecast is interval-valued. It predicted the next large earthquake for 1980, in the 3rd time-band (1970-1990); the event occurred in 1977 (M 7.1, Mw 7.5). The prediction was thus successful in the long-term sense. We discuss the unpredicted events of 1986 and 1990. Since the laws are phenomenological, we give their physical foundation based on the large scale of the rupture zone (RZ) and the subscale of the rupture process (RP). First results show that: (1) the 1940 event (h = 122 km) ruptured the lower part of the oceanic slab entirely along strike and down dip, and similarly for 1977 but its upper part; (2) the RZs of the 1977 and 1990 events overlap, and the first asperity of the 1977 event was rebroken in 1990. This shows that the size of the events strongly depends on RZ, asperity size/strength, and thus on the failure stress level (FSL), but not on depth; (3) when the FSL of high-strength (HS) larger zones is critical, the largest events (e.g. 
1802, 1940) occur, thus explaining the supercycles (the 1940

  1. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    Science.gov (United States)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of Mλ ≥ 7.0 and Mλ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be Mσ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with Mλ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with Mλ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 Mλ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 Mλ ≥ 7.0 in each catalog and
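The natural-time event count described above can be computed directly from an ordered catalog: walk the events in time order and count the small events between successive large ones. A minimal sketch, using the paper's magnitude thresholds and a toy catalog invented for illustration:

```python
def natural_time_counts(magnitudes, m_small=5.1, m_large=7.0):
    """Interevent counts in natural time: number of small (M >= m_small)
    events between successive large (M >= m_large) events, in catalog order."""
    counts, n, seen_large = [], 0, False
    for m in magnitudes:
        if m >= m_large:
            if seen_large:
                counts.append(n)   # close the interval ending at this large event
            seen_large = True
            n = 0
        elif m >= m_small:
            n += 1                 # below-threshold events are ignored entirely
    return counts

# Toy catalog (magnitudes in time order), invented for illustration:
cat = [5.3, 7.1, 5.2, 5.5, 4.9, 7.4, 5.1, 8.0]
print(natural_time_counts(cat))  # -> [2, 1]
```

Because the count resets at every large event, aftershock sequences simply inflate individual counts rather than requiring any clock-time declustering, which is the advantage the abstract emphasizes.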

  2. The Strain Energy, Seismic Moment and Magnitudes of Large Earthquakes

    Science.gov (United States)

    Purcaru, G.

    2004-12-01

    The strain energy Est, as potential energy, released by an earthquake and the seismic moment Mo are two fundamental physical earthquake parameters. The earthquake rupture process "represents" the release of the accumulated Est. The moment Mo, first obtained in 1966 by Aki, revolutionized the quantification of earthquake size and led to the elimination of the limitations of the conventional magnitudes (originally ML, Richter, 1930) mb, Ms, m, MGR. Both Mo and Est, though not in a 1-to-1 correspondence, are uniform measures of size, although Est is presently less accurate than Mo. Est is partitioned into seismic (Es), fracture (Eg), and frictional (Ef) energy, and Ef is lost as frictional heat. The available Est = Es + Eg (see Aki and Richards (1980), Kostrov and Das (1988) for fundamentals on Mo and Est). Related to Mo, Est, and Es, several modern magnitudes were defined under various assumptions: the moment magnitude Mw (Kanamori, 1977), strain energy magnitude ME (Purcaru and Berckhemer, 1978), tsunami magnitude Mt (Abe, 1979), mantle magnitude Mm (Okal and Talandier, 1987), seismic energy magnitude Me (Choy and Boatwright, 1995; Yanovskaya et al., 1996), and body-wave magnitude Mpw (Tsuboi et al., 1998). The available Est = (1/2μ)Δσ Mo, where Δσ is the average stress drop, and ME = (2/3)(log Mo + log(Δσ/μ) − 12.1), with log Est = 11.8 + 1.5 ME. The estimation of Est was modified to include the Mo, Δσ, and μ of predominant high-slip zones (asperities) to account for multiple events (Purcaru, 1997): Est = (1/2) Σi (1/μi) Mo,i Δσi, with Σi Mo,i = Mo. We derived the energy balance of Est, Es, and Eg as: Est/Mo = (1 + e(g,s)) Es/Mo, where e(g,s) = Eg/Es. We analyzed a set of about 90 large earthquakes and found that, depending on the goal, these magnitudes quantify the rupture process differently, thus providing complementary means of earthquake characterization. Results for some
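The two formulas above are mutually consistent: the constant offset 11.8 − 12.1 = −0.3 ≈ log10(1/2) absorbs the factor 1/2 in Est = (1/2μ)Δσ Mo, so computing Est directly and via ME must agree to within rounding. A quick numerical check in cgs units (the 10⁻⁴ value for Δσ/μ is a typical illustrative ratio, roughly 3 MPa / 30 GPa, not a value from the paper):

```python
import math

def strain_energy(m0_dyne_cm, stress_drop_over_mu):
    # E_st = (1/(2*mu)) * delta_sigma * M0, in erg (cgs units)
    return 0.5 * stress_drop_over_mu * m0_dyne_cm

def me_magnitude(m0_dyne_cm, stress_drop_over_mu):
    # M_E = (2/3)(log10 M0 + log10(delta_sigma/mu) - 12.1)
    return (2.0 / 3.0) * (math.log10(m0_dyne_cm)
                          + math.log10(stress_drop_over_mu) - 12.1)

m0 = 1.0e27        # dyne-cm
ds_mu = 1.0e-4     # illustrative delta_sigma/mu
est_direct = strain_energy(m0, ds_mu)
est_from_me = 10 ** (11.8 + 1.5 * me_magnitude(m0, ds_mu))
```

For this example ME ≈ 7.27 and both routes give Est ≈ 5 × 10²² erg, differing only because 0.3 is a rounded log10(2).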

  3. Relations between source parameters for large Persian earthquakes

    Directory of Open Access Journals (Sweden)

    Majid Nemati

    2015-11-01

    Empirical relationships for magnitude scales and fault parameters were produced using 436 Iranian intraplate earthquakes from recent regional databases, since continental events represent a large portion of the total seismicity of Iran. The relations between different source parameters of the earthquakes were derived using input information usefully provided by the databases after 1900. The suggested equations for magnitude scales relate the body-wave, surface-wave, and local magnitude scales to the scalar moment of the earthquakes. Also, the dependence of source parameters such as surface and subsurface rupture length and maximum surface displacement on the moment magnitude was investigated for some well-documented earthquakes. To this end, ordinary linear regression procedures were employed for all relations. Our evaluations reveal fair agreement between the obtained relations and equations described in other worldwide and regional works in the literature. The M0-mb and M0-MS equations correlate well with the worldwide relations. Also, both the M0-MS and M0-ML relations agree well with regional studies in Taiwan. The equations derived from this study mainly confirm the results of global investigations of the rupture length of historical and instrumental events. However, some relations, like MW-MN and MN-ML, which differ markedly from available regional works (e.g., American and Canadian), were also found.
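The ordinary linear regression used for these magnitude relations is a one-variable least-squares fit. The sketch below recovers slope and intercept from synthetic noise-free data generated with the standard global scaling log10 M0 = 1.5 MS + 16.1 (M0 in dyne-cm); the paper's actual Iranian coefficients would come from its 436-event database, not from this toy fit.

```python
def ols(x, y):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Synthetic MS / log10(M0) pairs from the global scaling relation:
ms = [5.0, 5.5, 6.0, 6.5, 7.0, 7.5]
log_m0 = [1.5 * m + 16.1 for m in ms]
a, b = ols(ms, log_m0)  # recovers intercept 16.1, slope 1.5
```

With real data the residual scatter around the fit is what distinguishes, for example, the well-behaved M0-mb relation from the anomalous MW-MN and MN-ML relations noted in the abstract.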

  4. Feasibility study of short-term earthquake prediction using ionospheric anomalies immediately before large earthquakes

    Science.gov (United States)

    Heki, K.; He, L.

    2017-12-01

    We showed that positive and negative electron density anomalies emerge above faults immediately before they rupture: 40/20/10 minutes before Mw 9/8/7 earthquakes (Heki, 2011 GRL; Heki and Enomoto, 2013 JGR; He and Heki, 2017 JGR). These signals are stronger for earthquakes with larger Mw and under higher background vertical TEC (total electron content) (Heki and Enomoto, 2015 JGR). The epicenter and the positive and negative anomalies align along the local geomagnetic field (He and Heki, 2016 GRL), suggesting that electric fields within the ionosphere are responsible for making the anomalies (Kuo et al., 2014 JGR; Kelley et al., 2017 JGR). Here we consider the next Nankai Trough earthquake, which may occur within a few tens of years in southwest Japan, and discuss whether we could recognize its preseismic signatures in TEC by real-time observations with GNSS. During high geomagnetic activity, large-scale traveling ionospheric disturbances (LSTID) often propagate from the auroral ovals toward mid-latitude regions and leave signatures similar to preseismic anomalies. This is a main obstacle to using preseismic TEC changes for practical short-term earthquake prediction. In this presentation, we show that the same anomalies appeared 40 minutes before the mainshock above northern Australia, the geomagnetically conjugate point of the 2011 Tohoku-oki earthquake epicenter. This not only demonstrates that electric fields play a role in making the preseismic TEC anomalies, but also offers a possibility to discriminate preseismic anomalies from those caused by LSTID. By monitoring TEC in the conjugate areas in the two hemispheres, we can recognize anomalies with simultaneous onset as those caused by within-ionosphere electric fields (e.g. preseismic anomalies, night-time MSTID) and anomalies without simultaneous onset as gravity-wave-origin disturbances (e.g. LSTID, daytime MSTID).

  5. Why and Where do Large Shallow Slab Earthquakes Occur?

    Science.gov (United States)

    Seno, T.; Yoshida, M.

    2001-12-01

    Within the shallow portion (20-60 km depth) of subducting slabs, it has been believed that large earthquakes seldom occur, because the differential stress is generally expected to be low between bending at the trench-outer rise and unbending at intermediate depth. However, there are several regions in which large (M ≥ 7.0) earthquakes, including three events early in this year, have occurred in this portion. Searching for such events in published individual studies and the Harvard University centroid moment tensor catalogue, we find nineteen events in eastern Hokkaido, Kyushu-SW Japan, Mariana, Manila, Sumatra, Vanuatu, Chile, Peru, El Salvador, Mexico, and Cascadia. Slab stresses revealed by the mechanism solutions of those large events and of smaller events are tensional in the slab-dip direction. However, the ages of the subducting oceanic plates are generally young, which rules out slab pull as a cause. Except for Manila and Sumatra, the stresses in the overriding plates are characterized by a change in σHmax direction from arc-parallel in the back-arc to arc-perpendicular in the fore-arc, which implies that a horizontal stress gradient exists in the across-arc direction. Peru and Chile, where the back-arc is compressional, can be categorized into this type, because a horizontal stress gradient exists over the continent, from tension in the east to compression in the west. In these regions, it is expected that mantle drag forces operate beneath the upper plates, driving the upper plates trenchward over the subducting oceanic plates. Assuming that the mantle drag forces beneath the upper plates originate from mantle convection currents or upwelling plumes, we infer that the upper plates driven by the convection suck the oceanic plates, putting the shallow portion of the slabs in extra tension and thus producing the large shallow slab earthquakes in this tectonic regime.

  6. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    Science.gov (United States)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were then carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
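Turning a W-phase magnitude into a fault model means converting Mw to moment and applying scaling relations. A sketch under stated assumptions: the Mw-M0 conversion is the standard definition, but the area scaling log10 A[km²] ≈ Mw − 4.0 and the single 40 GPa rigidity are generic illustrative choices, not the paper's depth-dependent rigidity model.

```python
import math

def moment_from_mw(mw):
    # Standard moment-magnitude definition; M0 in N*m.
    return 10 ** (1.5 * mw + 9.1)

def fault_model(mw, mu=4.0e10):
    """Illustrative fault model: rupture area from a generic interplate
    scaling log10 A[km^2] ~ Mw - 4.0, average slip D = M0 / (mu * A).
    Both the area scaling and mu are assumptions for this sketch."""
    m0 = moment_from_mw(mw)
    area_m2 = 10 ** (mw - 4.0) * 1.0e6   # km^2 -> m^2
    slip_m = m0 / (mu * area_m2)
    return m0, area_m2, slip_m

# Example for a Nicaragua-sized Mw 7.7 event:
m0, area, slip = fault_model(7.7)   # M0 ~ 4.5e20 N*m, slip ~ 2.2 m
```

In the paper, a depth-dependent rigidity would lower μ for a shallow tsunami earthquake, increasing the inferred slip for the same moment, which is exactly why tsunami earthquakes like the 1992 event need this correction.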

  7. Estimating Source Duration for Moderate and Large Earthquakes in Taiwan

    Science.gov (United States)

    Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei

    2017-04-01

    To construct a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard in Taiwan, where earthquakes are quite active. In this study, we used a proposed inversion process using teleseismic P-waves to derive the M0-t relationship in the Taiwan region for the first time. Fifteen earthquakes with Mw 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process could simultaneously determine source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, from which M0 and fault plane solutions were estimated. Results showed that the estimated t, ranging from 2.7 to 24.9 sec, varied with the one-third power of M0. That is, M0 is proportional to t**3, and the relationship between the two is M0=0.76*10**23(t)**3, where M0 is in dyne-cm and t in seconds. The M0-t relationship derived from this study is very close to those determined from global moderate to large earthquakes. To further examine the validity of the derived relationship, we used the constructed M0-t relationship to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake, with M0=2-5*10**27 dyne-cm (corresponding to Mw = 7.5-7.7), to be approximately 29-40 sec, in agreement with many previous studies of its source duration (28-42 sec).
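The quoted relation can be inverted for duration, which reproduces the Chi-Chi inference in the abstract directly:

```python
def duration_from_moment(m0_dyne_cm):
    # M0 = 0.76e23 * t^3  =>  t = (M0 / 0.76e23)^(1/3), t in seconds
    return (m0_dyne_cm / 0.76e23) ** (1.0 / 3.0)

# Chi-Chi earthquake moment range from the abstract, 2-5 x 10^27 dyne-cm:
t_low = duration_from_moment(2e27)   # ~30 s
t_high = duration_from_moment(5e27)  # ~40 s
```

The cube-root dependence means a factor of 2.5 in moment changes the duration by only about 36%, which is why the inferred 29-40 s window is narrow despite the broad moment estimate.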

  8. Characterising large scenario earthquakes and their influence on NDSHA maps

    Science.gov (United States)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps account in a quite straightforward manner for the largest observed or credible earthquake sources identified in the region. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters, and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones, and seismogenic nodes. 
The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can

  9. Large earthquake rupture process variations on the Middle America megathrust

    Science.gov (United States)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ˜19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa

  10. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    OpenAIRE

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-01-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, first, fault parameters were estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a dept...

  11. Earthquakes

    Science.gov (United States)

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  12. Prediction of site specific ground motion for large earthquake

    International Nuclear Information System (INIS)

    Kamae, Katsuhiro; Irikura, Kojiro; Fukuchi, Yasunaga.

    1990-01-01

    In this paper, we apply the semi-empirical synthesis method of Irikura (1983, 1986) to the estimation of site-specific ground motion, using accelerograms observed at Kumatori in Osaka prefecture. The target earthquakes used here are a comparatively distant earthquake (Δ=95 km, M=5.6) caused by the Yamasaki fault and a near earthquake (Δ=27 km, M=5.6). The results obtained are as follows. 1) The accelerograms from the distant earthquake (M=5.6) are synthesized using the aftershock records (M=4.3) of the 1983 Yamasaki fault earthquake, whose source parameters have been obtained by other authors from the hypocentral distribution of the aftershocks. The resulting synthetic motions show good agreement with the observed ones. 2) The synthesis for a near earthquake (M=5.6; we call this the target earthquake) is made using a small earthquake which occurred in the neighborhood of the target earthquake. Here, we apply two methods for assigning the parameters for synthesis. One method is to use the parameters of the Yamasaki fault earthquake, which has the same magnitude as the target earthquake; the other is to use parameters obtained from several existing empirical formulas. The synthetic motion with the former parameters shows good agreement with the observed one, but that with the latter does not. 3) We estimate the source parameters from the source spectra of several earthquakes observed at this site. Consequently, we find that small earthquakes (M<4) should be used carefully as Green's functions because their stress drops are not constant. 4) We propose designating not only the magnitudes but also the seismic moments of the target earthquake and the small earthquake. (J.P.N.)
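The semi-empirical (empirical Green's function) synthesis scales a small event's record up to the target event. A highly simplified sketch of the scaling idea only: the copy count N follows from the cube root of the moment ratio, and N³ delayed copies of the small-event record are superposed. Real implementations (including Irikura's) distribute the delays over an N×N subfault grid with rupture-propagation and rise-time corrections; the uniform random delays here are a stand-in for that geometry.

```python
import math
import random

def scaling_factor(m0_target, m0_small):
    # Cube-root moment scaling: N = (M0_target / M0_small)^(1/3)
    return round((m0_target / m0_small) ** (1.0 / 3.0))

def synthesize(small_rec, dt, m0_target, m0_small, target_duration):
    """Crude empirical Green's function sum: superpose N^3 delayed copies
    of the small-event record, delays spread over the target duration.
    A sketch of the scaling idea, not the full Irikura procedure."""
    n = scaling_factor(m0_target, m0_small)
    out = [0.0] * (len(small_rec) + int(target_duration / dt))
    rng = random.Random(0)
    for _ in range(n ** 3):
        shift = int(rng.uniform(0.0, target_duration) / dt)
        for i, s in enumerate(small_rec):
            out[shift + i] += s
    return out

# Toy record and a 1000x moment ratio (N = 10, so 1000 copies):
syn = synthesize([1.0, 2.0, 1.0], 0.01, 1e24, 1e21, 5.0)
```

The superposition conserves total "moment": the summed amplitude of the output equals N³ times that of the input record, which is the essential bookkeeping behind point 4) of the abstract (specifying seismic moments, not just magnitudes, for both events).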

  13. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

    Earthquakes are one of the most destructive natural hazards on our planet. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw 9.0) and the 26 December 2004 Sumatra (Mw 9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and more economical structures in earthquake-prone regions.

  14. Where and why do large shallow intraslab earthquakes occur?

    Science.gov (United States)

    Seno, Tetsuzo; Yoshida, Masaki

    2004-03-01

    We try to find how often, and in what regions, large earthquakes (M≥7.0) occur within the shallow portion (20-60 km depth) of a subducting slab. Searching for events in published individual studies and the Harvard University centroid moment tensor catalogue, we find twenty such events in E. Hokkaido, Kyushu-SW Japan, S. Mariana, Manila, Sumatra, Vanuatu, N. Chile, C. Peru, El Salvador, Mexico, N. Cascadia, and Alaska. Slab stresses revealed by the mechanism solutions of these large intraslab events and nearby smaller events are almost always down-dip tensional. Except for E. Hokkaido, Manila, and Sumatra, the upper plate shows a horizontal stress gradient in the arc-perpendicular direction. We infer that shear tractions operate at the base of the upper plate in this direction, producing the observed gradient and compression in the outer fore-arc and balancing the down-dip tensional stress of the slab. This tectonic situation in the subduction zone might be realized as part of the convection system under certain conditions, as shown by previous numerical simulations.

  15. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu; Duan, Benchun; Taylor, Valerie

    2011-01-01

    , such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over past several decades. In particular

  16. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    Science.gov (United States)

    Parsons, Tom

    2002-09-01

    Triggered earthquakes can be large, damaging, and lethal, as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts ∼7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at rates higher than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake, where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.
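The ∼7-11 year decay duration can be read as the time at which an Omori-law rate falls back to the background rate. A sketch with hypothetical parameters (K, c, and the background rate below are invented to land in that window, not fitted values from the study):

```python
def omori_rate(t, k, c, p=1.0):
    # Modified Omori law: triggered-event rate at time t (years) after the mainshock.
    return k / (t + c) ** p

def decay_duration(k, c, background, p=1.0):
    # Time at which the triggered rate has dropped to the background rate:
    # k / (t + c)^p = background  =>  t = (k / background)^(1/p) - c
    return (k / background) ** (1.0 / p) - c

t_end = decay_duration(k=10.0, c=0.05, background=1.1)  # ~9 years
```

Because the rate decays roughly as 1/t, most triggered events occur early, yet the tail stays above background for years, which is what makes the decay duration a useful hazard-assessment quantity.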

  17. Large LOCA accident analysis for AP1000 under earthquake

    International Nuclear Information System (INIS)

    Yu, Yu; Lv, Xuefeng; Niu, Fenglei

    2015-01-01

    Highlights: • Seismic failure event probability is induced by uncertainties in PGA and in Am. • Uncertainty in PGA is shared by all the components at the same place. • Correlation induced by sharing the PGA value can be analyzed explicitly by the MC method. • Multiple component failures and accident sequences will occur under high PGA values. - Abstract: Seismic probabilistic safety assessment (PSA) is developed to give insight into nuclear power plant risk under earthquakes and the main contributors to that risk. However, component failure probability, including the initiating event frequency, is a function of peak ground acceleration (PGA), and all the components, especially different kinds of components at the same place, share the common ground shaking, which is one of the important factors influencing the result. In this paper, we propose an analysis method based on Monte Carlo (MC) simulation in which the effect of all components sharing the same PGA level can be expressed explicitly. The large LOCA accident in AP1000 is analyzed as an example. Based on the seismic hazard curve used in this paper, the core damage frequency is almost equal to the initiating event frequency; moreover, the frequency of each accident sequence is close to, and even equal to, the initiating event frequency, while the main contributors are seismic events, since multiple component and system failures happen simultaneously when a high PGA value is sampled. The component failure probability is determined by uncertainties in PGA and in component seismic capacity, and the former is the crucial element influencing the result.
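The effect of a shared PGA can be made explicit in a Monte Carlo loop: sample one ground motion per trial, then evaluate every component's lognormal fragility against that same value. The sketch below contrasts this with the (incorrect) independent-shaking model; all hazard and fragility parameters are hypothetical illustrative values, not AP1000 data.

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_fail(pga, am, beta):
    # Lognormal fragility: P(failure | PGA), with median capacity Am (g).
    return norm_cdf(math.log(pga / am) / beta)

def simulate(n=20000, seed=42):
    """Joint failure frequency of two co-located components when they
    share one sampled PGA vs. sampling independent PGAs per component.
    All parameter values are hypothetical."""
    rng = random.Random(seed)
    am1, am2, beta = 0.9, 1.1, 0.4          # capacities (g), log-std
    draw = lambda: math.exp(rng.gauss(math.log(0.3), 0.6))  # toy hazard
    shared = indep = 0
    for _ in range(n):
        a = draw()                           # one motion shakes both
        if (rng.random() < p_fail(a, am1, beta)
                and rng.random() < p_fail(a, am2, beta)):
            shared += 1
        a1, a2 = draw(), draw()              # wrong: independent shaking
        if (rng.random() < p_fail(a1, am1, beta)
                and rng.random() < p_fail(a2, am2, beta)):
            indep += 1
    return shared / n, indep / n

f_shared, f_indep = simulate()
```

The shared-PGA frequency is markedly higher, which mirrors the paper's finding: a single high sampled PGA fails multiple components at once, pushing each accident-sequence frequency toward the initiating event frequency.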

  18. Large magnitude earthquakes on the Awatere Fault, Marlborough

    International Nuclear Information System (INIS)

    Mason, D.P.M.; Little, T.A.; Van Dissen, R.J.

    2006-01-01

    The Awatere Fault is a principal active strike-slip fault within the Marlborough fault system, and last ruptured in October 1848, in the Mw ∼7.5 Marlborough earthquake. The coseismic slip distribution and maximum traceable length of this rupture are calculated from the magnitude and distribution of small, metre-scale geomorphic displacements attributable to this earthquake. These data suggest this event ruptured ∼110 km of the fault, with a mean horizontal surface displacement of 5.3 ± 1.6 m. Based on these parameters, the moment magnitude of this earthquake would be Mw ∼7.4-7.7. Paleoseismic trenching investigations along the eastern section reveal evidence for at least eight, and possibly ten, surface-rupturing paleoearthquakes in the last 8600 years, including the 1848 rupture. The coseismic slip distribution and rupture length of the 1848 earthquake, in combination with the paleoearthquake age data, suggest the eastern section of the Awatere Fault ruptures in Mw ∼7.5 earthquakes, with over 5 m of surface displacement, every 860-1080 years. (author). 21 refs., 10 figs., 7 tabs
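
    The quoted magnitude range can be reproduced to first order from the standard moment relations M0 = μLWD and Mw = (2/3)(log10 M0 − 9.1); the rigidity and down-dip width below are assumed values, not parameters reported in the study:

```python
import math

mu = 3.0e10   # crustal rigidity, Pa (assumed)
L = 110e3     # rupture length, m (from the abstract)
W = 15e3      # down-dip width, m (assumed for a crustal strike-slip fault)
D = 5.3       # mean surface displacement, m (from the abstract)

M0 = mu * L * W * D                        # seismic moment, N*m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)  # moment magnitude, ~7.5
```

    With these assumptions Mw falls near the middle of the ∼7.4-7.7 range given above; the spread mainly reflects uncertainty in W and μ.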

  19. The characteristic of the building damage from historical large earthquakes in Kyoto

    Science.gov (United States)

    Nishiyama, Akihito

    2016-04-01

    Kyoto city, located in the northern part of the Kyoto basin in Japan, has a long history of >1,200 years since the city was initially constructed. The city has been a populated area with many buildings and the center of politics, economy and culture in Japan for nearly 1,000 years. Some of these buildings are now inscribed as World Cultural Heritage sites. Kyoto has experienced six damaging large earthquakes during the historical period: in 976, 1185, 1449, 1596, 1662, and 1830. Among these, the last three earthquakes, which caused severe damage in Kyoto, occurred during the period in which the urban area had expanded. These earthquakes are considered to be inland earthquakes which occurred around the Kyoto basin. The damage distribution in Kyoto from historical large earthquakes is strongly controlled by ground conditions and the earthquake resistance of buildings rather than by distance from the estimated source fault. Therefore, it is necessary to consider not only the strength of ground shaking but also the condition of buildings, such as the years elapsed since construction or last repair, in order to more accurately and reliably estimate seismic intensity distributions from historical earthquakes in Kyoto. The obtained seismic intensity map would be helpful for reducing and mitigating disasters from future large earthquakes.

  20. Ionospheric precursors to large earthquakes: A case study of the 2011 Japanese Tohoku Earthquake

    Science.gov (United States)

    Carter, B. A.; Kellerman, A. C.; Kane, T. A.; Dyson, P. L.; Norman, R.; Zhang, K.

    2013-09-01

    Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. As the choice of ionospheric baseline generally differs between previous works, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here: the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model, and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI and the TIE-GCM generally underestimating, approximately describing and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of other known sources of ionospheric variability in addition to solar and geomagnetic activity, such as thermospheric coupling.
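
    A trailing 30-day running-median baseline of the kind used here can be sketched as follows; the synthetic foF2 series and the injected day-40 enhancement are purely illustrative, not real ionosonde data:

```python
import numpy as np

def deviation_from_median(fof2, window=30):
    """Percent deviation of daily foF2 from its trailing `window`-day median."""
    dev = np.full(fof2.shape, np.nan)
    for i in range(window, len(fof2)):
        base = np.median(fof2[i - window:i])
        dev[i] = 100.0 * (fof2[i] - base) / base
    return dev

# Synthetic quiet-time foF2 (MHz) with one injected enhancement on day 40
rng = np.random.default_rng(1)
fof2 = 8.0 + 0.2 * rng.standard_normal(60)
fof2[40] += 2.0                  # hypothetical perturbation, not an observation

dev = deviation_from_median(fof2)
anomalous_day = int(np.nanargmax(dev))
```

    The abstract's point is that whether such a deviation counts as an "anomaly" depends entirely on the baseline; a model baseline (IRI or TIE-GCM) substituted for the running median can change the classification.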

  1. Quasi real-time estimation of the moment magnitude of large earthquake from static strain changes

    Science.gov (United States)

    Itaba, S.

    2016-12-01

    The 2011 Tohoku-Oki (off the Pacific coast of Tohoku) earthquake, of moment magnitude 9.0, was accompanied by large static strain changes (on the order of 10⁻⁷), as measured by borehole strainmeters operated by the Geological Survey of Japan in the Tokai, Kii Peninsula, and Shikoku regions. A fault model for the earthquake on the boundary between the Pacific and North American plates, based on these borehole strainmeter data, yielded a moment magnitude of 8.7. On the other hand, the prompt magnitude that the Japan Meteorological Agency (JMA) announced just after the earthquake, based on seismic waves, was 7.9. Such geodetic moment magnitudes, derived from static strain changes, can be estimated almost as rapidly as determinations using seismic waves. The validity of this method needs to be verified in further cases. For this earthquake's largest aftershock, which occurred 29 minutes after the mainshock, the prompt report issued by JMA assigned a magnitude of 7.3, whereas the moment magnitude derived from borehole strain data is 7.6, which is much closer to the actual moment magnitude of 7.7. In order to grasp the magnitude of a great earthquake earlier, several methods are now being proposed to reduce earthquake disasters, including tsunami. Our simple method of using static strain changes is a robust approach for rapid estimation of the magnitude of large earthquakes, and is useful for improving the accuracy of Earthquake Early Warning.

  2. The evolution of hillslope strength following large earthquakes

    Science.gov (United States)

    Brain, Matthew; Rosser, Nick; Tunstall, Neil

    2017-04-01

    Earthquake-induced landslides play an important role in the evolution of mountain landscapes. Earthquake ground shaking triggers near-instantaneous landsliding, but has also been shown to weaken hillslopes, preconditioning them for failure during subsequent seismicity and/or precipitation events. The temporal evolution of hillslope strength during and following primary seismicity, and if and how this ultimately results in failure, is poorly constrained due to the rarity of high-magnitude earthquakes and limited availability of suitable field datasets. We present results obtained from novel geotechnical laboratory tests to better constrain the mechanisms that control strength evolution in Earth materials of differing rheology. We consider how the strength of hillslope materials responds to ground-shaking events of different magnitude and if and how this persists to influence landslide activity during interseismic periods. We demonstrate the role of stress path and stress history, strain rate and foreshock and aftershock sequences in controlling the evolution of hillslope strength and stability. Critically, we show how hillslopes can be strengthened rather than weakened in some settings, challenging conventional assumptions. On the basis of our laboratory data, we consider the implications for earthquake-induced geomorphic perturbations in mountain landscapes over multiple timescales and in different seismogenic settings.

  3. Catastrophic valley fills record large Himalayan earthquakes, Pokhara, Nepal

    Science.gov (United States)

    Stolle, Amelie; Bernhardt, Anne; Schwanghart, Wolfgang; Hoelzmann, Philipp; Adhikari, Basanta R.; Fort, Monique; Korup, Oliver

    2017-12-01

    Uncertain timing and magnitudes of past mega-earthquakes continue to confound seismic risk appraisals in the Himalayas. Telltale traces of surface ruptures are rare, while fault trenches document several events at best, so that additional proxies of strong ground motion are needed to complement the paleoseismological record. We study Nepal's Pokhara basin, which has the largest and most extensively dated archive of earthquake-triggered valley fills in the Himalayas. These sediments form a 148 km² fan that issues from the steep Seti Khola gorge in the Annapurna Massif, invading and plugging 15 tributary valleys with tens of meters of debris, and impounding several lakes. Nearly a dozen new radiocarbon ages corroborate at least three episodes of catastrophic sedimentation on the fan between ∼700 and ∼1700 AD, coinciding with great earthquakes in ∼1100, 1255, and 1344 AD, and emplacing roughly >5 km³ of debris that forms the Pokhara Formation. We offer a first systematic sedimentological study of this formation, revealing four lithofacies characterized by thick sequences of mid-fan fluvial conglomerates, debris-flow beds, and fan-marginal slackwater deposits. New geochemical provenance analyses reveal that these upstream dipping deposits of Higher Himalayan origin contain lenses of locally derived river clasts that mark time gaps between at least three major sediment pulses that buried different parts of the fan. The spatial pattern of ¹⁴C dates across the fan and the provenance data are key to distinguishing these individual sediment pulses, as these are not evident from their sedimentology alone. Our study demonstrates how geomorphic and sedimentary evidence of catastrophic valley infill can help to independently verify and augment paleoseismological fault-trench records of great Himalayan earthquakes, while offering unparalleled insights into their long-term geomorphic impacts on major drainage basins.

  4. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    Science.gov (United States)

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
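
    The distinction between quasi-periodic and Poisson recurrence can be illustrated with the coefficient of variation (CV) of interevent times, which is near 1 for a Poisson process and below 1 for quasi-periodic behavior; the synthetic records below are illustrative, not the Wrightwood data:

```python
import numpy as np

def interval_cv(event_times):
    """Coefficient of variation of recurrence intervals:
    ~1 for a Poisson process, < 1 for quasi-periodic recurrence."""
    intervals = np.diff(np.sort(event_times))
    return intervals.std() / intervals.mean()

rng = np.random.default_rng(0)
# 29 events, as in the Wrightwood record, with a ~100 yr mean interval
quasi = np.cumsum(rng.normal(100.0, 25.0, size=29))    # quasi-periodic (illustrative)
poisson = np.cumsum(rng.exponential(100.0, size=29))   # random (Poisson) recurrence
```

    A CV well below 1, as the quasi-periodic record gives here, is the kind of statistical signature that supports renewal models over time-independent (Poisson) forecasts.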

  5. Some isotopic and geochemical anomalies observed in Mexico prior to large scale earthquakes and volcanic eruptions

    International Nuclear Information System (INIS)

    Cruz R, S. de la; Armienta, M.A.; Segovia A, N.

    1992-05-01

    A brief account of some experiences obtained in Mexico, related to the identification of geochemical precursors of volcanic eruptions and isotopic precursors of earthquakes and volcanic activity, is given. The cases of three recent events of volcanic activity and one large earthquake are discussed in the context of an active geological environment. The positive results in the identification of some geochemical precursors that helped to evaluate the eruptive potential during two volcanic crises (Tacana 1986 and Colima 1991), and the significant radon-in-soil anomalies observed during a catastrophic volcanic eruption (El Chichon, 1982) and prior to a major earthquake (Michoacan, 1985), are critically analysed. (Author)

  6. Some isotopic and geochemical anomalies observed in Mexico prior to large scale earthquakes and volcanic eruptions

    Energy Technology Data Exchange (ETDEWEB)

    Cruz R, S. de la; Armienta, M A; Segovia A, N

    1992-05-15

    A brief account of some experiences obtained in Mexico, related to the identification of geochemical precursors of volcanic eruptions and isotopic precursors of earthquakes and volcanic activity, is given. The cases of three recent events of volcanic activity and one large earthquake are discussed in the context of an active geological environment. The positive results in the identification of some geochemical precursors that helped to evaluate the eruptive potential during two volcanic crises (Tacana 1986 and Colima 1991), and the significant radon-in-soil anomalies observed during a catastrophic volcanic eruption (El Chichon, 1982) and prior to a major earthquake (Michoacan, 1985), are critically analysed. (Author)

  7. Geodetic characteristic of the postseismic deformation following the interplate large earthquake along the Japan Trench (Invited)

    Science.gov (United States)

    Ohta, Y.; Hino, R.; Ariyoshi, K.; Matsuzawa, T.; Mishina, M.; Sato, T.; Inazu, D.; Ito, Y.; Tachibana, K.; Demachi, T.; Miura, S.

    2013-12-01

    On March 9, 2011 at 2:45 (UTC), an M7.3 interplate earthquake (hereafter foreshock) occurred ~45 km northeast of the epicenter of the M9.0 2011 Tohoku earthquake. This foreshock preceded the 2011 Tohoku earthquake by 51 hours. Ohta et al. (2012, GRL) estimated the co- and postseismic afterslip distribution based on a dense GPS network and ocean bottom pressure gauge sites. They found the afterslip distribution was mainly concentrated in the up-dip extension of the coseismic slip. The coseismic slip and afterslip distribution of the foreshock were also located in the slip deficit region (between 20 and 40 m slip) of the coseismic slip of the M9.0 mainshock. The slip amount for the afterslip is roughly consistent with that determined by repeating earthquake analysis carried out in a previous study (Kato et al., 2012, Science). The estimated moment release for the afterslip reached magnitude 6.8, even within the short time period of 51 hours. They also pointed out that a volumetric strainmeter time series suggests that this event advanced with a rapid decay time constant (4.8 h) compared with other typical large earthquakes. The decay time constant of the afterslip may reflect the frictional properties of the plate interface, especially the effective normal stress controlled by fluid. To verify the short decay time constant of the foreshock, we investigated the postseismic deformation characteristics following the 1989 and 1992 Sanriku-Oki earthquakes (M7.1 and M6.9), the 2003 and 2005 Miyagi-Oki earthquakes (M6.8 and M7.2), and the 2008 Fukushima-Oki earthquake (M6.9). We used a four-component extensometer at Miyako (39.59N, 141.98E) on the Sanriku coast for the 1989 and 1992 events. For the 2003, 2005 and 2008 events, we used volumetric strainmeters at Kinka-zan (38.27N, 141.58E) and Enoshima (38.27N, 141.60E). To extract the characteristics of the postseismic deformation, we fitted a logarithmic function.
The estimated decay time constants for each earthquake had almost similar range (1
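
    The logarithmic fitting step can be sketched as a grid search for the decay time constant; the functional form u(t) = a + b·ln(1 + t/τ) is a common afterslip parameterization, and the noise-free synthetic data below simply reuse the 4.8 h constant quoted above:

```python
import numpy as np

def log_afterslip(t, a, b, tau):
    """Common afterslip parameterization: u(t) = a + b * ln(1 + t / tau)."""
    return a + b * np.log(1.0 + t / tau)

# Noise-free synthetic strain record with a known 4.8 h decay constant
t = np.linspace(0.1, 48.0, 200)   # hours after the foreshock
obs = log_afterslip(t, a=0.0, b=1.0, tau=4.8)

# Grid search over tau; a and b then follow from linear least squares
best_misfit, tau_est = np.inf, None
for tau in np.arange(1.0, 10.01, 0.1):
    X = np.column_stack([np.ones_like(t), np.log(1.0 + t / tau)])
    coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
    misfit = np.sum((X @ coef - obs) ** 2)
    if misfit < best_misfit:
        best_misfit, tau_est = misfit, tau
```

    Because the model is linear in a and b once τ is fixed, only τ needs a nonlinear search; with real, noisy strain data the misfit minimum broadens and the recovered τ carries a correspondingly larger uncertainty.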

  8. The typical seismic behavior in the vicinity of a large earthquake

    Science.gov (United States)

    Rodkin, M. V.; Tikhonov, I. N.

    2016-10-01

    The Global Centroid Moment Tensor (GCMT) catalog was used to construct the spatio-temporal generalized vicinity of a large earthquake (GVLE) and to investigate the behavior of seismicity in the GVLE. The vicinity is made of earthquakes falling into the zone of influence of a large number (100, 300, or 1000) of the largest earthquakes. The GVLE construction aims at enlarging the available statistics, diminishing a strong random component, and revealing typical features of pre- and post-shock seismic activity in more detail. As a result of the GVLE construction, the character of fore- and aftershock cascades was examined in more detail than was possible without the use of the GVLE approach. In addition, several anomalies in the behavior of a variety of earthquake parameters were identified. The amplitudes of all these anomalies increase with the approaching time of the generalized large earthquake (GLE) as the logarithm of the time interval from the GLE occurrence. Most of the discussed anomalies agree with common features well expected in the evolution of instability. In addition to these common-type precursors, one earthquake-specific precursor was found: the decrease in mean earthquake depth, presumably occurring in a smaller GVLE, probably provides evidence of a deep fluid being involved in the process. The typical features in the evolution of shear instability as revealed in the GVLE agree with results obtained in laboratory studies of acoustic emission (AE). The majority of the anomalies in earthquake parameters appear to have a secondary character, largely connected with an increase in mean magnitude and a decreasing fraction of moderate-size events (Mw 5.0-6.0) in the immediate GLE vicinity. This deficit of moderate-size events can hardly be caused entirely by their incomplete reporting and presumably reflects some features in the evolution of seismic instability.

  9. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    Science.gov (United States)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models. The W-phase FFM Inversion, the Wavelet Domain FFM and the Body Wave FFM have been implemented in real time at CSN; all these algorithms run automatically and are triggered by the W-phase Point Source Inversion. Dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested the last four major earthquakes that occurred in Chile using this scheme: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each one of those we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community as well as with run-up observations in the field.

  10. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    Science.gov (United States)

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the Southern California catalog for the last 19 years shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04° × 0.04°), with significant probability gains with respect to standard models.

  11. Irregular recurrence of large earthquakes along the san andreas fault: evidence from trees.

    Science.gov (United States)

    Jacoby, G C; Sheppard, P R; Sieh, K E

    1988-07-08

    Old trees growing along the San Andreas fault near Wrightwood, California, record in their annual ring-width patterns the effects of a major earthquake in the fall or winter of 1812 to 1813. Paleoseismic data and historical information indicate that this event was the "San Juan Capistrano" earthquake of 8 December 1812, with a magnitude of 7.5. The discovery that at least 12 kilometers of the Mojave segment of the San Andreas fault ruptured in 1812, only 44 years before the great January 1857 rupture, demonstrates that intervals between large earthquakes on this part of the fault are highly variable. This variability increases the uncertainty of forecasting destructive earthquakes on the basis of past behavior and accentuates the need for a more fundamental knowledge of San Andreas fault dynamics.

  12. Search for Anisotropy Changes Associated with Two Large Earthquakes in Japan and New Zealand

    Science.gov (United States)

    Savage, M. K.; Graham, K.; Aoki, Y.; Arnold, R.

    2017-12-01

    Seismic anisotropy is often considered to be an indicator of stress in the crust, because the closure of cracks under differential stress leads to waves polarized parallel to the cracks travelling faster than those polarized in the orthogonal direction. Changes in shear wave splitting have been suggested to result from stress changes at volcanoes and earthquakes. However, the effects of mineral or structural alignment, and the difficulty of distinguishing changes in anisotropy along an earthquake-station path from changes in the path itself, have made such findings controversial. Two large earthquakes in 2016 provide unique datasets to test the use of shear wave splitting for measuring variations in stress, because clusters of closely-spaced earthquakes occurred both before and after a mainshock. We use the automatic, objective splitting analysis code MFAST to speed processing and minimize unwitting observer bias when determining time variations. The sequence of earthquakes related to the M=7.2 Japanese Kumamoto earthquake of 14 April 2016 includes foreshocks, mainshocks and aftershocks. The sequence was recorded by the NIED permanent network, which had already contributed background seismic anisotropy measurements in a previous study of anisotropy and stress in Kyushu. Preliminary measurements of shear wave splitting from earthquakes that occurred in 2016 show results at some stations that clearly differ from those of the earlier study. They also change between earthquakes recorded before and after the mainshock. Further work is under way to determine whether the changes are more likely due to changes in stress during the observation time, or due to spatial changes in anisotropy combined with changes in earthquake locations. Likewise, background seismicity and also foreshocks and aftershocks in the 2013 Cook Strait earthquake sequence including two M=6.5 earthquakes in 2013 in New Zealand were in the same general region as aftershocks of the M=7.8 Kaikoura

  13. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    Science.gov (United States)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous ruptures along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that it can capture the major features of the rupture process of large earthquakes, and provide information for more detailed rupture history analysis.
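
    The benefit of a low-dimensional source parameterization for Bayesian sampling can be illustrated with a one-parameter Metropolis sampler; the toy forward model (rupture duration ≈ L/v for a unilateral Haskell source) and all numerical values are assumptions for illustration, not the MHS implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: observed rupture duration = L / v for a unilateral Haskell source
L_rup = 60.0     # rupture length, km (assumed)
v_true = 2.5     # rupture velocity, km/s (assumed)
obs = L_rup / v_true + 0.2 * rng.standard_normal(20)   # noisy duration picks, s

def log_post(v):
    """Log posterior: uniform prior on v in (1, 5) km/s, Gaussian misfit."""
    if not 1.0 < v < 5.0:
        return -np.inf
    resid = obs - L_rup / v
    return -0.5 * np.sum((resid / 0.2) ** 2)

# Metropolis sampling of the rupture velocity
v, lp = 2.0, log_post(2.0)
chain = []
for _ in range(5000):
    v_new = v + 0.05 * rng.standard_normal()
    lp_new = log_post(v_new)
    if np.log(rng.random()) < lp_new - lp:   # accept/reject step
        v, lp = v_new, lp_new
    chain.append(v)
v_est = float(np.mean(chain[1000:]))         # posterior mean after burn-in
```

    With few parameters the chain mixes quickly and posterior spread is cheap to estimate; this is the practical advantage the abstract claims for the MHS parameterization over densely gridded slip inversions.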

  14. Earthquakes in southern Dalmatia and coastal Montenegro before the large 6 April 1667 event

    Science.gov (United States)

    Albini, Paola; Rovida, Andrea

    2018-05-01

    The fourteenth to seventeenth century seismicity of southern Dalmatia (Croatia) and coastal Montenegro deserved to be fully reappraised because of the imperfect knowledge offered by modern seismological studies and the awareness of the smokescreen effect of the large 6 April 1667 M 6.4 earthquake, which impacted exactly the area of study. The investigation consisted of (i) a reconsideration of earthquake records made available by previous studies and (ii) a systematic analysis of historical sources contemporary to the earthquakes, especially those not yet taken into account in seismological studies. The 168 contemporary and independent records collected cast a different light on more than 300 years of seismicity of this area. The records are unevenly distributed among the 39 studied earthquakes, 15 of which still rely upon a single testimony. Each record has been reevaluated with respect to its content and attributed a level of reliability, which for the records reporting 14 other events was so low as to prevent us from confirming their real occurrence. Completely unreliable records have been identified and discussed, leading to the conclusion that they are at the root of five fake earthquakes. Altogether, 34 intensity values in EMS-98 were assessed, related to 15 moderate and five damaging earthquakes. Existing and newly obtained data contributed to putting the pre-1667 seismicity of southern Dalmatia and coastal Montenegro into a substantially different perspective.

  15. Large scale earthquake simulator of 3-D (simultaneous X-Y-Z direction)

    International Nuclear Information System (INIS)

    Shiraki, Kazuhiro; Inoue, Masao

    1983-01-01

    Japan is a country where earthquakes are frequent; accordingly, it is necessary to examine sufficiently the safety against earthquakes of important machinery and equipment such as nuclear and thermal power plants and chemical plants. For this purpose, aseismic safety is evaluated by mounting an actual item or a model on a vibration table and vibrating it at magnitudes several times as large as actual earthquakes. The vibration tables used so far could vibrate only in one direction, or in two directions simultaneously, but now a three-dimensional vibration table has been completed, which can vibrate in three directions simultaneously, each with an arbitrary waveform. With the advent of this vibration table, aseismic tests can be carried out using earthquake waves close to actual ones. It is expected that this vibration table will play a large role in the improvement of the aseismic reliability of nuclear power machinery and equipment. When a large test body is vibrated on the vibration table, the center of gravity of the test body and the point of action of the vibrating force differ; therefore, rotating motion around three axes is added to the motion in the three axial directions, and these motions must be controlled so as to realize three-dimensional earthquake motion. The main particulars and the construction of the vibration table, the mechanism of three-direction vibration, the control of the table and the results of tests of the table are reported. (Kako, I.)

  16. Strong motion modeling at the Paducah Diffusion Facility for a large New Madrid earthquake

    International Nuclear Information System (INIS)

    Herrmann, R.B.

    1991-01-01

    The Paducah Diffusion Facility is within 80 kilometers of the location of the very large New Madrid earthquakes which occurred during the winter of 1811-1812. Because of their size, with a seismic moment of 2.0 × 10²⁷ dyne·cm or moment magnitude Mw = 7.5, the possible recurrence of these earthquakes is a major element in the assessment of seismic hazard at the facility. Probabilistic hazard analysis can provide uniform hazard response spectra estimates for structure evaluation, but deterministic modeling of such a large earthquake can provide strong constraints on the expected duration of motion. The large earthquake is modeled by specifying the earthquake fault and its orientation with respect to the site, and by specifying the rupture process. Synthetic time histories, based on forward modeling of the wavefield, from each subelement are combined to yield a three-component time history at the site. Various simulations are performed to sufficiently exercise possible spatial and temporal distributions of energy release on the fault. Preliminary results demonstrate the sensitivity of the method to various assumptions, and also indicate strongly that the total duration of ground motion at the site is controlled primarily by the length of the rupture process on the fault.

  17. Large LOCA-earthquake combination probability assessment - Load combination program. Project 1 summary report

    Energy Technology Data Exchange (ETDEWEB)

    Lu, S; Streit, R D; Chou, C K

    1980-01-01

    This report summarizes work performed for the U.S. Nuclear Regulatory Commission (NRC) by the Load Combination Program at the Lawrence Livermore National Laboratory to establish a technical basis for the NRC to use in reassessing its requirement that earthquake and large loss-of-coolant accident (LOCA) loads be combined in the design of nuclear power plants. A systematic probabilistic approach is used to treat the random nature of earthquake and transient loading to estimate the probability of large LOCAs that are directly and indirectly induced by earthquakes. A large LOCA is defined in this report as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). Unit 1 of the Zion Nuclear Power Plant, a four-loop PWR-1, is used for this study. To estimate the probability of a large LOCA directly induced by earthquakes, only fatigue crack growth resulting from the combined effects of thermal, pressure, seismic, and other cyclic loads is considered. Fatigue crack growth is simulated with a deterministic fracture mechanics model that incorporates stochastic inputs of initial crack size distribution, material properties, stress histories, and leak detection probability. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10⁻¹²). The probability of a leak was found to be several orders of magnitude greater than that of a complete pipe rupture. A limited investigation involving engineering judgment of a double-ended guillotine break indirectly induced by an earthquake is also reported. (author)
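
    The deterministic fatigue-crack-growth driver can be illustrated with the Paris law da/dN = C(ΔK)^m, integrated in closed form; the material and loading values below are illustrative assumptions, not those used in the Zion study:

```python
import numpy as np

def cycles_to_failure(a0, a_crit, C=1e-11, m=3.0, Y=1.0, dS=100.0):
    """Cycles for a crack to grow from depth a0 to a_crit (meters) under the
    Paris law da/dN = C * (Y * dS * sqrt(pi * a))**m, integrated in closed
    form (valid for m != 2)."""
    k = C * (Y * dS * np.sqrt(np.pi)) ** m
    e = 1.0 - m / 2.0
    return (a_crit ** e - a0 ** e) / (k * e)

# A deeper initial flaw reaches the critical size in far fewer load cycles
n_small_flaw = cycles_to_failure(0.001, 0.02)
n_large_flaw = cycles_to_failure(0.005, 0.02)
```

    In the probabilistic framework described above, this deterministic growth law would be driven by stochastic inputs (initial crack size, material constants, stress histories), which is why the initial flaw distribution dominates the break probability.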

  18. Large LOCA-earthquake combination probability assessment - Load combination program. Project 1 summary report

    International Nuclear Information System (INIS)

    Lu, S.; Streit, R.D.; Chou, C.K.

    1980-01-01

    This report summarizes work performed for the U.S. Nuclear Regulatory Commission (NRC) by the Load Combination Program at the Lawrence Livermore National Laboratory to establish a technical basis for the NRC to use in reassessing its requirement that earthquake and large loss-of-coolant accident (LOCA) loads be combined in the design of nuclear power plants. A systematic probabilistic approach is used to treat the random nature of earthquake and transient loading to estimate the probability of large LOCAs that are directly and indirectly induced by earthquakes. A large LOCA is defined in this report as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). Unit 1 of the Zion Nuclear Power Plant, a four-loop PWR-1, is used for this study. To estimate the probability of a large LOCA directly induced by earthquakes, only fatigue crack growth resulting from the combined effects of thermal, pressure, seismic, and other cyclic loads is considered. Fatigue crack growth is simulated with a deterministic fracture mechanics model that incorporates stochastic inputs of initial crack size distribution, material properties, stress histories, and leak detection probability. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10^-12). The probability of a leak was found to be several orders of magnitude greater than that of a complete pipe rupture. A limited investigation involving engineering judgment of a double-ended guillotine break indirectly induced by an earthquake is also reported. (author)

  19. Field Observations of Precursors to Large Earthquakes: Interpreting and Verifying Their Causes

    Science.gov (United States)

    Suyehiro, K.; Sacks, S. I.; Rydelek, P. A.; Smith, D. E.; Takanami, T.

    2017-12-01

    Many reports of precursory anomalies before large earthquakes exist. However, it has proven elusive to identify these signals before the earthquakes actually occur; they often become evident only in retrospect. A probabilistic cellular automaton model (Sacks and Rydelek, 1995) explains many statistical and dynamic properties of earthquakes, including the observed b-value decrease before a large earthquake and the effect of small stress perturbations on the pattern of earthquake occurrence. It also reproduces the dynamic character of individual earthquake ruptures. The model is useful for gaining insight into the causal relationships behind these complexities. For example, some reported cases of background seismicity quiescence before a main shock, seen only for events larger than M=3-4 at a time scale of years, can be reproduced by this model if only a small fraction (~2%) of the component cells are strengthened by a small amount. Such an enhancement may physically occur if a tiny, scattered portion of the seismogenic crust undergoes dilatancy hardening. Whether this process occurs will depend on fluid migration and microcrack development under tectonic loading. Eventual large earthquake faulting will be promoted by the intrusion of excess water from surrounding rocks into a zone capable of cascading slip over a large area. We propose that this process manifests itself at the surface as the hydrologic, geochemical, or macroscopic anomalies for which so many reports exist. We infer from seismicity that the eastern Nankai Trough (Tokai) area of central Japan is already in the stage of M-dependent seismic quiescence. Therefore, we advocate that new observations sensitive to water migration in Tokai be implemented; in particular, vertical-component strain, gravity, and/or electrical conductivity should be observed for verification.

  20. Seismic hazard in Hawaii: High rate of large earthquakes and probabilistic ground-motion maps

    Science.gov (United States)

    Klein, F.W.; Frankel, A.D.; Mueller, C.S.; Wesson, R.L.; Okubo, P.G.

    2001-01-01

    The seismic hazard and earthquake occurrence rates in Hawaii are locally as high as those near the most hazardous faults elsewhere in the United States. We have generated maps of peak ground acceleration (PGA) and spectral acceleration (SA) (at 0.2, 0.3 and 1.0 sec, 5% critical damping) at 2% and 10% exceedance probabilities in 50 years. The highest hazard is on the south side of Hawaii Island, as indicated by the MI 7.0, MS 7.2, and MI 7.9 earthquakes that have occurred there since 1868. Probabilistic values of horizontal PGA (2% in 50 years) on Hawaii's south coast exceed 1.75g. Because some large earthquake aftershock zones and the geometry of flank blocks slipping on subhorizontal decollement faults are known, we use a combination of spatially uniform sources in active flank blocks and smoothed seismicity in other areas to model seismicity. Rates of earthquakes are derived from magnitude distributions of the modern (1959-1997) catalog of the Hawaiian Volcano Observatory's seismic network, supplemented by the historic (1868-1959) catalog. Modern magnitudes are ML measured on a Wood-Anderson seismograph, or MS. Historic magnitudes may add ML measured on a Milne-Shaw or Bosch-Omori seismograph, or MI derived from calibrated areas of MM intensities. Active flank areas, which by far account for the highest hazard, are characterized by distributions with b slopes of about 1.0 below M 5.0 and about 0.6 above M 5.0. The kinked distribution means that large earthquake rates would be grossly underestimated by extrapolating small earthquake rates, and that longer catalogs are essential for estimating or verifying the rates of large earthquakes. Flank earthquakes thus follow a semicharacteristic model, which is a combination of background seismicity and an excess number of large earthquakes. Flank earthquakes are geometrically confined to rupture zones on the volcano flanks by barriers such as rift zones and the seaward edge of the volcano, which may be expressed by a magnitude
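The kinked magnitude-frequency distribution described above can be illustrated with a toy cumulative Gutenberg-Richter model. The a-value below is hypothetical; only the b slopes (1.0 below M 5.0, 0.6 above) follow the abstract:

```python
def log_rate(m, a=4.0, b_low=1.0, b_high=0.6, m_kink=5.0):
    """Cumulative log10 annual rate N(>=m) for a kinked Gutenberg-Richter model.
    a is a hypothetical productivity; the b slopes follow the abstract."""
    if m <= m_kink:
        return a - b_low * m
    # enforce continuity of the cumulative rate at the kink
    return (a - b_low * m_kink) - b_high * (m - m_kink)

# Extrapolating the small-event slope (b = 1.0) to M 7 underestimates the
# rate implied by the kinked model, as the abstract warns:
kinked = 10 ** log_rate(7.0)
extrapolated = 10 ** (4.0 - 1.0 * 7.0)
print(kinked / extrapolated)  # ratio well above 1
```

With these numbers the kinked model predicts M 7 events at roughly six times the extrapolated rate, which is the sense in which a pure b = 1.0 extrapolation "grossly underestimates" large-earthquake rates.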

  1. Observation of earthquake in the neighborhood of a large underground cavity. The Izu-Hanto-Toho-Oki earthquake, June 29, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Komada, H; Hayashi, M [Central Research Inst. of Electric Power Industry, Abiko, Chiba (Japan). Civil Engineering Lab.

    1980-12-01

    Studies on the earthquake-resistant design of underground sites for large, important structures such as nuclear power plants, high-level radioactive waste repositories, LNG tanks, petroleum tanks, large power transmission installations, and compressed-air energy storage installations have been carried out at our research institute. As one demonstration of earthquake resistance, earthquake observations have been made at the Shiroyama underground hydroelectric power station since July 1976, and a first report has already been published. Since then, additional accelerometers and dynamic strain meters have been installed. Clear acceleration and dynamic strain records of the Izu-Hanto-Toho-Oki earthquake of June 29, 1980 were obtained at the Shiroyama site, where the hypocentral distance is 77 km and the seismic intensity was about 4. In this report, the characteristics of ground motion in the neighborhood of the underground cavity, and the relationships among accelerations, velocities, deformations, and dynamic strains, are studied in detail using these earthquake data.

  2. Relay chatter and operator response after a large earthquake: An improved PRA methodology with case studies

    International Nuclear Information System (INIS)

    Budnitz, R.J.; Lambert, H.E.; Hill, E.E.

    1987-08-01

    The purpose of this project has been to develop and demonstrate improvements in the PRA methodology used for analyzing earthquake-induced accidents at nuclear power reactors. Specifically, the project addresses methodological weaknesses in the PRA systems analysis used for studying post-earthquake relay chatter and for quantifying human response under high stress. An improved PRA methodology for relay-chatter analysis is developed, and its use is demonstrated through analysis of the Zion-1 and LaSalle-2 reactors as case studies. This demonstration analysis is intended to show that the methodology can be applied in actual cases; the numerical values of core-damage frequency are not realistic. The analysis relies on SSMRP-based methodologies and data bases. For both Zion-1 and LaSalle-2, assuming that loss of offsite power (LOSP) occurs after a large earthquake and that there are no operator recovery actions, the analysis finds a very large number of combinations (Boolean minimal cut sets) involving chatter of three or four relays and/or pressure-switch contacts. The number of min-cut-set combinations is so large that there is a very high likelihood (of the order of unity) that at least one combination will occur after earthquake-caused LOSP. This conclusion depends in detail on the fragility curves and response assumptions used for chatter. Core-damage frequencies are calculated, but they are probably pessimistic because assuming zero credit for operator recovery is pessimistic. The project has also developed an improved PRA methodology for quantifying operator error under high-stress conditions such as those after a large earthquake. Single-operator and multiple-operator error rates are developed, and a case study involving an 8-step procedure (establishing feed-and-bleed in a PWR after an earthquake-initiated accident) is used to demonstrate the methodology.
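The Boolean minimal-cut-set idea used in the relay-chatter analysis can be sketched by brute force. The top-event logic and component names below are hypothetical illustrations, not taken from the Zion-1 or LaSalle-2 models:

```python
from itertools import combinations

# Hypothetical top-event logic: the undesired state occurs if relays A and B
# both chatter, or if relay C chatters together with pressure switches D and E.
def top_event(failed: set) -> bool:
    return {"A", "B"} <= failed or {"C", "D", "E"} <= failed

components = ["A", "B", "C", "D", "E"]

def minimal_cut_sets():
    """Enumerate minimal failure combinations: smallest sets that trigger the
    top event and contain no smaller triggering set."""
    cuts = []
    for r in range(1, len(components) + 1):
        for combo in combinations(components, r):
            s = set(combo)
            if top_event(s) and not any(set(c) <= s for c in cuts):
                cuts.append(tuple(sorted(s)))
    return cuts

print(minimal_cut_sets())  # → [('A', 'B'), ('C', 'D', 'E')]
```

Real PRA tools derive these sets symbolically from fault trees rather than by enumeration, but the notion of a minimal cut set is the same.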

  3. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic) and are therefore ideal for real-time monitoring of fault slip. Real-time GPS networks are the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks but measure only accelerations or velocities, putting them at a distinct disadvantage for ascertaining the full extent of slip during a large earthquake in real time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET, consisting of about 1200 stations, during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore of Hokkaido Island; the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size estimate through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating the use of total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  4. Last millennium sedimentation in the Gulf of Cariaco (NE Venezuela): Evidence for morphological changes of gulf entrance and possible relations with large earthquakes

    Science.gov (United States)

    Aguilar, Iliana; Beck, Christian; Audemard, Franck; Develle, Anne-Lise; Boussafir, Mohammed; Campos, Corina; Crouzet, Christian

    2016-01-01

    The Cariaco Basin and the Gulf of Cariaco in Venezuela are two major basins along the seismogenic El Pilar right-lateral fault, of which the Cariaco Basin is a pull-apart. Both basins are sites of anoxia and organic-rich deposits. To examine whether the sediments in the Gulf of Cariaco have recorded traces of historical or prehistorical earthquakes, we extracted and analyzed twelve 1 m-long gravity cores, sampling the last millennium of sedimentation. We focused on analyzing the sediment sources with different techniques (particle size analysis, XRF, loss-on-ignition tests, magnetic properties, Rock-Eval pyrolysis, 14C dating). The results confirm that major upwelling occurs at the western gulf entrance and drives deep-water flow from the Cariaco Basin into the Gulf of Cariaco; these flows carry an organic-rich suspended load. Furthermore, we found evidence of a distinctive, widespread fine-grained siliciclastic deposit (named SiCL3) within the gulf, whose age suggests that it likely formed during the large 1853 AD earthquake that struck the city of Cumaná. We suggest that this earthquake induced large submarine landslides that modified the topography of the gulf's entrance, which in turn promoted upwelling and the flow of open-marine water from the Cariaco Basin. The layer SiCL3 would then be the sediment load remobilized during this chain of events.

  5. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP, and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the parallel performance of the TS-AWP on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  6. Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region

    Science.gov (United States)

    Chaudhary, Chhavi; Sharma, Mukat Lal

    2017-12-01

    Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, because large events contribute a significant fraction of the total deformation, and these long-return-period, low-probability events are not easily captured by classical distributions. Generally, with a small catalogue, such large events follow a different distribution function from the smaller and intermediate events. It is thus especially important to use statistical methods that analyse as closely as possible the range of extreme values, i.e. the tail of the distribution, in addition to the main distribution. The generalised Pareto distribution family is widely used for modelling events that cross a specified threshold value; the Pareto, Truncated Pareto, and Tapered Pareto are special cases of this family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study we consider the Himalaya, whose ongoing orogeny generates large earthquakes and which is one of the most seismically active zones of the world. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. The estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution describes seismicity in these seismic source zones better than the other distributions considered in the present study.
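The contrast between the pure and tapered Pareto tails discussed above can be sketched with the Kagan-style parameterization for scaled seismic moment; the threshold, shape, and corner values below are illustrative only, not fitted to the Himalayan zones:

```python
import math

def ccdf_pareto(m, m_t, beta):
    """P(M > m) for a pure Pareto tail above threshold m_t (scaled moment units)."""
    return (m_t / m) ** beta

def ccdf_tapered_pareto(m, m_t, beta, m_c):
    """Kagan-style tapered Pareto: a power law multiplied by an exponential
    taper with corner moment m_c, which suppresses the extreme tail."""
    return (m_t / m) ** beta * math.exp((m_t - m) / m_c)

m_t, beta, m_c = 1.0, 0.66, 100.0   # hypothetical threshold, shape, and corner
m = 300.0
print(ccdf_pareto(m, m_t, beta))               # heavier power-law tail
print(ccdf_tapered_pareto(m, m_t, beta, m_c))  # taper suppresses extreme events
```

Both survival functions equal 1 at the threshold; beyond the corner, the tapered form decays much faster, which is why it avoids assigning unrealistic probability to arbitrarily large events.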

  7. Is colposcopy necessary at twelve months after large loop excision of the transformation zone? A clinical audit.

    Science.gov (United States)

    Thompson, Valerie; Marin, Raymond

    2013-12-01

    The purpose of this study was to review outcomes from LLETZ (large loop excision of the transformation zone) procedures carried out for high-grade cervical intraepithelial neoplasia (CIN), in particular findings at colposcopy, cytology and the HR-HPV (high-risk human papillomavirus) result, to assess whether colposcopy provides any additional information in the management of women at 12 months. We retrospectively analysed 252 patients who had a LLETZ procedure for a HSIL (high-grade squamous intraepithelial lesion) between January 2005 and December 2010. Eighty per cent of women who had a LLETZ procedure for HSIL were reviewed in our colposcopy clinic at 12 months after the procedure. Colposcopy at 12 months after LLETZ was documented as unsatisfactory for 30% of these women. The sensitivity of colposcopy at 12 months after LLETZ was 0.47, and the specificity was 0.95. Colposcopy examination is an insensitive tool for detection of persisting HPV-related change after excision of high-grade CIN. Its usefulness to investigate persistent or recurrent HSIL is further reduced by the high rate of unsatisfactory colposcopy examinations after a LLETZ procedure. Papanicolaou smear and HR-HPV tests may be adequate follow-up at 12 months after LLETZ for women at low risk of recurrence of HSIL. © 2013 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
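The reported sensitivity and specificity follow the usual 2x2 definitions. The counts below are hypothetical, chosen only so the formulas reproduce the reported 0.47 and 0.95 (the audit's raw table is not given here):

```python
def sensitivity(tp, fn):
    """True-positive rate: detected disease / all disease."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly cleared / all disease-free."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts for illustration only:
tp, fn, tn, fp = 8, 9, 190, 10
print(round(sensitivity(tp, fn), 2), round(specificity(tn, fp), 2))  # → 0.47 0.95
```

A sensitivity near 0.47 means colposcopy missed roughly half of persisting disease in this cohort, which is the quantitative basis for calling it "an insensitive tool".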

  8. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    Science.gov (United States)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000-year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporates clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. 
In some portions of the simulated earthquake history, events would
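The LTFM idea can be caricatured in a few lines under stated assumptions: linear stress loading, event probability proportional to stored stress, and a fixed partial release fraction. This toy is not the authors' implementation:

```python
import random

def simulate_ltfm(steps=5000, load_rate=1.0, k=2e-4, release_frac=0.6, seed=1):
    """Toy Long-Term Fault Memory sketch: stress loads at a constant rate,
    earthquake probability grows with stored stress, and each event releases
    only a fraction of it, so the probability does not reset to zero."""
    rng = random.Random(seed)
    stress, events = 0.0, []
    for t in range(steps):
        stress += load_rate
        if rng.random() < k * stress:       # hazard grows with stored stress
            events.append(t)
            stress *= (1.0 - release_frac)  # partial, not total, release
    return events

events = simulate_ltfm()
print(len(events))
```

Because each event leaves residual stress, event times tend to bunch after periods of high stored stress, which is the clustering behavior the model is designed to capture.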

  9. Characterization of Aftershock Sequences from Large Strike-Slip Earthquakes Along Geometrically Complex Faults

    Science.gov (United States)

    Sexton, E.; Thomas, A.; Delbridge, B. G.

    2017-12-01

    Large earthquakes often exhibit complex slip distributions and occur along non-planar fault geometries, resulting in variable stress changes throughout the region of the fault hosting aftershocks. To better discern the role of geometric discontinuities in aftershock sequences, we compare areas of enhanced and reduced Coulomb failure stress and mean stress for systematic differences in the time dependence and productivity of the aftershock sequences. Along strike-slip faults, releasing structures, including stepovers and bends, experience an increase in both Coulomb failure stress and mean stress during an earthquake, promoting fluid diffusion into the region and further failure. Conversely, Coulomb failure stress and mean stress decrease in restraining bends and stepovers of strike-slip faults, and fluids diffuse away from these areas, discouraging failure. We examine spatial differences in seismicity patterns along structurally complex strike-slip faults that have hosted large earthquakes, such as the 1992 Mw 7.3 Landers, the 2010 Mw 7.2 El Mayor-Cucapah, the 2014 Mw 6.0 South Napa, and the 2016 Mw 7.0 Kumamoto events. We characterize the behavior of these aftershock sequences with the Epidemic Type Aftershock-Sequence (ETAS) model. In this statistical model, the total occurrence rate of aftershocks induced by an earthquake is λ(t) = λ_0 + \sum_{i:t_i<t} K e^{α(M_i − M_c)} / (t − t_i + c)^p, where λ_0 is the background rate and K, α, c, and p are the standard ETAS productivity and Omori-decay parameters.
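The ETAS conditional intensity can be sketched directly from its standard form: a background rate plus a modified-Omori contribution from every prior event. Parameter values here are illustrative, not fitted to any of the sequences above:

```python
import math

def etas_rate(t, history, lam0=0.02, K=0.05, alpha=1.1, c=0.01, p=1.2, m_ref=3.0):
    """ETAS conditional intensity: background rate lam0 plus, for each prior
    event (t_i, M_i), a productivity term K*exp(alpha*(M_i - m_ref)) decaying
    as the modified Omori law (t - t_i + c)^(-p). Values are illustrative."""
    rate = lam0
    for t_i, m_i in history:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
    return rate

# e.g. a Landers-sized mainshock at t=0 followed by one moderate aftershock:
history = [(0.0, 7.3), (2.0, 5.1)]
print(etas_rate(2.5, history) > etas_rate(50.0, history))  # → True (Omori decay)
```

The exponential magnitude term is what makes a Mw 7-class mainshock dominate the triggered rate even long after smaller aftershocks have decayed.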

  10. Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years

    Science.gov (United States)

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake of 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D. the mean rate has increased slightly, but it is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support to the use of time-dependent renewal models, rather than assuming a random process, to forecast earthquakes, at least for the southern Hayward fault.
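The recurrence statistics quoted above can be checked in a few lines. The Gaussian renewal form and the elapsed time since 1868 used below are assumptions for illustration, not the paper's exact model:

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution of a normal via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mean_ri, sigma = 161.0, 65.0          # mean recurrence interval and 1-sigma scatter
print(round(sigma / mean_ri, 2))      # coefficient of variation → 0.4

# Conditional probability of an event in the next 30 yr, given an assumed
# ~142 yr elapsed since the 1868 earthquake, under a Gaussian renewal model:
elapsed = 142.0
p = (normal_cdf(elapsed + 30.0, mean_ri, sigma) - normal_cdf(elapsed, mean_ri, sigma)) \
    / (1.0 - normal_cdf(elapsed, mean_ri, sigma))
print(round(p, 2))
```

With these assumed inputs the conditional probability comes out close to the quoted 29%, illustrating how a renewal model converts the chronology into a forecast.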

  11. Latitude-Time Total Electron Content Anomalies as Precursors to Japan's Large Earthquakes Associated with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Jyh-Woei Lin

    2011-01-01

    The goal of this study is to determine whether principal component analysis (PCA) can be used to process latitude-time ionospheric TEC data on a monthly basis to identify earthquake-associated TEC anomalies. PCA is applied to latitude-time (mean-of-a-month) ionospheric total electron content (TEC) records collected from the Japan GEONET network to detect TEC anomalies associated with 18 earthquakes in Japan (M≥6.0) from 2000 to 2005. According to the results, PCA was able to discriminate clear TEC anomalies in the months in which all 18 earthquakes occurred. After reviewing months in which no M≥6.0 earthquakes occurred but geomagnetic storm activity was present, it is possible that the maximal principal eigenvalues PCA returned for these 18 earthquakes indicate earthquake-associated TEC anomalies. Previously, PCA has been used to discriminate earthquake-associated TEC anomalies recognized by other researchers, who found that a statistical association between large earthquakes and TEC anomalies could be established in the 5 days before earthquake nucleation; however, since PCA uses the characteristics of principal eigenvalues to determine earthquake-related TEC anomalies, it is possible to show that such anomalies existed earlier than this 5-day statistical window.
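The PCA idea can be sketched under stated assumptions: form the covariance of a latitude-time data matrix and track its maximal principal eigenvalue, which grows when a coherent anomaly spans the latitude bins. This uses synthetic data and pure-Python power iteration, not GEONET TEC:

```python
import random

def max_eigenvalue(matrix, iters=200):
    """Largest eigenvalue of a symmetric positive-semidefinite matrix
    via power iteration with max-abs normalization."""
    n = len(matrix)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

def principal_eigenvalue(data):
    """Maximal eigenvalue of the covariance of a latitude-time matrix
    (rows = latitude bins, columns = daily samples)."""
    n, m = len(data), len(data[0])
    means = [sum(row) / m for row in data]
    cov = [[sum((data[i][k] - means[i]) * (data[j][k] - means[j]) for k in range(m)) / m
            for j in range(n)] for i in range(n)]
    return max_eigenvalue(cov)

rng = random.Random(0)
quiet = [[rng.gauss(0, 1) for _ in range(30)] for _ in range(5)]
# inject a coherent anomaly across all latitude bins on a few "days":
anomalous = [[x + (5.0 if 10 <= k < 13 else 0.0) for k, x in enumerate(row)]
             for row in quiet]
print(principal_eigenvalue(anomalous) > principal_eigenvalue(quiet))  # → True
```

A coherent signal inflates the covariance in one direction, so the maximal principal eigenvalue separates anomalous months from quiet ones, which is the discriminant the study relies on.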

  12. Evaluating spatial and temporal relationships between an earthquake cluster near Entiat, central Washington, and the large December 1872 Entiat earthquake

    Science.gov (United States)

    Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian

    2017-01-01

    We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1–2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long‐lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long‐lived aftershock sequence can account for these observations.

  13. Earthquake response characteristics of large structure 'JOYO' deeply embedded in quaternary ground, (3)

    International Nuclear Information System (INIS)

    Yajima, Hiroshi; Sawada, Yoshihiro; Hanada, Kazutake; Sawada, Makoto.

    1987-01-01

    In order to examine the earthquake resistance of embedded structures and to clarify the embedment effect, earthquake observations of the large structure 'JOYO', which is deeply embedded in quaternary ground, have been carried out; the results are summarized as follows. (1) The amplification factor of the horizontal component at the ground surface is about 3 to 4 relative to the bedrock. On the structure, by contrast, no amplification is observed in the underground portion, and only slight amplification exists in the above-ground portion. (2) The transfer function of the structure has several predominant peaks, at frequencies of 4.3 Hz and 8.0 Hz, which coincide well with values obtained from forced-excitation tests. The transfer function between the basement and the ground surface is similar to that between the ground at basement level and the ground surface, suggesting that the behavior of the basement can be estimated from the underground earthquake motion. (3) In earthquake response analyses using S-R models, the calculated responses do not differ much among the models, whether or not the side-ground stiffness is considered, and mostly correspond with the observation data, provided that the underground earthquake motion at basement level is used as the input wave. Consequently, the behavior of such deeply embedded structures depends on how the input wave is set rather than on the modeling method, and it is very useful in design that the simplest model, without side-ground stiffness, can roughly represent the embedment effect. (author)

  14. The large earthquake on 29 June 1170 (Syria, Lebanon, and central southern Turkey)

    Science.gov (United States)

    Guidoboni, Emanuela; Bernardini, Filippo; Comastri, Alberto; Boschi, Enzo

    2004-07-01

    On 29 June 1170 a large earthquake hit a vast area of the Near Eastern Mediterranean, comprising the present-day territories of western Syria, central southern Turkey, and Lebanon. Although this was one of the strongest seismic events ever to hit Syria, no in-depth or specific studies have so far been available. Furthermore, the seismological literature (from 1979 until 2000) has offered only a partial summary of it, based mainly on Arabic sources alone; the resulting picture of the area of major effects was very incomplete, making the derived seismic parameters unreliable. This earthquake is in fact one of the most highly documented events of the medieval Mediterranean, owing both to the particular historical period in which it occurred (between the second and the third Crusades) and to the presence of the Latin states in the territory of Syria. Some 50 historical sources, written in eight different languages, have been analyzed: Latin (the major contributions), Arabic, Syriac, Armenian, Greek, Hebrew, Vulgar French, and Italian. A critical analysis of this extraordinary body of historical information has allowed us to obtain data on the effects of the earthquake at 29 locations, 16 of which were unknown in the previous scientific literature. As regards the seismic dynamics, this study has addressed the question of whether there was just one strong earthquake or more than one. In the former case, the parameters (Me 7.7 ± 0.22, epicenter, and fault length 126.2 km) were calculated. Some hypotheses are outlined concerning the seismogenic zones involved.

  15. A systematic investigation into b values prior to coming large earthquakes

    Science.gov (United States)

    Nanjo, K.; Yoshida, A.

    2017-12-01

    The Gutenberg-Richter law for the frequency-magnitude distribution of earthquakes is now well established in seismology. The b value, the slope of the distribution, is thought to reflect the heterogeneity of the seismogenic region (e.g. Mogi 1962) and the development of interplate coupling in subduction zones (e.g. Nanjo et al., 2012; Tormann et al. 2015). In the laboratory as well as in the Earth's crust, the b value is known to be inversely dependent on differential stress (Scholz 1968, 2015). In this context, the b value could serve as a stress meter to help locate asperities, the highly stressed patches in fault planes where large rupture energy is released (e.g. Schorlemmer & Wiemer 2005). However, it remains uncertain whether the b values of events prior to coming large earthquakes are always significantly low. To clarify this issue, we conducted a systematic investigation into b values prior to large earthquakes in the Japanese mainland. Since no physical definition of mainshock, foreshock, and aftershock exists, we simply investigated the b values of events with magnitudes larger than a lower-cutoff magnitude, Mc, prior to earthquakes equal to or larger than a threshold magnitude, Mth, where Mth>Mc. Schorlemmer et al. (2005) showed that the b value differs significantly among fault types, which is thought to reflect the dependence of fracture stress on fault type. We therefore classified fault motions into normal, strike-slip, and thrust types based on the mechanism solutions of earthquakes, and computed the b values of events associated with each fault motion separately. We found that the target events (M≥Mth) and the events that occurred prior to them both show a common systematic change in b: normal-faulting events have the highest b values, thrust events the lowest, and strike-slip events intermediate values. Moreover, we found that the b values for the prior events (M≥Mc) are significantly lower than the b values for the
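b values of the kind discussed above are commonly estimated with the Aki maximum-likelihood formula; the abstract does not state its estimator, so the sketch below (with a synthetic catalog, not the Japanese one) is an assumption about method:

```python
import math
import random

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b value, with the Utsu correction for
    magnitudes reported in bins of width dm; pass dm=0 for continuous values."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Synthetic catalog drawn from a Gutenberg-Richter law with b = 1.0:
rng = random.Random(42)
mc, b_true = 2.0, 1.0
beta = b_true * math.log(10.0)
mags = [mc - math.log(rng.random()) / beta for _ in range(5000)]
print(round(b_value_mle(mags, mc, dm=0.0), 2))  # close to 1.0
```

Because the estimator depends only on the mean magnitude above Mc, a drop in b before a large earthquake corresponds directly to a rise in the mean magnitude of the smaller events, the stress-meter signal the study looks for.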

  16. The 2011 Tohoku-oki Earthquake related to a large velocity gradient within the Pacific plate

    Science.gov (United States)

    Matsubara, Makoto; Obara, Kazushige

    2015-04-01

rays from the hypocenter around the coseismic region of the Tohoku-oki earthquake take off downward and pass through the Pacific plate. The landward low-V zone with a large anomaly corresponds to the western edge of the coseismic slip zone of the 2011 Tohoku-oki earthquake. The initial break point (hypocenter) is associated with the edge of a slightly low-V and low-Vp/Vs zone corresponding to the boundary of the low- and high-V zones. The trenchward low-V and low-Vp/Vs zone extending southwestward from the hypocenter may indicate the existence of a subducted seamount. The high-V zone and low-Vp/Vs zone might have accumulated the strain and resulted in the huge coseismic slip zone of the 2011 Tohoku earthquake. The low-V and low-Vp/Vs zone is a slight fluctuation within the high-V zone and might have acted as the initial break point of the 2011 Tohoku earthquake. References: Matsubara, M. and K. Obara (2011), The 2011 Off the Pacific Coast of Tohoku earthquake related to a strong velocity gradient with the Pacific plate, Earth Planets Space, 63, 663-667; Okada, Y., K. Kasahara, S. Hori, K. Obara, S. Sekiguchi, H. Fujiwara, and A. Yamamoto (2004), Recent progress of seismic observation networks in Japan: Hi-net, F-net, K-NET and KiK-net, Research News, Earth Planets Space, 56, xv-xxviii.

  17. Remotely Triggered Earthquakes Recorded by EarthScope's Transportable Array and Regional Seismic Networks: A Case Study Of Four Large Earthquakes

    Science.gov (United States)

    Velasco, A. A.; Cerda, I.; Linville, L.; Kilb, D. L.; Pankow, K. L.

    2013-05-01

Changes in field stress required to trigger earthquakes have been classified in two basic ways: static and dynamic triggering. Static triggering occurs when an earthquake that releases accumulated strain along a fault stress-loads a nearby fault. Dynamic triggering occurs when an earthquake is induced by the passage of seismic waves from a large mainshock located two or more fault lengths from the epicenter of the main shock. We investigate details of dynamic triggering using data collected from EarthScope's USArray and regional seismic networks located in the United States. Triggered events are identified using an optimized automated detector based on the ratio of short-term to long-term average (Antelope software). Following the automated processing, the flagged waveforms are individually analyzed, in both the time and frequency domains, to determine whether the increased detection rates correspond to local earthquakes (i.e., potentially remotely triggered aftershocks). Here, we show results from this automated scheme applied to data from four large, but characteristically different, earthquakes: Chile (Mw 8.8, 2010), Tohoku-Oki (Mw 9.0, 2011), Baja California (Mw 7.2, 2010), and Wells, Nevada (Mw 6.0, 2008). For each of our four mainshocks, the number of detections within the 10-hour time windows spans a large range (1 to over 200), and statistically >20% of the waveforms show evidence of anomalous signals following the mainshock. The results will provide a better understanding of the physical mechanisms involved in dynamic earthquake triggering and will help identify zones in the continental U.S. that may be more susceptible to dynamic earthquake triggering.
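The short-term/long-term average ratio behind such detectors can be sketched in a few lines. This is a generic textbook STA/LTA, not the Antelope implementation, and the window lengths in the usage note are illustrative:

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Ratio of short-term to long-term average of |x|, using trailing
    windows of nsta and nlta samples that end at the same sample."""
    x = np.abs(np.asarray(x, dtype=float))
    c = np.concatenate(([0.0], np.cumsum(x)))      # prefix sums for O(1) windows
    ends = np.arange(nlta, len(x) + 1)             # first index with a full LTA window
    sta = (c[ends] - c[ends - nsta]) / nsta
    lta = (c[ends] - c[ends - nlta]) / nlta
    return sta / np.maximum(lta, 1e-12)            # guard against zero LTA
```

With, say, nsta=10 and nlta=100 samples, the ratio stays near 1 in steady noise and jumps when a transient raises the short-term window faster than the long-term one; a detection is declared when the ratio crosses a tuned threshold.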

  18. Principles for selecting earthquake motions in engineering design of large dams

    Science.gov (United States)

    Krinitzsky, E.L.; Marcuson, William F.

    1983-01-01

This report gives a synopsis of the various tools and techniques used in selecting earthquake ground motion parameters for large dams. It presents 18 charts giving newly developed relations for acceleration, velocity, and duration versus site earthquake intensity for near- and far-field hard and soft sites and earthquakes having magnitudes above and below 7. The material for this report is based on procedures developed at the Waterways Experiment Station. Although these procedures are suggested primarily for large dams, they may also be applicable to other facilities. Because no standard procedure exists for selecting earthquake motions in engineering design of large dams, a number of precautions are presented to guide users. The selection of earthquake motions depends on which of two types of engineering analysis is performed. A pseudostatic analysis uses a coefficient usually obtained from an appropriate contour map, whereas a dynamic analysis uses either accelerograms assigned to a site or specified response spectra. Each type of analysis requires significantly different input motions. All selections of design motions must allow for the lack of representative strong-motion records, especially near-field motions from earthquakes of magnitude 7 and greater, as well as an enormous spread in the available data. Limited data must be projected and their spread bracketed in order to fill in the gaps and to assure that there will be no surprises. Because each site may have differing special characteristics in its geology, seismic history, attenuation, recurrence, interpreted maximum events, etc., an integrated approach gives the best results. Each part of the site investigation requires a number of decisions. In some cases, the decision to use a 'least work' approach may be suitable, simply assuming the worst of several possibilities and testing for it. Because there are no standard procedures to follow, multiple approaches are useful.
For example, peak motions at
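To make the pseudostatic/dynamic distinction concrete: the pseudostatic approach reduces the earthquake to a single horizontal seismic coefficient k_h applied as an inertial force. A minimal sketch for a rigid block on a planar sliding surface, using the standard limit-equilibrium formula (the parameter values in the test are illustrative, not from this report):

```python
import math

def pseudostatic_fs(weight, alpha_deg, phi_deg, kh, cohesion=0.0, area=1.0):
    """Factor of safety for a rigid block on a plane inclined at alpha,
    with friction angle phi, cohesion acting over a contact area, and a
    horizontal seismic coefficient kh (inertial force kh*W, destabilizing)."""
    a = math.radians(alpha_deg)
    p = math.radians(phi_deg)
    normal = weight * math.cos(a) - kh * weight * math.sin(a)   # effective normal force
    driving = weight * math.sin(a) + kh * weight * math.cos(a)  # downslope force
    resisting = cohesion * area + normal * math.tan(p)
    return resisting / driving
```

With kh = 0 and no cohesion this reduces to the familiar tan(phi)/tan(alpha); increasing kh lowers the factor of safety, which is exactly why the choice of coefficient (e.g., from a contour map) dominates a pseudostatic analysis.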

  19. Earthquakes drive large-scale submarine canyon development and sediment supply to deep-ocean basins.

    Science.gov (United States)

    Mountjoy, Joshu J; Howarth, Jamie D; Orpin, Alan R; Barnes, Philip M; Bowden, David A; Rowden, Ashley A; Schimel, Alexandre C G; Holden, Caroline; Horgan, Huw J; Nodder, Scott D; Patton, Jason R; Lamarche, Geoffroy; Gerstenberger, Matthew; Micallef, Aaron; Pallentin, Arne; Kane, Tim

    2018-03-01

Although the global flux of sediment and carbon from land to the coastal ocean is well known, the volume of material that reaches the deep ocean, the ultimate sink, and the mechanisms by which it is transferred are poorly documented. Using a globally unique data set of repeat seafloor measurements and samples, we show that the moment magnitude (Mw) 7.8 November 2016 Kaikōura earthquake (New Zealand) triggered widespread landslides in a submarine canyon, causing a powerful "canyon flushing" event and turbidity current that traveled >680 km along one of the world's longest deep-sea channels. These observations provide the first quantification of seafloor landscape change and large-scale sediment transport associated with an earthquake-triggered full canyon flushing event. The calculated interevent time of ~140 years indicates a canyon incision rate of 40 mm yr⁻¹, substantially higher than that of most terrestrial rivers, while synchronously transferring large volumes of sediment [850 metric megatons (Mt)] and organic carbon (7 Mt) to the deep ocean. These observations demonstrate that earthquake-triggered canyon flushing is a primary driver of submarine canyon development and material transfer from active continental margins to the deep ocean.

  20. Repetition of large stress drop earthquakes on Wairarapa fault, New Zealand, revealed by LiDAR data

    Science.gov (United States)

    Delor, E.; Manighetti, I.; Garambois, S.; Beaupretre, S.; Vitard, C.

    2013-12-01

We have acquired high-resolution LiDAR topographic data over most of the onland trace of the 120 km-long Wairarapa strike-slip fault, New Zealand. The Wairarapa fault broke in a large earthquake in 1855, and this historical earthquake is suggested to have produced up to 18 m of lateral slip at the ground surface. This would make the 1855 event remarkable, with a stress drop much higher than is commonly observed for other earthquakes worldwide. The LiDAR data allowed us to examine the ground surface morphology along the fault. A statistical analysis of the cumulative offsets per segment reveals that the alluvial morphology has well recorded, at every step along the fault, no more than a few (3-6) well-distinct cumulative slip values, all lower than 80 m. Plotted along the entire fault, the statistically defined cumulative slip values document four fairly continuous slip profiles that we attribute to the four most recent large earthquakes on the Wairarapa fault. The four slip profiles have a roughly triangular and asymmetric envelope shape that is similar to the coseismic slip distributions described for most large earthquakes worldwide. The four slip profiles have their maximum slip at the same place, in the northeastern third of the fault trace. The maximum slips vary from one event to another in the range 7-15 m; the most recent 1855 earthquake produced a maximum coseismic slip of 15 ± 2 m at the ground surface. Our results thus confirm that the Wairarapa fault breaks in remarkably large stress drop earthquakes. Those repeating large earthquakes share both similar (rupture length, slip-length distribution, location of maximum slip) and distinct (maximum slip amplitudes) characteristics. Furthermore, the seismic behavior of the Wairarapa fault is markedly different from that of nearby large strike-slip faults (Wellington, Hope).
The reasons for those differences in rupture behavior might reside in the intrinsic properties of the broken faults, especially

  1. Resilience of nuclear power plants to withstand large earthquakes: an overview

    International Nuclear Information System (INIS)

    Usmani, A.; Saudy, A.

    2013-01-01

    This paper provides an overview of the experience gained from seismic assessments, component testing, insights from probabilistic seismic hazard analysis (PSHA), seismic PRAs and performance of structures, systems and components (SSCs) in actual earthquakes many of which have been very large and exceeded the plant design basis. The recent Fukushima earthquake has focused attention of the nuclear industry to assess and make provisions to cope with the beyond design basis events that lead to station blackout, flooding and loss of heat sinks. Based on the review of available information, the paper discusses assessments and strategies being followed by various countries. Recommendations are made to focus attention to the most vulnerable SSCs in a nuclear power plant. (author)

  2. Resilience of nuclear power plants to withstand large earthquakes: an overview

    Energy Technology Data Exchange (ETDEWEB)

    Usmani, A.; Saudy, A., E-mail: Aman.Usmani@amec.com [AMEC NSS, Power and Process Americas, Toronto, Ontario (Canada)

    2013-07-01

    This paper provides an overview of the experience gained from seismic assessments, component testing, insights from probabilistic seismic hazard analysis (PSHA), seismic PRAs and performance of structures, systems and components (SSCs) in actual earthquakes many of which have been very large and exceeded the plant design basis. The recent Fukushima earthquake has focused attention of the nuclear industry to assess and make provisions to cope with the beyond design basis events that lead to station blackout, flooding and loss of heat sinks. Based on the review of available information, the paper discusses assessments and strategies being followed by various countries. Recommendations are made to focus attention to the most vulnerable SSCs in a nuclear power plant. (author)

  3. Far-Field Effects of Large Earthquakes on South Florida's Confined Aquifer

    Science.gov (United States)

    Voss, N. K.; Wdowinski, S.

    2012-12-01

The similarity between a seismometer record and a well hydraulic-head record during the passage of a seismic wave has long been documented. This is true even at large distances from earthquake epicenters. South Florida lacks a dense seismic array but does contain a comparably dense network of monitoring wells. The large spatial distribution of deep monitoring wells in South Florida provides an opportunity to study the variance of aquifer response to the passage of seismic waves. We conducted a preliminary study of hydraulic head data, provided by the South Florida Water Management District, from 9 deep wells in South Florida's confined Floridan Aquifer in response to 27 main-shock events (January 2010-April 2012) with magnitude 6.9 or greater. Coseismic hydraulic head response was observed in 7 of the 27 events. In order to determine what governs aquifer response to seismic events, earthquake parameters were compared for the 7 positive events. Seismic energy density (SED), an empirical relationship between distance and magnitude, was also used to compare the relative energy between the events at each well site. SED is commonly used as a parameter for establishing thresholds for hydrologic events in the near and intermediate fields. Our analysis yielded a threshold SED for well response in South Florida of 8 × 10⁻³ J m⁻³, which is consistent with other studies. Deep earthquakes, with SED above this threshold, did not appear to trigger hydraulic head oscillations. The amplitude of hydraulic head oscillations had no discernible relationship to SED levels. Preliminary results indicate a need for a modification of the SED equation to better accommodate depth in order to be of use in the study of hydrologic response in the far field. We plan to conduct a more comprehensive study incorporating a larger subset (~60) of wells in South Florida in order to further examine the spatial variance of aquifers to the passing of seismic waves as well as better confine the relationship
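A widely used empirical SED relation is Wang (2007), log10 r = 0.48 M − 0.33 log10 e − 1.4, with r in km and e in J/m³. Assuming that form (the coefficients should be checked against whichever relation the authors actually applied), inverting it for energy density at a well looks like:

```python
import math

def seismic_energy_density(magnitude, distance_km):
    """Empirical seismic energy density e (J/m^3), solved from the
    Wang (2007) relation: log10 r = 0.48 M - 0.33 log10 e - 1.4."""
    log_e = (0.48 * magnitude - 1.4 - math.log10(distance_km)) / 0.33
    return 10.0 ** log_e
```

Under this relation, an M 7.0 event at roughly 450 km epicentral distance delivers about 8 × 10⁻³ J/m³, i.e., a well at that distance sits right at the threshold quoted in the abstract; SED grows with magnitude and decays with distance, which is what makes it a convenient single-number comparator across events.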

  4. Foreshock patterns preceding large earthquakes in the subduction zone of Chile

    Science.gov (United States)

    Minadakis, George; Papadopoulos, Gerassimos A.

    2016-04-01

Some of the largest earthquakes on the globe occur in the subduction zone of Chile. Therefore, it is of particular interest to investigate foreshock patterns preceding such earthquakes. Foreshocks in Chile were recognized as early as 1960. In fact, the giant (Mw 9.5) earthquake of 22 May 1960, the largest ever instrumentally recorded, was preceded by 45 foreshocks in a time period of 33 h before the mainshock, while 250 aftershocks were recorded in a 33 h time period after the mainshock. Four foreshocks were larger than magnitude 7.0, including a magnitude 7.9 on May 21 that caused severe damage in the Concepcion area. More recently, Brodsky and Lay (2014) and Bedford et al. (2015) reported on foreshock activity before the 1 April 2014 large earthquake (Mw 8.2). However, 3-D foreshock patterns in space, time, and size have not been studied in depth so far. Since such studies require good seismic catalogues, we have investigated 3-D foreshock patterns only before the recent, very large mainshocks of 27 February 2010 (Mw 8.8), 1 April 2014 (Mw 8.2), and 16 September 2015 (Mw 8.4). Although our analysis does not depend on an a priori definition of short-term foreshocks, our interest focuses on the short-term time frame, that is, the last 5-6 months before the mainshock. The analysis of the 2014 event showed an excellent foreshock sequence consisting of an early, weak foreshock stage lasting about 1.8 months and a main, strong precursory foreshock stage that evolved in the last 18 days before the mainshock. During the strong foreshock period the seismicity concentrated around the mainshock epicenter in a critical area of about 65 km, mainly along the trench domain to the south of the mainshock epicenter. At the same time, the activity rate increased dramatically, the b-value dropped, and the mean magnitude increased significantly, while the level of seismic energy released also increased.
In view of these highly significant seismicity

  5. Impact of a Large San Andreas Fault Earthquake on Tall Buildings in Southern California

    Science.gov (United States)

    Krishnan, S.; Ji, C.; Komatitsch, D.; Tromp, J.

    2004-12-01

In 1857, an earthquake of magnitude 7.9 occurred on the San Andreas fault, starting at Parkfield and rupturing in a southeasterly direction for more than 300 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. The strong shaking in the basins due to this earthquake would have had a significant long-period content (2-8 s). If such motions were to happen today, they could have a serious impact on tall buildings in Southern California. In order to study the effects of large San Andreas fault earthquakes on tall buildings in Southern California, we use the finite source of the magnitude 7.9 2002 Denali fault earthquake in Alaska and map it onto the San Andreas fault with the rupture originating at Parkfield and proceeding southward over a distance of 290 km. Using the SPECFEM3D spectral element seismic wave propagation code, we simulate a Denali-like earthquake on the San Andreas fault and compute ground motions at sites located on a grid with a 2.5-5.0 km spacing in the greater Southern California region. We subsequently analyze 3D structural models of an existing tall steel building designed in 1984 as well as one designed according to the current building code (Uniform Building Code, 1997) subjected to the computed ground motion. We use a sophisticated nonlinear building analysis program, FRAME3D, that has the ability to simulate damage in buildings due to three-component ground motion. We summarize the performance of these structural models on contour maps of carefully selected structural performance indices. This study could benefit the city in laying out emergency response strategies in the event of an earthquake on the San Andreas fault, in undertaking appropriate retrofit measures for tall buildings, and in formulating zoning regulations for new construction.
In addition, the study would provide risk data associated with existing and new construction to insurance companies, real estate developers, and

  6. State Vector: A New Approach to Prediction of the Failure of Brittle Heterogeneous Media and Large Earthquakes

    Science.gov (United States)

    Yu, Huai-Zhong; Yin, Xiang-Chu; Zhu, Qing-Yong; Yan, Yu-Ding

    2006-12-01

The concept of a state vector stems from statistical physics, where it is usually used to describe activity patterns of a physical field in a coarse-grained manner. In this paper, we propose an approach in which the state vector is applied to describe quantitatively the damage evolution of brittle heterogeneous systems, and some interesting results are presented: prior to the macro-fracture of rock specimens and the occurrence of a strong earthquake, the evolution of four relevant scalar time series derived from the state vectors changed anomalously. As retrospective studies, some prominent large earthquakes that occurred in the Chinese Mainland (e.g., the M 7.4 Haicheng earthquake on February 4, 1975, and the M 7.8 Tangshan earthquake on July 28, 1976) were investigated. The results show considerable promise that time-dependent state vectors could serve as a kind of precursor for predicting earthquakes.

  7. A Comparison of Geodetic and Geologic Rates Prior to Large Strike-Slip Earthquakes: A Diversity of Earthquake-Cycle Behaviors?

    Science.gov (United States)

    Dolan, James F.; Meade, Brendan J.

    2017-12-01

Comparison of preevent geodetic and geologic rates in three large-magnitude (Mw = 7.6-7.9) strike-slip earthquakes reveals a wide range of behaviors. Specifically, geodetic rates of 26-28 mm/yr for the North Anatolian fault along the 1999 Mw = 7.6 Izmit rupture are ~40% faster than Holocene geologic rates. In contrast, geodetic rates of ~6-8 mm/yr along the Denali fault prior to the 2002 Mw = 7.9 Denali earthquake are only approximately half as fast as the latest Pleistocene-Holocene geologic rate of ~12 mm/yr. In the third example, where a sufficiently long pre-earthquake geodetic time series exists, the geodetic and geologic rates along the 2001 Mw = 7.8 Kokoxili rupture on the Kunlun fault are approximately equal at ~11 mm/yr. These results are not readily explicable with extant earthquake-cycle modeling, suggesting that they may instead be due to some combination of regional kinematic fault interactions, temporal variations in the strength of lithospheric-scale shear zones, and/or variations in local relative plate motion rate. Whatever the exact causes of these variable behaviors, these observations indicate that either the ratio of geodetic to geologic rates before an earthquake may not be diagnostic of the time to the next earthquake, as predicted by many rheologically based geodynamic models of earthquake-cycle behavior, or different behaviors characterize different fault systems in a manner that is not yet understood or predictable.

  8. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    Science.gov (United States)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
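At their simplest, forecasting statistics of this kind reduce to counting conditional frequencies in the synthetic catalog: how often is one large event followed by another within a given time window? A toy sketch of that bookkeeping (the catalog format and thresholds are illustrative, not RSQSim output):

```python
def conditional_followup_prob(events, m_trig, m_target, dt):
    """Estimate P(at least one M >= m_target event within dt after an
    M >= m_trig event) by counting over a catalog of (time, magnitude)
    pairs. A long synthetic catalog makes these frequencies stable."""
    triggers = [t for t, m in events if m >= m_trig]
    targets = [t for t, m in events if m >= m_target]
    hits = sum(1 for t0 in triggers
               if any(t0 < t <= t0 + dt for t in targets))
    return hits / len(triggers) if triggers else 0.0
```

Real comparisons against UCERF3 involve declustering choices, fault-section conditioning, and epistemic uncertainty in the simulator parameters, but the core quantity is this conditional frequency evaluated over millions of simulated events.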

  9. Social tension as precursor of large damaging earthquake: legend or reality?

    Science.gov (United States)

    Molchanov, O.

    2008-11-01

Using a case study of earthquake (EQ) activity and war conflicts in the Caucasus during the 1975-2002 time interval, and a correlation analysis of the global distribution of damaging EQs and war-related social tension during the 1901-2005 period, we conclude that: (1) there is a statistically reliable increase of social tension several years (or several months, in the case study) before damaging EQs; (2) there is an evident decrease of social tension several years after damaging EQs, probably due to society consolidation; (3) the preseismic effect is absent for large EQs in unpopulated areas; and (4) there is some factual background for the legendary belief in Almighty retribution for abnormal social behavior.

  10. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  11. Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources

    Science.gov (United States)

    Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.

    2017-09-01

We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and to model near-source ground motion correctly; (4) wave propagation and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures have unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source- and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is greatest. The basis of a physics-based approach is ground-motion synthesis derived from physics and an understanding of the earthquake process. This is an overview paper, and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. 
Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction.

  12. Utility of temporary aftershock warning system in the immediate aftermath of large damaging earthquakes

    International Nuclear Information System (INIS)

    Harben, P.E.; Jarpe, S.P.; Hunter, S.; Johnston, C.A.

    1993-01-01

An aftershock warning system (AWS) is a real-time warning system that is deployed immediately after a large damaging earthquake in the epicentral region of the main shock. The primary purpose of such a system is to warn rescue teams and workers within damaged structures of imminent destructive shaking. The authors have examined the utility of such a system (1) by evaluating historical data, and (2) by developing and testing a prototype system during the 1992 Landers, California, aftershock sequence. Analyzing historical data is important in determining when and where damaging aftershocks are likely to occur and the probable usefulness of an AWS in a particular region. As part of this study, they analyzed the spatial and temporal distribution of large (magnitude >5.0) aftershocks from earthquakes with magnitudes >6.0 that took place between 1942 and 1991 in California and Nevada. They found that one-quarter of these large aftershocks occurred from 2 days to 2 months after the main event, nearly one-half occurred within the first two days of the main event, and greater than one-half occurred within 20 km of the main shock's epicenter. They also reviewed a case study of the 1985 Mexico City earthquake, which showed that an AWS could have given Mexico City a warning of ~60 sec before the magnitude 7.6 aftershock that occurred 36 hr after the main event. They deployed a four-station prototype AWS near Landers after a magnitude 7.4 earthquake occurred on June 28, 1992. The aftershock data, collected from July 3-10, showed that the aftershocks in the vicinity of the four stations varied in magnitude from 3.0 to 4.4. Using a two-station detection criterion to minimize false alarms, this AWS reliably discriminated between smaller and larger aftershocks within 3 sec of the origin time of the events. This prototype could have provided 6 sec of warning to Palm Springs and 20 sec of warning to San Bernardino for aftershocks occurring in the main-shock epicentral region
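Warning times like those quoted above follow from simple travel-time arithmetic: damaging S waves reach a city at distance r after roughly r/v_S seconds, while the alert can only be issued once P waves reach a station near the source and are processed. A rough sketch (all velocities, distances, and delays here are illustrative assumptions, not the prototype's actual parameters):

```python
def warning_time_s(epi_dist_km, detect_dist_km=20.0, vp=6.0, vs=3.5, t_process=2.0):
    """Rough AWS warning time in seconds: S-wave arrival at the target
    (epi_dist/vs) minus the alert latency (P-wave travel to a station
    detect_dist away, plus processing/telemetry time)."""
    s_arrival = epi_dist_km / vs
    alert_latency = detect_dist_km / vp + t_process
    return s_arrival - alert_latency
```

Under these assumed numbers a city ~100 km from the aftershock gets about 23 s of warning, and the available time grows roughly linearly with distance, which is consistent with the pattern that San Bernardino (farther away) received more warning than Palm Springs.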

  13. Coupled large earthquakes in the Baikal rift system: Response to bifurcations in nonlinear resonance hysteresis

    Directory of Open Access Journals (Sweden)

    Anatoly V. Klyuchevskii

    2013-11-01

The current lithospheric geodynamics and tectonophysics in the Baikal rift are discussed in terms of a nonlinear oscillator with dissipation. The nonlinear oscillator model is applicable to the area because stress change shows up as quasi-periodic inharmonic oscillations at rifting attractor structures (RAS). The model is consistent with the space-time patterns of regional seismicity, in which coupled large earthquakes, proximal in time but distant in space, may be a response to bifurcations in nonlinear resonance hysteresis in a system of three oscillators corresponding to the rifting attractors. The space-time distribution of coupled MLH > 5.5 events has been stable for the period of instrumental seismicity, with the largest events occurring in pairs, one shortly after another, at the two ends of the rift system, and with couples of smaller events in the central part of the rift. The event couples appear as peaks of earthquake 'migration' rate with an approximately decadal periodicity. Thus the energy accumulated at RAS is released in coupled large events through the mechanism of nonlinear oscillators with dissipation. The new knowledge, with special focus on space-time rifting attractors and bifurcations in a system of nonlinear resonance hysteresis, may be of theoretical and practical value for earthquake prediction issues. Extrapolation of the results into the nearest future indicates the probability of such a bifurcation in the region, i.e., there is a growing risk of a pending M ≈ 7 coupled event within a few years.
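As a generic illustration of the model class invoked here, a damped, periodically forced nonlinear (Duffing-type) oscillator x″ + c x′ + k x + β x³ = F cos(ωt) can be integrated with a few lines of RK4. This is a textbook stand-in, not the authors' specific three-oscillator rift model, and every parameter value is illustrative:

```python
import math

def duffing_step(x, v, t, dt, c=0.2, k=1.0, beta=1.0, F=0.3, w=1.2):
    """One RK4 step of the forced Duffing oscillator
    x'' + c*x' + k*x + beta*x**3 = F*cos(w*t)."""
    def acc(x, v, t):
        return F * math.cos(w * t) - c * v - k * x - beta * x ** 3
    k1x, k1v = v, acc(x, v, t)
    k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v, t + 0.5 * dt)
    k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v, t + 0.5 * dt)
    k4x, k4v = v + dt * k3v, acc(x + dt * k3x, v + dt * k3v, t + dt)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return x, v
```

This class of system exhibits the ingredients the abstract appeals to: dissipation (c > 0 drains energy between forcing episodes), nonlinearity (the βx³ term), and, for suitable F and ω, resonance hysteresis in which the response amplitude jumps between branches as the forcing is varied.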

  14. Changes and challenges following the 1997 Colfiorito earthquake: the evolution of the use of the Internet for large seismic events

    Directory of Open Access Journals (Sweden)

    R. Camassi

    2008-06-01

The September 26, 1997 Central Italy earthquake was the first large Italian seismic event for which the Internet was intensively used to exchange and disseminate data, information, and news. The paper illustrates how national and international seismological institutions disseminated information about earthquakes ten years ago. The evolution of the web is sketched, and some features that may be of interest to the seismological community today are presented.

  15. Relationship between large slip area and static stress drop of aftershocks of inland earthquake :Example of the 2007 Noto Hanto earthquake

    Science.gov (United States)

    Urano, S.; Hiramatsu, Y.; Yamada, T.

    2013-12-01

The 2007 Noto Hanto earthquake (MJMA 6.9; hereafter referred to as the main shock) occurred at 0:41 (UTC) on March 25, 2007, at a depth of 11 km beneath the west coast of the Noto Peninsula, central Japan. The dominant slip of the main shock was on a reverse fault with a right-lateral slip component, and the large slip area extended from the hypocenter to the shallow part of the fault plane (Horikawa, 2008). The aftershocks are distributed not only in the small slip area but also in the large slip area (Hiramatsu et al., 2011). In this study, we estimate the static stress drops of aftershocks on the fault plane of the main shock. We discuss the relationship between the static stress drops of the aftershocks and the large slip area of the main shock by investigating the spatial pattern of the static stress drop values. We use the waveform data obtained by the group for the joint aftershock observations of the 2007 Noto Hanto Earthquake (Sakai et al., 2007). The sampling frequency of the waveform data is 100 Hz or 200 Hz. Focusing on similar aftershocks reported by Hiramatsu et al. (2011), we analyze static stress drops using the empirical Green's function (EGF) method (Hough, 1997) as follows. The smallest earthquake (MJMA ≥ 2.0) of each group of similar earthquakes is set as the EGF earthquake, and the largest earthquake (MJMA ≥ 2.5) as the target earthquake. We then deconvolve the waveform of the target earthquake with that of the EGF earthquake at each station and obtain the spectral ratio of the sources, which cancels the propagation effects (path and site effects). Following the procedure of Yamada et al. (2010), we finally estimate static stress drops for P- and S-waves from the corner frequencies of the spectral ratio using the model of Madariaga (1976). The estimated average value of the static stress drop is 8.2 ± 1.3 MPa (8.6 ± 2.2 MPa for P-waves and 7.8 ± 1.3 MPa for S-waves). These values approximately coincide with the static stress drops of aftershocks of other
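In the Madariaga (1976) circular-crack model referenced here, the source radius follows from the corner frequency as r = k·β/fc (with k ≈ 0.32 for P waves and ≈ 0.21 for S waves), and the static stress drop is Δσ = 7 M0 / (16 r³). A minimal sketch of that final step (the numerical inputs in the test are illustrative, not values from this study):

```python
def static_stress_drop(m0, fc, beta, wave="S"):
    """Madariaga (1976) circular-crack stress-drop estimate.

    m0   : seismic moment in N*m
    fc   : corner frequency in Hz (from the EGF spectral ratio)
    beta : shear-wave speed at the source in m/s
    Returns stress drop in Pa.
    """
    k = {"P": 0.32, "S": 0.21}[wave]   # Madariaga's corner-frequency coefficients
    r = k * beta / fc                  # source radius in m
    return 7.0 * m0 / (16.0 * r ** 3)
```

For a small aftershock with M0 = 10¹⁵ N·m, fc = 2 Hz, and β = 3500 m/s, the S-wave estimate is on the order of 9 MPa, the same scale as the ~8 MPa average reported above; note the strong fc³ sensitivity, which is why corner frequencies are measured from spectral ratios that cancel path and site effects.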

  16. Large Earthquakes at the Ibero-Maghrebian Region: Basis for an EEWS

    Science.gov (United States)

    Buforn, Elisa; Udías, Agustín; Pro, Carmen

    2015-09-01

    Large earthquakes (Mw > 6, Imax > VIII) occur at the Ibero-Maghrebian region, extending from a point (12ºW) southwest of Cape St. Vincent to Tunisia, with different characteristics depending on their location, which cause considerable damage and casualties. Seismic activity at this region is associated with the boundary between the lithospheric plates of Eurasia and Africa, which extends from the Azores Islands to Tunisia. The boundary at Cape St. Vincent, which has a clear oceanic nature in the westernmost part, experiences a transition from an oceanic to a continental boundary, with the interaction of the southern border of the Iberian Peninsula, the northern border of Africa, and the Alboran basin between them, corresponding to a wide area of deformation. Further to the east, the plate boundary recovers its oceanic nature following the northern coast of Algeria and Tunisia. The region has been divided into four zones with different seismic characteristics. From west to east, large earthquake occurrence, focal depth, total seismic moment tensor, and average seismic slip velocities for each zone along the region show the differences in seismic release of deformation. This must be taken into account in developing an EEWS for the region.

  17. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe.

    Directory of Open Access Journals (Sweden)

    William duPont

    Full Text Available We quantify the 'permanent' socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual--i.e., the Kobe economy without the earthquake--we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, which is located immediately to the east of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake.
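
The synthetic control method named in this abstract can be sketched in miniature: choose non-negative donor weights summing to 1 that best reproduce the treated unit's pre-event outcomes; the weighted donor average after the event serves as the counterfactual. The data below are invented, and the simple projected-gradient solver stands in for the full Abadie et al. optimizer used in studies like this one.

```python
import numpy as np

# Toy synthetic-control sketch with invented data (the study itself uses
# a panel of 1,719 Japanese municipalities).

rng = np.random.default_rng(0)
T_pre, T_post, n_donors = 10, 10, 5
donors = rng.normal(size=(T_pre + T_post, n_donors))
treated = 0.6 * donors[:, 0] + 0.4 * donors[:, 1]  # true weights (0.6, 0.4)
treated[T_pre:] -= 2.0                             # "earthquake" shock

def synth_weights(y_pre, X_pre, n_iter=5000):
    """Projected gradient descent on ||y - X w||^2 over the simplex."""
    w = np.full(X_pre.shape[1], 1.0 / X_pre.shape[1])
    lr = 1.0 / (2.0 * np.linalg.norm(X_pre, ord=2) ** 2)  # stable step size
    for _ in range(n_iter):
        w -= lr * 2.0 * X_pre.T @ (X_pre @ w - y_pre)
        w = np.clip(w, 0.0, None)
        w /= w.sum()              # cheap renormalization back onto the simplex
    return w

w = synth_weights(treated[:T_pre], donors[:T_pre])
gap = treated - donors @ w        # treated minus counterfactual
print(f"mean post-event gap: {gap[T_pre:].mean():.2f}")
```

The post-event gap between the treated series and its synthetic control is the estimated "permanent" effect, directly analogous to the income and population gaps reported for Kobe.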

  18. Incorporating Real-time Earthquake Information into Large Enrollment Natural Disaster Course Learning

    Science.gov (United States)

    Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.

    2010-12-01

    Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials into the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news-cycle where all but the most devastating events are quickly out of the public eye, the shelf life for an event is quite limited. To maximize the learning potential of these events requires that both authoritative information be available and course materials be generated as the event unfolds. Although many events such as hurricanes, flooding, and volcanic eruptions provide some precursory warnings, and thus one can prepare background materials to place the main event into context, earthquakes present a particularly confounding situation of providing no warning, but where context is critical to student learning. Attempting to implement real-time materials into large enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS’s National Earthquake Information Center (NEIC) to develop efficient means to incorporate their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute current information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground

  19. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    Science.gov (United States)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  20. Investigations of anomalous gravity signals prior to 71 large earthquakes based on 4-year-long superconducting gravimeter records

    Directory of Open Access Journals (Sweden)

    Dijin Wang

    2017-09-01

    Full Text Available Using continuous 1-Hz sampling time-series recorded by a SG (superconducting gravimeter) at Hsinchu station, Taiwan of China, we investigate the anomalous gravity signals prior to 71 large earthquakes with moment magnitude larger than 7.0 (Mw ≥ 7.0) that occurred between 1 Jan 2008 and 31 Dec 2011. We first evaluate the noise level of the SG records at Hsinchu (HS) station in the microseismic band from 0.05 Hz to 0.1 Hz by computing the PSD (power spectral density) of seismically quiet days selected based on the RMS of the records. Based on the analysis of the noise level and the spectral features of the seismically quiet SG records at HS station, we detect AGSs (anomalous gravity signals) prior to large earthquakes. We apply the HHT (Hilbert-Huang transform) to establish the TFEP (time-frequency-energy paradigms) and MS (marginal spectra) of the SG data before the large earthquakes; the characteristics of the TFEP and MS of the SG data during a typhoon event are also analyzed. By comparison with the spectral characteristics of the SG data during seismically quiet periods, three types of AGSs are found, and the occurrence rate of AGSs before the 71 earthquakes is given for cases with different epicentral distances and focal depths. The statistical results show that 56.3% of all the examined large earthquakes were preceded by AGSs; if we restrict the epicentral distance to less than 3500 km and the focal depth to less than 300 km, 75.3% of the examined large earthquakes can be associated with AGSs. In particular, we note that for all the large earthquakes that occurred in the Eurasian plate in the recent four years, precursory AGSs can always be found in the SG data recorded at HS station. Our investigations suggest that the AGSs prior to large earthquakes may be related to focal depth, epicentral distance and location.

  1. Rapid estimation of the moment magnitude of large earthquake from static strain changes

    Science.gov (United States)

    Itaba, S.

    2014-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred on March 11, 2011. The magnitude that the Japan Meteorological Agency announced just after the earthquake, based on seismic waves, was 7.9, considerably smaller than the actual value. On the other hand, using nine borehole strainmeters of the Geological Survey of Japan, AIST, we estimated a fault model with Mw 8.7 for the earthquake on the boundary between the Pacific and North American plates. This model can be estimated about seven minutes after the origin time, and five minutes after wave arrival. In order to grasp the magnitude of a great earthquake earlier, several methods have been suggested to reduce earthquake disasters, including tsunami (e.g., Ohta et al., 2012). Our simple method using strain steps is one promising approach for rapid estimation of the magnitude of great earthquakes.
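
One way to picture the strain-step idea: if a candidate fault geometry predicts a static strain step per unit seismic moment at each strainmeter, the observed steps yield a least-squares moment estimate and hence Mw. This is a hypothetical sketch, not the authors' actual inversion; the station values and the helper `mw_from_strain_steps` are invented for illustration.

```python
import numpy as np

# Hypothetical strain-step magnitude estimate: scale a unit-moment fault
# model to observed static strain steps. All numbers are invented.

def mw_from_strain_steps(observed, per_unit_moment):
    """Least-squares moment from strain steps, converted to Mw."""
    g = np.asarray(per_unit_moment, dtype=float)
    d = np.asarray(observed, dtype=float)
    m0 = (g @ d) / (g @ g)              # least-squares moment (N*m)
    return (np.log10(m0) - 9.1) / 1.5   # Hanks-Kanamori Mw

# Nine stations: predicted steps for a unit-moment (1 N*m) source, and
# synthetic observations for a hypothetical Mw 8.7 event
g = np.array([3.1, -1.2, 0.8, 2.4, -0.5, 1.9, 0.6, -2.2, 1.1]) * 1e-28
d = g * 10 ** (1.5 * 8.7 + 9.1)
print(f"estimated Mw: {mw_from_strain_steps(d, g):.1f}")
```

Because the static strain field is recorded essentially instantaneously, such an estimate is limited only by the time needed for the steps to emerge above noise, which is why strainmeters can beat seismic-wave magnitude estimates for very large events.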

  2. Slip in the 1857 and earlier large earthquakes along the Carrizo Plain, San Andreas Fault.

    Science.gov (United States)

    Zielke, Olaf; Arrowsmith, J Ramón; Grant Ludwig, Lisa; Akçiz, Sinan O

    2010-02-26

    The moment magnitude (Mw) 7.9 Fort Tejon earthquake of 1857, with an approximately 350-kilometer-long surface rupture, was the most recent major earthquake along the south-central San Andreas Fault, California. Based on previous measurements of its surface slip distribution, rupture along the approximately 60-kilometer-long Carrizo segment was thought to control the recurrence of 1857-like earthquakes. New high-resolution topographic data show that the average slip along the Carrizo segment during the 1857 event was 5.3 ± 1.4 meters, eliminating the core assumption for a linkage between Carrizo segment rupture and recurrence of major earthquakes along the south-central San Andreas Fault. Earthquake slip along the Carrizo segment may recur in earthquake clusters with cumulative slip of approximately 5 meters.

  3. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe

    Science.gov (United States)

    duPont IV, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the ‘permanent’ socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual—i.e., the Kobe economy without the earthquake—we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, which is located immediately to the east of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998

  4. Interpretation of interseismic deformations and the seismic cycle associated with large subduction earthquakes

    Science.gov (United States)

    Trubienko, Olga; Fleitout, Luce; Garaud, Jean-Didier; Vigny, Christophe

    2013-03-01

    The deformations of the overriding and subducting plates during the seismic cycle associated with large subduction earthquakes are modelled using 2D and 3D finite element techniques. A particular emphasis is put on the interseismic velocities and on the impact of the rheology of the asthenosphere. The distance over which the seismic cycle significantly perturbs the velocities depends upon the ratio of the viscosity in the asthenosphere to the period of the seismic cycle and can reach several thousand km for rheological parameters deduced from the first years of deformation after the Aceh earthquake. For the same early postseismic velocity, a Burger rheology of the asthenosphere implies a shorter postseismic phase and thus smaller interseismic velocities than a Maxwell rheology. A low viscosity wedge (LVW) modifies very significantly the predicted horizontal and vertical motions in the near and middle fields. In particular, with a LVW, the peak in vertical velocity at the end of the cycle is predicted to be no longer above the deep end of the locked section of the fault but further away, above the continentward limit of the LVW. The lateral viscosity variations linked to the presence at depth of the subducting slab affect the results substantially. The north-south interseismic compression predicted by this preliminary 2D model over more than 1500 km within the Sunda block is in good agreement with the pre-2004 velocities with respect to South-China inferred from GPS observations in Thailand, Malaysia and Indonesia. In Japan, before the Tohoku earthquake, the eastern part of northern Honshu was subsiding while the western part was uplifting. This transition from subsidence to uplift so far away from the trench is well fitted by the predictions from our models involving a LVW. Most of the results obtained here in a 2D geometry are shown to provide a good estimate of the displacements for fault segments of finite lateral extent, with a 3D spherical

  5. Systematic deficiency of aftershocks in areas of high coseismic slip for large subduction zone earthquakes

    Science.gov (United States)

    Wetzler, Nadav; Lay, Thorne; Brodsky, Emily E.; Kanamori, Hiroo

    2018-01-01

    Fault slip during plate boundary earthquakes releases a portion of the shear stress accumulated due to frictional resistance to relative plate motions. Investigation of 101 large [moment magnitude (Mw) ≥ 7] subduction zone plate boundary mainshocks with consistently determined coseismic slip distributions establishes that 15 to 55% of all master event–relocated aftershocks with Mw ≥ 5.2 are located within the slip regions of the mainshock ruptures and few are located in peak slip regions, allowing for uncertainty in the slip models. For the preferred models, cumulative deficiency of aftershocks within the central three-quarters of the scaled slip regions ranges from 15 to 45%, increasing with the total number of observed aftershocks. The spatial gradients of the mainshock coseismic slip concentrate residual shear stress near the slip zone margins and increase stress outside the slip zone, driving both interplate and intraplate aftershock occurrence near the periphery of the mainshock slip. The shear stress reduction in large-slip regions during the mainshock is generally sufficient to preclude further significant rupture during the aftershock sequence, consistent with large-slip areas relocking and not rupturing again for a substantial time. PMID:29487902

  6. The twelve colourful stones

    International Nuclear Information System (INIS)

    Doria, R.M.

    1983-01-01

    A dynamics with twelve colourful stones is created based on the concepts of gauge and colour. Different gauge fields are associated with the same group. A group of gauge invariant Lagrangians is established. A gauge invariant mass term is introduced. The physical insight is that the colourful stones serve as building blocks for quarks and leptons. (Author) [pt

  7. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    Science.gov (United States)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accurate prediction using a large database is penalized by a significant delay in the processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases to reduce the processing time in nearest neighbor search for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion parameters, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocentral distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall warning delivery time for EEW.
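
The lookup described above can be sketched with SciPy's KD tree: build the tree once offline over the feature vectors, then answer each real-time query with a k-nearest-neighbor search whose stored ground-motion values are averaged. The feature set, record count, and `predict_pga` helper below are placeholders, not the Gutenberg Algorithm's actual data or interface.

```python
import numpy as np
from scipy.spatial import cKDTree

# KD-tree nearest-neighbor sketch with random placeholder data: each
# database record is a filter-bank feature vector with an associated
# stored peak ground acceleration.

rng = np.random.default_rng(42)
n_records, n_features = 100_000, 9          # e.g. 9 frequency bands
database = rng.normal(size=(n_records, n_features))
pga_db = rng.lognormal(mean=-2.0, size=n_records)  # stored PGA values

tree = cKDTree(database)                    # built once, offline

def predict_pga(query_features, k=30):
    """Average PGA over the k nearest feature-space neighbors."""
    _, idx = tree.query(query_features, k=k)
    return pga_db[idx].mean()

query = rng.normal(size=n_features)
print(f"predicted PGA: {predict_pga(query):.3f} g")
```

A brute-force scan touches all records per query (O(n)), while the KD tree visits roughly O(log n) for low-dimensional features, which is the source of the reported 85% reduction in search time.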

  8. Time-scale invariant changes in atmospheric radon concentration and crustal strain prior to a large earthquake

    Directory of Open Access Journals (Sweden)

    Y. Kawada

    2007-01-01

    Full Text Available Prior to large earthquakes (e.g. the 1995 Kobe earthquake, Japan), an increase in the atmospheric radon concentration is observed, and the rate of increase follows a power-law of the time-to-earthquake (time-to-failure). This phenomenon corresponds to increased radon migration in the crust and exhalation into the atmosphere. An irreversible thermodynamic model including time-scale invariance clarifies that the increases in the pressure of the advecting radon and the permeability (hydraulic conductivity) of the crustal rocks are caused by power-law temporal changes in the crustal strain (or cumulative Benioff strain), which are associated with damage evolution such as microcracking or changing porosity. As a result, the radon flux and the atmospheric radon concentration can show a temporal power-law increase. The concentration of atmospheric radon can thus be used as a proxy for seismic precursory processes associated with crustal dynamics.
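
The power-law-of-time-to-failure behavior described here is straightforward to recover from data by log-log regression. The synthetic series below, including the exponent and noise level, is invented for illustration and is not drawn from the paper.

```python
import numpy as np

# Toy illustration: generate a radon-concentration rate growing as
# (t_f - t)^(-alpha) toward failure time t_f, then recover alpha by
# log-log regression. Parameters are invented.

rng = np.random.default_rng(1)
t_f, alpha = 100.0, 0.7
t = np.linspace(0.0, 95.0, 400)
rate = (t_f - t) ** (-alpha) * np.exp(rng.normal(0.0, 0.05, t.size))

# Fit log(rate) against log(t_f - t): the slope is -alpha
slope, _ = np.polyfit(np.log(t_f - t), np.log(rate), 1)
print(f"recovered exponent: {-slope:.2f}")
```

In practice t_f is unknown in advance, so fits of this kind scan candidate failure times and choose the one that best linearizes the data, which is what makes the signal a candidate precursor rather than a retrospective curiosity.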

  9. Vulnerability of Eastern Caribbean Islands Economies to Large Earthquakes: The Trinidad and Tobago Case Study

    Science.gov (United States)

    Lynch, L.

    2015-12-01

    The economies of most of the Anglophone Eastern Caribbean islands have tripled or quadrupled in size since independence from England. There has also been commensurate growth in human and physical development, as indicated by macro-economic indices such as the Human Development Index and Fixed Capital Formation. A significant proportion of the accumulated wealth is invested in buildings and infrastructure, which are highly susceptible to strong ground motion since the region is located along an active plate boundary. In the case of Trinidad and Tobago, Fixed Capital Formation accumulated since 1980 amounts to almost US$200 billion. Recent studies have indicated that this twin-island state is at significant risk from several seismic sources, both on land and offshore. To effectively mitigate the risk it is necessary to prescribe long-term measures such as the development and implementation of building codes and standards, structural retrofitting, land use planning, preparedness planning and risk transfer mechanisms. The record has shown that Trinidad and Tobago has been slow in prescribing such measures, which has consequently compounded its vulnerability to large earthquakes. This assessment reveals that a large (magnitude 7+) event on land or an extreme (magnitude 8+) event could result in losses of up to US$28 billion, and that current risk transfer measures will cover less than ten percent of such losses.

  10. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    Science.gov (United States)

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.

  11. On the problem of earthquake correlation in space and time over large distances

    Science.gov (United States)

    Georgoulas, G.; Konstantaras, A.; Maravelakis, E.; Katsifarakis, E.; Stylios, C. D.

    2012-04-01

    A quick examination of geographical maps with the epicenters of earthquakes marked on them reveals a strong tendency of these points to form compact clusters of irregular shapes and various sizes, often intersecting other clusters. According to [Saleur et al. 1996] "earthquakes are correlated in space and time over large distances". This implies that seismic sequences are not formed randomly but follow a spatial pattern with consequent triggering of events. Seismic cluster formation is believed to be due to underlying geological natural hazards, which: a) act as the energy storage elements of the phenomenon, and b) tend to form a complex network of numerous interacting faults [Vallianatos and Tzanis, 1998]. Therefore it is imperative to "isolate" meaningful structures (clusters) in order to mine information regarding the underlying mechanism and, at a second stage, to test the causality effect implied by what is known as the Domino theory [Burgman, 2009]. Ongoing work by Konstantaras et al. 2011 and Katsifarakis et al. 2011 on clustering seismic sequences in the area of the Southern Hellenic Arc, and progressively throughout the Greek vicinity and the entire Mediterranean region, based on an explicit segmentation of the data by both their temporal and spatial stamps and following modelling assumptions proposed by Dobrovolsky et al. 1989 and Drakatos et al. 2001, managed to identify geologically validated seismic clusters. These results suggest that the time component should be included as a dimension during the clustering process, as seismic cluster formation is dynamic and the emerging clusters propagate in time. Another issue that has not yet been investigated explicitly is the role of the magnitude of each seismic event. In other words, the major seismic event should be treated differently compared to pre- or post-seismic sequences. Moreover, the sometimes irregular and elongated shapes that appear on geophysical maps mean that clustering algorithms

  12. Irregularities in Early Seismic Rupture Propagation for Large Events in a Crustal Earthquake Model

    Science.gov (United States)

    Lapusta, N.; Rice, J. R.; Rice, J. R.

    2001-12-01

    We study early seismic propagation of model earthquakes in a 2-D model of a vertical strike-slip fault with depth-variable rate and state friction properties. Our model earthquakes are obtained in fully dynamic simulations of sequences of instabilities on a fault subjected to realistically slow tectonic loading (Lapusta et al., JGR, 2000). This work is motivated by results of Ellsworth and Beroza (Science, 1995), who observe that for many earthquakes, far-field velocity seismograms during initial stages of dynamic rupture propagation have irregular fluctuations which constitute a "seismic nucleation phase". In our simulations, we find that such irregularities in velocity seismograms can be caused by two factors: (1) rupture propagation over regions of stress concentrations and (2) partial arrest of rupture in neighboring creeping regions. As rupture approaches a region of stress concentration, it sees increasing background stress and its moment acceleration (to which far-field velocity seismograms are proportional) increases. After the peak in stress concentration, the rupture sees decreasing background stress and moment acceleration decreases. Hence a fluctuation in moment acceleration is created. If rupture starts sufficiently far from a creeping region, then partial arrest of rupture in the creeping region causes a decrease in moment acceleration. As the other parts of rupture continue to develop, moment acceleration then starts to grow again, and a fluctuation again results. Other factors may cause the irregularities in moment acceleration, e.g., phenomena such as branching and/or intermittent rupture propagation (Poliakov et al., submitted to JGR, 2001) which we have not studied here. Regions of stress concentration are created in our model by arrest of previous smaller events as well as by interactions with creeping regions. One such region is deep in the fault zone, and is caused by the temperature-induced transition from seismogenic to creeping

  13. What caused a large number of fatalities in the Tohoku earthquake?

    Science.gov (United States)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw 9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized that this was the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km² across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 minutes or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect: expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings: the first tsunami warnings were too small compared with the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  14. The twelve colourful stones

    International Nuclear Information System (INIS)

    Doria, R.M.

    1984-01-01

    The gauge symmetry is extended. Different matter and gauge fields are associated with the same group. A group of gauge invariant Lagrangians is established. A gauge invariant mass term is introduced. A massive Yang-Mills theory is obtained. A dynamics with twelve colourful stones is created based on the concepts of gauge and colour. Structures identified as quarks and leptons are generated. A discussion about the meaning of colour is presented. (Author) [pt

  15. Numerical modeling of the deformations associated with large subduction earthquakes through the seismic cycle

    Science.gov (United States)

    Fleitout, L.; Trubienko, O.; Garaud, J.; Vigny, C.; Cailletaud, G.; Simons, W. J.; Satirapod, C.; Shestakov, N.

    2012-12-01

    A 3D finite element code (Zebulon-Zset) is used to model deformations through the seismic cycle in the areas surrounding the last three large subduction earthquakes: Sumatra, Japan and Chile. The mesh, featuring a broad spherical-shell portion with a viscoelastic asthenosphere, is refined close to the subduction zones. The model is constrained by 6 years of postseismic data in the Sumatra area and over a year of data for Japan and Chile, plus preseismic data in the three areas. The coseismic slip on the subduction plane is inverted from the coseismic displacements using the finite element program and provides the initial stresses. The predicted horizontal postseismic displacements depend upon the thicknesses of the elastic plate and of the low-viscosity asthenosphere. Non-dimensionalized by the coseismic displacements, they present an almost uniform value between 500 km and 1500 km from the trench for elastic plates 80 km thick. The time evolution of the velocities is a function of the creep law (Maxwell, Burger or power-law creep). Moreover, the forward models predict a sizable far-field subsidence, also with a spatial distribution which varies with the geometry of the asthenosphere and lithosphere. Slip on the subduction interface does not induce such a subsidence. The observed horizontal velocities, divided by the coseismic displacement, present a similar pattern as a function of time and distance from the trench for the three areas, indicative of similar lithospheric and asthenospheric thicknesses and asthenospheric viscosity. This pattern cannot be fitted with power-law creep in the asthenosphere but indicates a lithosphere 60 to 90 km thick and an asthenosphere of thickness of the order of 100 km with a Burger rheology represented by a Kelvin-Voigt element with a viscosity of 3 × 10¹⁸ Pa s and μKelvin = μelastic/3. A second Kelvin-Voigt element with very limited amplitude may explain some characteristics of the short time-scale signal. The postseismic subsidence is

  16. Analog earthquakes

    International Nuclear Information System (INIS)

    Hofmann, R.B.

    1995-01-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository

  17. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    Science.gov (United States)

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    Recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy between pre-seismic and post-seismic estimates of the recurrence interval of large earthquakes based on slip rates and paleoseismological results. Post-seismic trenches showed that the central Longmen Shan fault zone has probably undergone events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
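The moment-balance logic behind such an estimate can be sketched as follows. This illustration uses the standard Hanks-Kanamori moment-magnitude relation rather than the paper's characteristic-slip model, so it yields roughly 3000 yr instead of the published 3900 ± 400 yr:

```python
def moment_from_mw(mw):
    """Hanks & Kanamori (1979) relation: seismic moment M0 in N*m."""
    return 10 ** (1.5 * mw + 9.05)

def recurrence_interval(mw, moment_rate):
    """Years needed to accumulate the moment of a characteristic event
    at a given moment accumulation rate (N*m per year)."""
    return moment_from_mw(mw) / moment_rate

# Moment rate from the abstract; magnitude of the 2008 Wenchuan event:
t = recurrence_interval(7.9, 2.7e17)
print(f"Simple moment-balance recurrence: ~{t:.0f} yr")
```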

  18. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    Directory of Open Access Journals (Sweden)

    Junjie Ren

    2013-01-01

    Full Text Available Recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy between pre-seismic and post-seismic estimates of the recurrence interval of large earthquakes based on slip rates and paleoseismological results. Post-seismic trenches showed that the central Longmen Shan fault zone has probably undergone events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.

  19. The large 1956 earthquake in the South Aegean: Macroseismic field configuration, faulting, and neotectonics of Amorgos Island

    Science.gov (United States)

    Papadopoulos, Gerassimos A.; Pavlides, Spyros B.

    1992-10-01

    New field observations of the seismic intensity distribution of the large (Ms = 7.4) South Aegean (Amorgos) earthquake of 9 July 1956 are presented. Interpretations based on local ground conditions, structural properties of buildings and peculiarities of the rupture process lead to a re-evaluation of the macroseismic field configuration. This, together with the aftershock epicentral distribution, defines quite well the earthquake rupture zone, which trends NE-SW and coincides with the Amorgos-Astypalea trough. The lateral extent of the rupture zone, however, is about 40% smaller than that predicted for Aegean earthquakes of Ms = 7.4. This discrepancy could be attributed to sea-bottom topography changes, which seem to control the rupture terminations, and to a relatively high stress drop with respect to other Aegean earthquakes. Fault plane solutions obtained by several authors indicate either mainly normal faulting with a significant right-lateral strike-slip component or predominantly strike-slip motion. The neotectonism of Amorgos Island, based on new field observations, aerial photograph analysis and fault mechanisms, is consistent with the dip-slip interpretation. The neotectonic master fault of Amorgos and the 1956 seismic faulting appear to belong to the same tectonic phase (NE-SW strike and a southeasterly dip). However, the significant right-lateral strike-slip component supports the idea that the Amorgos region deviates from the simple description of pure extension in back-arc conditions.
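For context on the "40% smaller than predicted" statement, a generic magnitude-rupture-length regression can be sketched; the Wells and Coppersmith (1994) subsurface-rupture-length relation used below is an assumption and may differ from the Aegean-specific scaling the authors applied:

```python
def rupture_length_km(m):
    """Wells & Coppersmith (1994) subsurface rupture length,
    all slip types: log10(RLD) = -2.44 + 0.59*M."""
    return 10 ** (-2.44 + 0.59 * m)

predicted = rupture_length_km(7.4)
observed = 0.6 * predicted   # abstract: observed extent ~40% smaller
print(f"predicted ~{predicted:.0f} km, observed ~{observed:.0f} km")
```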

  20. Spatiotemporal seismic velocity change in the Earth's subsurface associated with large earthquake: contribution of strong ground motion and crustal deformation

    Science.gov (United States)

    Sawazaki, K.

    2016-12-01

    It is well known that the seismic velocity of the subsurface medium changes after a large earthquake. The cause of the velocity change is roughly attributed to strong ground motion (dynamic strain change), crustal deformation (static strain change), and fracturing around the fault zone. Several studies have revealed that velocity reductions of up to several percent are concentrated at depths shallower than several hundred meters. The amount of velocity reduction correlates well with the intensity of strong ground motion, which indicates that the strong motion is the primary cause of the velocity reduction. Although some studies have proposed contributions of coseismic static strain change and fracturing around the fault zone to the velocity change, separating their contributions from the site-related velocity change is usually difficult. Velocity recovery after a large earthquake is also widely observed. The recovery process is generally proportional to the logarithm of the lapse time, which is similar to the behavior of "slow dynamics" recognized in laboratory experiments. The time scale of the recovery is usually months to years in field observations, while it is several hours in laboratory experiments. Although the factor that controls the recovery speed is not well understood, cumulative strain change due to post-seismic deformation, migration of underground water, and mechanical and chemical reactions on crack surfaces are candidates. In this study, I summarize several observations that revealed the spatiotemporal distribution of seismic velocity changes due to large earthquakes; in particular, I focus on the case of the M9.0 2011 Tohoku earthquake. Combining seismograms of Hi-net (high-sensitivity) and KiK-net (strong motion), geodetic records of GEONET and seafloor GPS/Acoustic ranging, I investigate the contribution of strong ground motion and crustal deformation to the velocity change associated with the Tohoku earthquake, and propose a gross view of
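The log-linear recovery described above is often quantified by fitting dv/v against log10 of lapse time. A minimal sketch on synthetic data (the numbers are invented for illustration, not observations):

```python
import numpy as np

# Synthetic dv/v recovery (percent) versus lapse time t (days):
# dv(t) = dv0 + r*log10(t), the log-linear healing seen in field data.
t = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
dv = -1.0 + 0.25 * np.log10(t)

# Recover the healing rate by least squares in log10(t):
r, dv0 = np.polyfit(np.log10(t), dv, 1)
print(f"healing rate ~ {r:.2f} %/decade, initial drop ~ {dv0:.2f} %")
```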

  1. Large early afterslip following the 1995/10/09 Mw 8 Jalisco, Mexico earthquake

    Science.gov (United States)

    Hjörleifsdóttir, Vala; Sánchez Reyes, Hugo Samuel; Ruiz-Angulo, Angel; Ramirez-Herrera, Maria Teresa; Castillo-Aja, Rosío; Krishna Singh, Shri; Ji, Chen

    2017-04-01

    The behaviour of slip close to the trench during earthquakes is not well understood: some earthquakes break only the near-trench area, most break only the deeper part of the fault interface, and a few break both simultaneously. Observations of multiple earthquakes breaking different down-dip segments of the same subduction segment are rare. The 1995 Mw 8 Jalisco earthquake seems to have broken the near-trench area, as evidenced by anomalously small accelerations for its size, the excitation of a tsunami, a small Ms relative to Mw, and a small ratio between the radiated energy and moment (Pacheco et al. 1997). However, slip models obtained using GPS campaign data indicate slip near shore (Melbourne et al. 1997, Hutton et al. 2001). We invert teleseismic P- and S-waves, Rayleigh and Love waves, as well as the static offsets measured by campaign GPS, to obtain the slip distribution on the fault as a function of time during the earthquake. We confirm that the slip models obtained using only seismic data are most consistent with slip near the trench, whereas those obtained using only GPS data are consistent with slip closer to the coast. We find remarkable similarity with models of other researchers (Hutton et al. 2001, Mendoza et al. 1999) using the same datasets, even though the slip distributions from each dataset are almost complementary. To resolve this inconsistency we jointly invert the datasets. However, we find that the joint inversions do not produce adequate fits to both seismic and GPS data. Furthermore, we model tsunami observations on the coast to further constrain the plausible slip models. Assuming that the discrepancy stems from slip that occurred within the time window between the campaign GPS measurements, but not during the earthquake, we model the residual displacements by very localised slip on the interface down-dip from the coseismic slip. Aftershocks (Pacheco et al. 1997) align mostly between the non

  2. Real-Time Magnitude Characterization of Large Earthquakes Using the Predominant Period Derived From 1 Hz GPS Data

    Science.gov (United States)

    Psimoulis, Panos A.; Houlié, Nicolas; Behr, Yannik

    2018-01-01

    Earthquake early warning (EEW) systems' performance is driven by the trade-off between the need for a rapid alert and the accuracy of each solution. A challenge for many EEW systems has been magnitude saturation for large events (MW > 7) and the resulting underestimation of seismic moment magnitude. In this study, we test the performance of high-rate (1 Hz) GPS, based on seven seismic events, to evaluate whether long-period ground motions can be measured well enough to reliably infer earthquake predominant periods. We show that high-rate GPS data allow the computation of a GPS-based predominant period (τg) to estimate lower bounds for the magnitude of earthquakes and to distinguish between large (MW > 7) and great (MW > 8) events, thus extending the capability of EEW systems to larger events. We also identify the impact of different values of the smoothing factor α on the τg results, and how the sampling rate and the computation process differentiate τg from the commonly used τp.
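Predominant periods of this kind are commonly computed with a recursive smoothing scheme (the τp algorithm familiar from seismic EEW). Applying it to a displacement series, as below, is our sketch of a τg-style estimate; the smoothing factor α and the test signal are assumed values, not the paper's:

```python
import math

def predominant_period(x, dt=1.0, alpha=0.99):
    """Recursive predominant-period estimate (tau_p-style):
    X_i = a*X_{i-1} + x_i^2, D_i = a*D_{i-1} + (dx/dt)_i^2,
    tau_i = 2*pi*sqrt(X_i/D_i). Here x is a displacement series,
    sketching a tau_g-like quantity from 1 Hz GPS."""
    X = D = 0.0
    tau = []
    prev = x[0]
    for xi in x[1:]:
        dxdt = (xi - prev) / dt
        X = alpha * X + xi * xi
        D = alpha * D + dxdt * dxdt
        tau.append(2 * math.pi * math.sqrt(X / D) if D > 0 else 0.0)
        prev = xi
    return tau

# A pure 20 s sinusoid sampled at 1 Hz should yield tau ~ 20 s:
sig = [math.sin(2 * math.pi * t / 20.0) for t in range(200)]
print(f"tau_g ~ {predominant_period(sig)[-1]:.1f} s")
```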

  3. Distribution of large-earthquake input energy in viscous damped outrigger structures

    NARCIS (Netherlands)

    Morales Beltran, M.G.; Turan, Gursoy; Yildirim, Umut

    2017-01-01

    This article provides an analytical framework to assess the distribution of seismic energy in outrigger structures equipped with viscous dampers. The principle of damped outriggers for seismic control applications rests on the assumption that the total earthquake energy will be absorbed by the

  4. Aftershocks of Chile's Earthquake for an Ongoing, Large-Scale Experimental Evaluation

    Science.gov (United States)

    Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea

    2011-01-01

    Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters. This is because extremely rare shocks may not make it worthwhile to account for them in the design. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…

  5. Does paleoseismology forecast the historic rates of large earthquakes on the San Andreas fault system?

    Science.gov (United States)

    Biasi, Glenn; Scharer, Katherine M.; Weldon, Ray; Dawson, Timothy E.

    2016-01-01

    The 98-year open interval since the most recent ground-rupturing earthquake in the greater San Andreas boundary fault system would not be predicted by the quasi-periodic recurrence statistics from paleoseismic data. We examine whether the current hiatus could be explained by uncertainties in earthquake dating. Using seven independent paleoseismic records, 100-year intervals may have occurred circa 1150, 1400, and 1700 AD, but they occur in a third or less of sample records drawn at random. A second method, sampling from dates conditioned on the existence of a gap of varying length, suggests century-long gaps occur 3-10% of the time. A combined record with more sites would lead to lower probabilities. Systematic data over-interpretation is considered an unlikely explanation. Instead, some form of non-stationary behaviour seems required, perhaps through long-range fault interaction. Earthquake occurrence since 1000 AD is not inconsistent with the long-term cyclicity suggested by long runs of earthquake simulators.
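The date-sampling approach described here can be sketched with a small Monte Carlo; the event dates and uniform uncertainty bounds below are synthetic stand-ins, not the paper's seven paleoseismic records:

```python
import random

def gap_probability(site_dates, gap=100, trials=10000, seed=1):
    """Fraction of Monte Carlo draws in which the merged record of
    sampled event dates contains an open interval >= `gap` years.
    site_dates: list of (earliest, latest) bounds per dated event."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        dates = sorted(rng.uniform(lo, hi) for lo, hi in site_dates)
        if any(b - a >= gap for a, b in zip(dates, dates[1:])):
            hits += 1
    return hits / trials

# Synthetic events every ~80 yr with +/-30 yr dating uncertainty:
events = [(y - 30, y + 30) for y in range(1000, 1801, 80)]
print(f"P(>=100 yr gap) ~ {gap_probability(events):.2f}")
```

With real records the inputs would be the dating PDFs for each event; the synthetic bounds here only demonstrate the counting logic.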

  6. Effects of deep basins on structural collapse during large subduction earthquakes

    Science.gov (United States)

    Marafi, Nasser A.; Eberhard, Marc O.; Berman, Jeffrey W.; Wirth, Erin A.; Frankel, Arthur

    2017-01-01

    Deep sedimentary basins are known to increase the intensity of ground motions, but this effect is only implicitly considered in the seismic hazard maps used in U.S. building codes. The basin amplification of ground motions from subduction earthquakes is particularly important in the Pacific Northwest, where the hazard at long periods is dominated by such earthquakes. This paper evaluates the effects of basins on spectral accelerations, ground-motion duration, spectral shape, and structural collapse using subduction earthquake recordings from basins in Japan that have depths similar to the Puget Lowland basin. For three of the Japanese basins and the Puget Lowland basin, the spectral accelerations were amplified by a factor of 2 to 4 for periods above 2.0 s. The long duration of subduction earthquakes, combined with the effects of basins on spectral shape, lowers the spectral accelerations at collapse for a set of building archetypes relative to other ground motions. For the hypothetical case in which these motions represent the entire hazard, the archetypes would need to increase their strength by up to a factor of 3.3 to compensate for these effects.

  7. Twelve years at DESY

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    As reported in our previous issue (page 27), on 28 February Volker Soergel stepped down after serving as Chairman of the Board of the DESY Laboratory in Hamburg since January 1981, when the previous chairman, Herwig Schopper, moved to become Director General of CERN. DESY is now headed by Bjorn Wiik. During the twelve years of Soergel's mandate, DESY substantially evolved and progressed. Dominating the landscape was the big HERA electron-proton collider - the world's first - proposed, approved, constructed and commissioned under Soergel's leadership. As well as pioneering electron-proton collisions, HERA also broke new ground in international collaboration. At the approval of the project by the German government, it had already been made clear that both the machine and its experiments had to be built with full international cooperation, using material contributions from foreign institutes. With the difficult task of transforming these requirements into hard reality, Volker Soergel succeeded brilliantly. The 'HERA model', with interested countries pledging contributions in equipment and/or manpower, established a new route to major project involvement. For HERA, the substantial Italian contribution, organized by Antonino Zichichi, was vital to the success of the project

  8. Large-scale risk assessment of polycyclic aromatic hydrocarbons in shoreline sediments from Saudi Arabia: Environmental legacy after twelve years of the Gulf war oil spill

    Energy Technology Data Exchange (ETDEWEB)

    Bejarano, Adriana C., E-mail: ABejarano@researchplanning.co [Research Planning Inc., 1121 Park St., Columbia, SC 29201 (United States); Michel, Jacqueline [Research Planning Inc., 1121 Park St., Columbia, SC 29201 (United States)

    2010-05-15

    A large-scale assessment of polycyclic aromatic hydrocarbons (PAHs) from the 1991 Gulf War oil spill was performed for 2002-2003 sediment samples (n = 1679) collected from habitats along the shoreline of Saudi Arabia. Benthic sediment toxicity was characterized using the Equilibrium Partitioning Sediment Benchmark Toxic Unit approach for 43 PAHs (ESBTU(FCV,43)). Samples were assigned to risk categories according to ESBTU(FCV,43) values: no-risk (≤1), low (>1-≤2), low-medium (>2-≤3), medium (>3-≤5) and high-risk (>5). Sixty-seven percent of samples had ESBTU(FCV,43) > 1, indicating potential adverse ecological effects. Sediments from the 0-30 cm layer from tidal flats, and the >30-<60 cm layer from heavily oiled halophytes and mangroves, had a high frequency of high-risk samples. No-risk samples were characterized by chrysene enrichment and depletion of lighter-molecular-weight PAHs, while high-risk samples showed little oil weathering and PAH patterns similar to 1993 samples. Sediments north of Safaniya were not likely to pose adverse ecological effects, in contrast to sediments south of Tanaqib. Landscape and geomorphology have played a role in the distribution and persistence in sediments of oil from the Gulf War. - Risk Assessment of PAHs in shoreline sediments 12 years after the Gulf War oil spill.

  9. Large-scale risk assessment of polycyclic aromatic hydrocarbons in shoreline sediments from Saudi Arabia: environmental legacy after twelve years of the Gulf war oil spill.

    Science.gov (United States)

    Bejarano, Adriana C; Michel, Jacqueline

    2010-05-01

    A large-scale assessment of polycyclic aromatic hydrocarbons (PAHs) from the 1991 Gulf War oil spill was performed for 2002-2003 sediment samples (n = 1679) collected from habitats along the shoreline of Saudi Arabia. Benthic sediment toxicity was characterized using the Equilibrium Partitioning Sediment Benchmark Toxic Unit approach for 43 PAHs (ESBTU(FCV,43)). Samples were assigned to risk categories according to ESBTU(FCV,43) values: no-risk (≤1), low (>1-≤2), low-medium (>2-≤3), medium (>3-≤5) and high-risk (>5). Sixty-seven percent of samples had ESBTU(FCV,43) > 1, indicating potential adverse ecological effects. Sediments from the 0-30 cm layer from tidal flats, and the >30-<60 cm layer from heavily oiled halophytes and mangroves, had a high frequency of high-risk samples. No-risk samples were characterized by chrysene enrichment and depletion of lighter-molecular-weight PAHs, while high-risk samples showed little oil weathering and PAH patterns similar to 1993 samples. Sediments north of Safaniya were not likely to pose adverse ecological effects, in contrast to sediments south of Tanaqib. Landscape and geomorphology have played a role in the distribution and persistence in sediments of oil from the Gulf War. Copyright 2009 Elsevier Ltd. All rights reserved.

  10. Large-scale risk assessment of polycyclic aromatic hydrocarbons in shoreline sediments from Saudi Arabia: Environmental legacy after twelve years of the Gulf war oil spill

    International Nuclear Information System (INIS)

    Bejarano, Adriana C.; Michel, Jacqueline

    2010-01-01

    A large-scale assessment of polycyclic aromatic hydrocarbons (PAHs) from the 1991 Gulf War oil spill was performed for 2002-2003 sediment samples (n = 1679) collected from habitats along the shoreline of Saudi Arabia. Benthic sediment toxicity was characterized using the Equilibrium Partitioning Sediment Benchmark Toxic Unit approach for 43 PAHs (ESBTU(FCV,43)). Samples were assigned to risk categories according to ESBTU(FCV,43) values: no-risk (≤1), low (>1-≤2), low-medium (>2-≤3), medium (>3-≤5) and high-risk (>5). Sixty-seven percent of samples had ESBTU(FCV,43) > 1, indicating potential adverse ecological effects. Sediments from the 0-30 cm layer from tidal flats, and the >30-<60 cm layer from heavily oiled halophytes and mangroves, had a high frequency of high-risk samples. No-risk samples were characterized by chrysene enrichment and depletion of lighter-molecular-weight PAHs, while high-risk samples showed little oil weathering and PAH patterns similar to 1993 samples. Sediments north of Safaniya were not likely to pose adverse ecological effects, in contrast to sediments south of Tanaqib. Landscape and geomorphology have played a role in the distribution and persistence in sediments of oil from the Gulf War. - Risk Assessment of PAHs in shoreline sediments 12 years after the Gulf War oil spill.
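The risk binning used in these three records reduces to a simple threshold function over the ESBTU(FCV,43) value; a sketch:

```python
def risk_category(esbtu):
    """Risk bins from the abstract: no-risk (<=1), low (>1-<=2),
    low-medium (>2-<=3), medium (>3-<=5), high-risk (>5)."""
    if esbtu <= 1:
        return "no-risk"
    if esbtu <= 2:
        return "low"
    if esbtu <= 3:
        return "low-medium"
    if esbtu <= 5:
        return "medium"
    return "high"

print([risk_category(v) for v in (0.5, 1.5, 2.5, 4.0, 7.0)])
```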

  11. Analysis of recorded earthquake response data at the Hualien large-scale seismic test site

    International Nuclear Information System (INIS)

    Hyun, C.H.; Tang, H.T.; Dermitzakis, S.; Esfandiari, S.

    1997-01-01

    A soil-structure interaction (SSI) experiment is being conducted in a seismically active region in Hualien, Taiwan. To obtain earthquake data for quantifying SSI effects and providing a basis for benchmarking analysis methods, a 1/4-scale cylindrical concrete containment model, similar in shape to a nuclear power plant containment, was constructed in the field; both the containment model and its surrounding soil, surface and sub-surface, are extensively instrumented to record earthquake data. Between September 1993 and May 1995, eight earthquakes with Richter magnitudes ranging from 4.2 to 6.2 were recorded. The authors focus on studying and analyzing the recorded data to provide information on the response characteristics of the Hualien soil-structure system, the SSI effects and the ground motion characteristics. An effort was also made to determine the site soil physical properties directly, based on correlation analysis of the recorded data. No modeling simulations were attempted to analytically predict the SSI response of the soil and the structure; these will be the scope of a subsequent study

  12. Analysis of P and Pdiff Coda Arrivals for Water Reverberations to Evaluate Shallow Slip Extent in Large Megathrust Earthquakes

    Science.gov (United States)

    Rhode, A.; Lay, T.

    2017-12-01

    Determining the up-dip rupture extent of large megathrust ruptures is important for understanding their tsunami excitation, the frictional properties of the shallow megathrust, and the potential for separate tsunami earthquake occurrence. On-land geodetic data have almost no resolution of the up-dip extent of faulting, and teleseismic observations have limited resolution that is strongly influenced by the typically poorly known shallow seismic velocity structure near the toe of the accretionary prism. The increase in ocean depth as slip on the megathrust approaches the trench has a significant influence on the strength and azimuthal distribution of water reverberations in the far-field P wave coda. For broadband P waves from large earthquakes with dominant signal periods of about 10 s, water reverberations generated by shallow fault slip under deep water may persist for over a minute after the direct P phases have passed, giving a clear signal of slip near the trench. As the coda waves can be quickly evaluated following the P signal, recognition of slip extending to the trench and the associated enhanced tsunamigenic potential could be achieved within a few minutes after the P arrival, potentially contributing to rapid tsunami hazard assessment. We examine the broadband P wave coda at distances from 80 to 120° for a large number of recent major and great earthquakes with independently determined slip distributions and known tsunami excitation, to evaluate the prospect of rapidly constraining the up-dip rupture extent of large megathrust earthquakes. Events known to have significant shallow slip, at least locally extending to the trench (e.g., 2015 Illapel, Chile; 2010 Maule, Chile; 2010 Mentawai), do have relatively enhanced coda levels at all azimuths, whereas events that did not rupture the shallow megathrust (e.g., 2007 Sumatra, 2014 Iquique, 2003 Hokkaido) do not. Some events with slip models lacking shallow slip show strong coda generation, raising questions about the up-dip resolution of
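The physical scale behind the reverberation signal is the two-way vertical P travel time in the water column, roughly 2d/v with v ≈ 1.5 km/s; a sketch:

```python
def reverberation_period_s(depth_m, v_water=1500.0):
    """Two-way vertical P travel time in the water column, the
    characteristic period of trench-water reverberations."""
    return 2.0 * depth_m / v_water

# Near-trench water depths of ~6-7 km give ~8-9 s periods, of the
# same order as the ~10 s dominant P-coda periods noted above:
print(f"{reverberation_period_s(6000):.1f} s")
```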

  13. Precursory enhancement of EIA in the morning sector: Contribution from mid-latitude large earthquakes in the north-east Asian region

    Science.gov (United States)

    Ryu, Kwangsun; Oyama, Koh-Ichiro; Bankov, Ludmil; Chen, Chia-Hung; Devi, Minakshi; Liu, Huixin; Liu, Jann-Yenq

    2016-01-01

    To investigate whether the link between seismic activity and EIA (equatorial ionization anomaly) enhancement is valid for mid-latitude seismic activity, DEMETER observations around seven large earthquakes (M ⩾ 6.8) in the north-east Asian region were analyzed in full. In addition, statistical analysis was performed for 35 large earthquakes (M ⩾ 6.0) that occurred during the DEMETER observation period. The results suggest that mid-latitude earthquakes do contribute to EIA enhancement, represented as normalized equatorial Ne, and that ionospheric change precedes seismic events, as has been reported in previous studies. According to the statistical studies, the normalized equatorial density enhancement is sensitive and proportional to both the magnitude and the hypocenter depth of an earthquake. The mechanisms that can explain the contribution of mid-latitude seismic activity to EIA variation are briefly discussed based on current explanations of the geochemical and ionospheric processes involved in lithosphere-ionosphere interaction.

  14. S-net : Construction of large scale seafloor observatory network for tsunamis and earthquakes along the Japan Trench

    Science.gov (United States)

    Mochizuki, M.; Uehira, K.; Kanazawa, T.; Shiomi, K.; Kunugi, T.; Aoi, S.; Matsumoto, T.; Sekiguchi, S.; Yamamoto, N.; Takahashi, N.; Nakamura, T.; Shinohara, M.; Yamada, T.

    2017-12-01

    NIED has launched the project of constructing a seafloor observatory network for tsunamis and earthquakes after the occurrence of the 2011 Tohoku Earthquake, to enhance the reliability of early warnings of tsunamis and earthquakes. The observatory network was named "S-net". The S-net project has been financially supported by MEXT. The S-net consists of 150 seafloor observatories connected in line by submarine optical cables, with a total cable length of about 5,500 km. The S-net covers the focal region of the 2011 Tohoku Earthquake and its vicinity. Each observatory is equipped with two high-sensitivity pressure gauges serving as tsunami meters and four sets of three-component seismometers. The S-net is composed of six segment networks. Five of the six segment networks had already been installed; installation of the last segment network, covering the outer-rise area, was finally finished by the end of FY2016. The outer-rise segment has special features unlike the other five segments of the S-net: deep water and long distances. Most of the 25 observatories on the outer-rise segment are located at water depths greater than 6,000 m. In particular, three observatories sit on seafloor deeper than about 7,000 m, so pressure gauges capable of operating even at 8,000 m water depth are installed on those three observatories. The total length of the submarine cables of the outer-rise segment is about twice that of the other segments. The longer the cable system, the higher the supply voltage needed, and thus the observatories on the outer-rise segment have high withstand-voltage characteristics.
    We employ a low-loss dispersion-managed line, formed by combining a plurality of optical fibers, for the outer-rise segment cable in order to achieve long-distance, high-speed and large-capacity data transmission. Installation of the outer-rise segment is finished, and full-scale operation of the S-net has started.

  15. A non-accelerating foreshock sequence followed by a short period of quiescence for a large inland earthquake

    Science.gov (United States)

    Doi, I.; Kawakata, H.

    2012-12-01

    Laboratory experiments [e.g. Scholz, 1968; Lockner et al., 1992] and field observations [e.g. Dodge et al., 1996; Helmstetter and Sornette, 2003; Bouchon et al., 2011] have elucidated part of foreshock behavior and mechanism, but we cannot yet identify foreshocks while they are occurring. Recently, in Japan, a dense seismic network, Hi-net (High Sensitivity Seismograph Network), has provided continuous waveform records of regional seismic events. The data from this network enable us to analyze small foreshocks that occur long before a major event, giving us an opportunity to grasp a more detailed pattern of foreshock generation. Using continuous waveforms recorded at a seismic station located in close proximity to the epicenter of the 2008 Iwate-Miyagi inland earthquake, we conducted a detailed investigation of its foreshocks. In addition to the two officially recognized foreshocks, calculation of cross-correlation coefficients between the continuous waveform record and one of the previously recognized foreshocks revealed that 20 micro-foreshocks occurred, all within the same general area relative to the main event. Over the two-week period leading up to the Iwate-Miyagi earthquake, such foreshocks only occurred during the last 45 minutes, specifically over a 35-minute period followed by a 10-minute period of quiescence just before the mainshock. We found no evidence of acceleration of this foreshock sequence. Rock-fracturing experiments using a constant loading rate or creep tests have consistently shown that the occurrence rate of small fracturing events (acoustic emissions; AEs) increases before the main rupture [Scholz, 1968]. This accelerative pattern of preceding events was recognized in the case of the 1999 Izmit earthquake [Bouchon et al., 2011]. Large earthquakes however need not be accompanied by acceleration of foreshocks if a given fault's host rock
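The matched-filter step described above, scanning continuous data against a known foreshock waveform with a normalized cross-correlation, can be sketched as follows; the window length, threshold, and synthetic data are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def match_template(trace, template, threshold=0.8):
    """Sliding normalized cross-correlation of a continuous trace
    against a template waveform; returns (sample index, cc) pairs
    where the correlation coefficient exceeds `threshold`."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() * n)
    hits = []
    for i in range(len(trace) - n + 1):
        win = trace[i:i + n]
        s = win.std()
        if s == 0:
            continue  # flat window: correlation undefined
        cc = np.sum(tpl * (win - win.mean()) / s)
        if cc >= threshold:
            hits.append((i, cc))
    return hits

# Toy demo: hide a template in low-level noise and detect it.
rng = np.random.default_rng(0)
tmpl = np.sin(np.linspace(0, 4 * np.pi, 40))
trace = 0.05 * rng.standard_normal(300)
trace[100:140] += tmpl
hits = match_template(trace, tmpl, threshold=0.9)
best = max(hits, key=lambda h: h[1])
print(f"best match at sample {best[0]} (cc={best[1]:.2f})")
```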

  16. Structure of the Koyna-Warna Seismic Zone, Maharashtra, India: A possible model for large induced earthquakes elsewhere

    Science.gov (United States)

    Catchings, Rufus D.; Dixit, M.M.; Goldman, Mark R.; Kumar, S.

    2015-01-01

    The Koyna-Warna area of India is one of the best worldwide examples of reservoir-induced seismicity, with the distinction of having generated the largest known induced earthquake (M6.3 on 10 December 1967) and persistent moderate-magnitude (>M5) events for nearly 50 years. Yet the fault structure and tectonic setting that have accommodated the induced seismicity are poorly known, in part because the seismic events occur beneath a thick sequence of basalt layers. On the basis of the alignment of earthquake epicenters over an ~50-year period, lateral variations in focal mechanisms, upper-crustal tomographic velocity images, geophysical data (aeromagnetic, gravity, and magnetotelluric), geomorphic data, and correlation with similar structures elsewhere, we suggest that the Koyna-Warna area lies within a right step between northwest-trending, right-lateral faults. The sub-basalt basement may form a local structural depression (pull-apart basin) caused by extension within the step-over zone between the right-lateral faults. Our postulated model accounts for the observed pattern of normal faulting in a region that is dominated by north-south directed compression. The right-lateral faults extend well beyond the immediate Koyna-Warna area, possibly suggesting a more extensive zone of seismic hazards for the central India area. Induced seismic events have been observed in many places worldwide, but relatively large-magnitude induced events are less common because critically stressed, preexisting structures are a necessary component. We suggest that releasing bends and fault step-overs like those we postulate for the Koyna-Warna area may serve as an ideal tectonic environment for generating moderate- to large-magnitude induced (reservoir, injection, etc.) earthquakes.

  17. Comparison of Different Approach of Back Projection Method in Retrieving the Rupture Process of Large Earthquakes

    Science.gov (United States)

    Tan, F.; Wang, G.; Chen, C.; Ge, Z.

    2016-12-01

Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image the rupture of earthquakes. Besides the conventional narrowband beamforming in the time domain, approaches in the frequency domain, such as MUSIC back projection (Meng, 2011) and compressive sensing (Yao et al., 2011), have been proposed to improve resolution. Each method has its advantages and disadvantages and should be used appropriately in different cases; a thorough comparison and testing of these methods is therefore needed. We have written a GUI program that puts the three methods together so that users can conveniently process the same data with different methods and compare the results. We then applied all the methods to several earthquake datasets, including the 2008 Wenchuan Mw 7.9 earthquake and the 2011 Tohoku-Oki Mw 9.0 earthquake, and to theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy, and stability among the methods. Quantitative and qualitative analyses were applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, and calculation window length. In general, conventional back projection can produce a good result in a very short time using fewer than 20 high-quality traces with proper station distribution, but the swimming artifact can be significant; measures such as combining global seismic data can help mitigate it. MUSIC back projection needs relatively more data to obtain a better and more stable result, which also means much more computation time, since its runtime grows markedly faster with station number than conventional back projection's. Compressive sensing handles multiple sources in the same time window more effectively, but takes the longest, owing to repeated matrix solves. Resolution of all the methods is complicated and depends on many factors. An important one is the grid size, which in turn influences
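The conventional time-domain approach described above can be illustrated with a minimal sketch: for each candidate source position, station traces are shifted by their predicted travel times and stacked, and the position that maximizes the stacked energy is taken as the radiator. Everything below (1-D geometry, constant apparent velocity, spike waveforms) is a hypothetical toy setup, not the authors' GUI program.

```python
import math

# Minimal time-domain back-projection sketch (hypothetical 1-D geometry,
# constant apparent velocity): stack station traces shifted by predicted
# travel times and pick the grid node with the largest stacked energy.

V = 8.0                                  # apparent P velocity, km/s (assumed)
stations = [0.0, 40.0, 80.0, 120.0]      # station positions, km (hypothetical)
true_src = 55.0                          # true source position, km
dt = 0.1                                 # sample interval, s
n = 200                                  # samples per trace

def travel_time(src, sta):
    return abs(sta - src) / V

# Build synthetic traces: a one-sample spike at each station's P arrival.
traces = []
for sta in stations:
    tr = [0.0] * n
    tr[int(round(travel_time(true_src, sta) / dt))] = 1.0
    traces.append(tr)

def stack_energy(src):
    """Align traces on the candidate source, sum them, return peak energy."""
    stacked = [0.0] * n
    for sta, tr in zip(stations, traces):
        shift = int(round(travel_time(src, sta) / dt))
        for i in range(n - shift):
            stacked[i] += tr[i + shift]
    return max(s * s for s in stacked)

# Grid search over candidate source positions.
grid = [float(x) for x in range(121)]
best = max(grid, key=stack_energy)
print(best)  # → 55.0 (peaks at the true source position)
```

With real data the grid is 2-D or 3-D over the fault region, travel times come from a 1-D Earth model, and stacking is done in sliding time windows; the "swimming artifact" mentioned above then appears as a systematic drift of the energy peak.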

  18. Unusual Animal Behavior Preceding the 2011 Earthquake off the Pacific Coast of Tohoku, Japan: A Way to Predict the Approach of Large Earthquakes

    Directory of Open Access Journals (Sweden)

    Hiroyuki Yamauchi

    2014-04-01

Unusual animal behaviors (UABs) have been observed before large earthquakes (EQs); however, their mechanisms are unclear. While information on UABs has been gathered after many EQs, few studies have focused on the ratio of UABs that emerged or on specific behaviors prior to EQs. On 11 March 2011, an EQ (Mw 9.0) occurred in Japan, which left about twenty thousand people dead or missing. We surveyed UABs of pets preceding this EQ using a questionnaire. Additionally, we explored whether dairy cow milk yields varied before this EQ in particular locations. In the results, 236 of 1,259 dog owners and 115 of 703 cat owners observed UABs in their pets, with restless behavior being the most prominent change in both species. Most UABs occurred within one day of the EQ. The UABs showed a precursory relationship with epicentral distance. Interestingly, cow milk yields in a milking facility within 340 km of the epicenter decreased significantly about one week before the EQ, whereas cows in facilities farther away showed no significant decreases. Since both the pets' behavior and the dairy cows' milk yields were affected prior to the EQ, with careful observation they could contribute to EQ prediction.

  19. Dating Informed Correlations and Large Earthquake Recurrence at the Hokuri Creek Paleoseismic Site, Alpine Fault, South Island, New Zealand

    Science.gov (United States)

    Biasi, G. P.; Clark, K.; Berryman, K. R.; Cochran, U. A.; Prior, C.

    2010-12-01

-correlate sections at the site. Within a series of dates from a section, ordering with the intrinsic precision of the dates indicates an uncertainty at event horizons on the order of 50 years, while the transitions from peat to silt indicating an earthquake are separated by several times this amount. The effect is to create a stair-stepping date sequence that often allows us to link sections and improve dating resolution in both sections. The combined section provides clear evidence for at least 18 earthquake-induced cycles; event recurrence would be about 390 years as a simple average. Internal evidence and close examination of date sequences provide preliminary indications that as many as 22 earthquakes could be represented at Hokuri Creek, implying a recurrence interval of ~320 years. Both sequences indicate a middle portion of the record, from 3800 to 1000 BC, in which recurrence intervals are resolvably longer than average. Variability in recurrence is relatively small: few intervals are even >1.5x the average. This indicates that large earthquakes on the Alpine Fault of South Island, New Zealand are best fit by a time-predictable model.
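The simple-average arithmetic quoted above can be reproduced directly: with n dated events spanning a record, there are n − 1 intervals. The ~6700-year record span below is a hypothetical figure chosen only because it reproduces both quoted averages; the event counts (18 and 22) are from the text.

```python
# Simple-average recurrence from an event count, as in the estimates above.
# The ~6700-year record span is a hypothetical figure chosen so that both
# quoted averages are reproduced; only the event counts are from the study.
span_years = 6700.0

def mean_recurrence(n_events, span=span_years):
    """Average interval between n_events spanning `span` years."""
    return span / (n_events - 1)

print(round(mean_recurrence(18)), round(mean_recurrence(22)))  # → 394 319
```

This is why adding four more candidate events shortens the average interval from ~390 to ~320 years without changing the record length.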

  20. Study of seismological evasion. Part III. Evaluation of evasion possibilities using codas of large earthquakes

    International Nuclear Information System (INIS)

    Evernden, J.F.

    1976-01-01

    The seismological aspects of various proposed means of obscuring or hiding the seismic signatures of explosions from a surveillance network are discussed. These so-called evasion schemes are discussed from the points of view of both the evader and the monitor. The analysis will be conducted in terms of the USSR solely because that country is so vast and the geological/geophysical complexities of the country are so great that the complete spectrum of hypothesized evasion schemes requires discussion. Techniques appropriate for use when the seismic noise problem is interference due to codas of P and surface waves from earthquakes are described, and the capabilities of several seismological networks to restrain use of such codas for effective evasion are analyzed

  1. Source to Sink Tectonic Fate of Large Oceanic Turbidite Systems and the Rupturing of Great and Giant Megathrust Earthquakes (Invited)

    Science.gov (United States)

    Scholl, D. W.; Kirby, S. H.; von Huene, R.

    2010-12-01

OCEAN FLOOR OBSERVATIONS: Oceanic turbidite systems accumulate above igneous oceanic crust and are commonly huge in areal and volumetric dimensions. For example, the volume of the Zodiac fan of the Gulf of Alaska is roughly 300,000 cubic km. Other large oceanic systems construct the Amazon cone, flood the Bay of Bengal abyss, and accumulate along trench axes to thicknesses of 1 to 7 km and lengths of 1000 to 3000 km, e.g., the Aleutian-Alaska, Sumatra-Andaman, Makran, and south central Chile Trenches. THE ROCK RECORD: Despite the large dimensions of oceanic turbidite systems, they are poorly preserved in the rock record. This includes oceanic systems deposited in passive-margin oceans, e.g., the Paleozoic Iapetus and Rheic oceans of the Atlantic realm. This circumstance does not apply to Cretaceous and E. Tertiary rock sequences of the north Pacific rim, where oceanic turbidite deposits are preserved as accretionary complexes, e.g., the Catalina-Pelona-Orocopia-Rand schist of California and the Chugach-Kodiak complex of Alaska. These rock bodies are exhumed crustal underplates of once deeply (15-30 km) subducted oceanic turbidite systems. PATH FROM SOURCE TO TECTONIC SINK: The fate of most oceanic turbidite systems is to be removed from the sea floor and, ultimately, destroyed. This circumstance is unavoidable because most of them are deposited on lower-plate crust destined for destruction in a subduction zone. During the past 4-5 myr alone, a volume of 1-1.5 million cubic km of sediment sourced from the glaciated drainages of the Gulf of Alaska flooded the 3000-km-long Aleutian-Alaska trench axis. A small part of this volume accumulated tectonically as a narrow, 10-30-km-wide accretionary frontal prism, but about 80 percent was subducted and entered the subduction channel separating the two plates. The subduction channel, roughly 1 km thick, conveys the trench turbidite deposits landward down dip along the rupturing width of the seismogenic zone.
SEISMIC CONSEQUENCE

  2. On the dependency of the decay of ground motion peak values with distance for small and large earthquakes

    Science.gov (United States)

    Dujardin, Alain; Courboulex, Françoise; Causse, Matthieu; Traversa, Paola; Monfret, Tony

    2013-04-01

Ground motion decay with distance presents a clear magnitude dependence, with PGA values of small events decreasing faster than those of larger events. This observation is now widely accepted and often taken into account in recent ground motion prediction equations (Anderson 2005, Akkar & Bommer 2010). The aim of this study is to investigate the origin of this dependence, which has not yet been clearly identified. Two main hypotheses are considered: on one hand, the difference in ground motion decay is related to an attenuation effect; on the other, it is related to an extended-fault effect (Anderson 2000). To study the role of attenuation, we performed synthetic tests using the stochastic simulation program SMSIM of Boore (2005). We built a set of simulations for several magnitudes and epicentral distances, and observe that the decay in PGA values is strongly dependent on the spectral shape of the Fourier spectra, which in turn strongly depends on the attenuation factor (Q(f) or kappa). We found that, for a point-source approximation and an infinite value of Q (no attenuation), there is no difference between small and large events, and that this difference increases when Q decreases. These results show that the influence of attenuation on spectral shape differs for earthquakes of different magnitudes. In fact the influence of attenuation, which is more important at higher frequency, is larger for small earthquakes, whose Fourier acceleration spectrum has predominantly higher frequencies. We then study the effect of an extended source using complete waveform simulations in a 1D model. We find that when the duration of the source time function increases, there is a higher probability of obtaining large PGA values at equivalent distances. This effect could also play an important role in the PGA decay with magnitude and distance. Finally we compare these results with real datasets from the Japanese accelerometric network KiK-net.
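The attenuation mechanism described above can be sketched with an ω-square source spectrum multiplied by an anelastic attenuation term exp(−πfR/(Qβ)). All parameter values below are illustrative assumptions, and the RMS-spectrum "PGA proxy" is a crude stand-in for the random-vibration machinery of SMSIM; it nevertheless reproduces the qualitative result that small, high-frequency-rich events decay faster with distance.

```python
import math

# Illustrative sketch (not the authors' SMSIM setup): an omega-square source
# spectrum combined with anelastic attenuation exp(-pi*f*R/(Q*beta)) shows
# why PGA of small, high-frequency-rich events decays faster with distance.
BETA = 3.5          # shear-wave velocity, km/s (assumed)
Q = 200.0           # frequency-independent quality factor (assumed)
STRESS_BARS = 50.0  # stress drop in bars (assumed)

def corner_frequency(mw):
    m0 = 10 ** (1.5 * mw + 16.05)                 # moment in dyne-cm
    return 4.9e6 * BETA * (STRESS_BARS / m0) ** (1.0 / 3.0)

def accel_spectrum(f, mw, r_km):
    m0 = 10 ** (1.5 * mw + 16.05)
    fc = corner_frequency(mw)
    source = (2.0 * math.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)
    path = math.exp(-math.pi * f * r_km / (Q * BETA)) / r_km
    return source * path

def pga_proxy(mw, r_km):
    """Crude RMS of the 0.1-20 Hz acceleration spectrum as a PGA stand-in."""
    freqs = [0.1 * i for i in range(1, 201)]
    return math.sqrt(sum(accel_spectrum(f, mw, r_km) ** 2 for f in freqs))

# Decay factor between 10 km and 100 km for a small and a large event:
decay_small = pga_proxy(4.0, 10.0) / pga_proxy(4.0, 100.0)
decay_large = pga_proxy(7.0, 10.0) / pga_proxy(7.0, 100.0)
print(decay_small > decay_large)  # → True: small-event PGA decays faster
```

Both decay factors exceed the purely geometric factor of 10, and the excess is larger for the Mw 4 event because its spectral energy sits at frequencies where exp(−πfR/(Qβ)) bites hardest.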

3. A three-step Maximum-A-Posteriori probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    Science.gov (United States)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

We develop a three-step Maximum-A-Posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least-squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local-maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and all parameters from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion, with the fault geometry parameters held fixed. We first used a designed model with a 45-degree dip angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of
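A toy version of the mixed linear-nonlinear strategy, under strong simplifying assumptions: a hypothetical one-patch forward model in which simulated annealing searches the nonlinear geometry parameter (dip) while the slip amplitude is solved linearly by least squares and damped to a non-negative value. This is a sketch of the idea only, not the authors' ASA implementation.

```python
import math
import random

# Toy sketch of the strategy described above (hypothetical forward model):
# simulated annealing over the nonlinear dip parameter, with the slip
# amplitude solved linearly and damped to a non-negative value.
random.seed(0)

TRUE_DIP, TRUE_SLIP = 45.0, 2.0
xs = [0.5 * i for i in range(20)]           # surface observation points, km

def greens(dip, x):
    # Hypothetical Green's function: dip controls the half-width of the
    # surface-displacement lobe produced by unit slip.
    w = math.sin(math.radians(dip))
    return 1.0 / (1.0 + (x / w) ** 2)

data = [TRUE_SLIP * greens(TRUE_DIP, x) for x in xs]

def best_slip(dip):
    g = [greens(dip, x) for x in xs]
    s = sum(gi * di for gi, di in zip(g, data)) / sum(gi * gi for gi in g)
    return max(s, 0.0)                      # positivity damping

def misfit(dip):
    s = best_slip(dip)
    return sum((s * greens(dip, x) - d) ** 2 for x, d in zip(xs, data))

# Step 1: simulated annealing over dip in [1, 90] degrees.
dip, temp = 10.0, 1.0
for _ in range(5000):
    cand = min(90.0, max(1.0, dip + random.gauss(0.0, 5.0)))
    delta = misfit(cand) - misfit(dip)
    if delta < 0.0 or random.random() < math.exp(-delta / temp):
        dip = cand
    temp *= 0.999
print(round(dip, 1), round(best_slip(dip), 2))  # recovered dip and slip
```

In the real method the annealing step handles many geometry parameters at once and the linear step solves for a full slip distribution; the separation of linear and nonlinear unknowns is the same.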

  4. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    Science.gov (United States)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw = 7-7.5. For larger earthquakes, the analysis is more complex, because of the non-validity of the point-source approximation and of the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. We here propose an automated deconvolutive approach, which does not impose any simplifying assumptions about the rupture process and is thus well adapted to large earthquakes. We first determine the source duration based on the length of the high-frequency (1-3 Hz) signal content. The deconvolution of synthetic double-couple point-source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real-data body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search for the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, the integration of the STFs directly gives the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes in the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and the SCARDEC method may reach 0.2, but the values are consistent if we take into account that the Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
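The final step described above, integrating the STF to obtain the seismic moment and hence the magnitude, can be sketched as follows. The triangular moment-rate function and its peak value are synthetic assumptions; only the conversion Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m, is the standard moment-magnitude relation.

```python
import math

# Sketch of the last step described above: integrate an apparent source time
# function (moment rate, N*m/s) to obtain the seismic moment M0, then convert
# to moment magnitude with Mw = (2/3)*(log10(M0) - 9.1).
# The triangular STF below is synthetic, not from a real inversion.
dt = 1.0                                   # sample interval, s
duration = 100.0                           # source duration, s
peak_rate = 2.0e19                         # peak moment rate, N*m/s (assumed)

# Triangular moment-rate function: rises to a peak at mid-duration, then falls.
n = int(duration / dt)
stf = [peak_rate * (1.0 - abs(2.0 * i * dt / duration - 1.0))
       for i in range(n + 1)]

m0 = sum(stf) * dt                         # integrate the moment rate
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
print(round(mw, 2))  # → 7.93
```

Here the triangle integrates to M0 = 10^21 N·m, i.e. an Mw ≈ 7.9 event, illustrating how strongly Mw compresses the moment scale.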

  5. Two large earthquakes in western Switzerland in the sixteenth century: 1524 in Ardon (VS) and 1584 in Aigle (VD)

    Science.gov (United States)

    Schwarz-Zanetti, Gabriela; Fäh, Donat; Gache, Sylvain; Kästli, Philipp; Loizeau, Jeanluc; Masciadri, Virgilio; Zenhäusern, Gregor

    2018-03-01

The Valais is the most seismically active region of Switzerland. Strong damaging events occurred in 1755, 1855, and 1946. Based on historical documents, we discuss two known damaging events of the sixteenth century: the 1524 Ardon and the 1584 Aigle earthquakes. For the 1524 event, a document describes damage in Ardon, Plan-Conthey, and Savièse, and a stone tablet at the new bell tower of the Ardon church confirms the reconstruction of the bell tower after the earthquake. Additionally, significant construction activity on Upper Valais churches during the second quarter of the sixteenth century is discussed, which however cannot be clearly related to this event. The assessed moment magnitude Mw of the 1524 event is 5.8, with an error of about 0.5 units corresponding to one standard deviation. The epicenter is at 46.27 N, 7.27 E, with a high uncertainty of about 50 km corresponding to one standard deviation. The assessed moment magnitude Mw of the 1584 main shock is 5.9, with an error of about 0.25 units corresponding to one standard deviation. The epicenter is at 46.33 N and 6.97 E, with an uncertainty of about 25 km corresponding to one standard deviation. Exceptional movements in Lake Geneva wreaked havoc along the shore of the Rhone delta. The large extent of the damage can be explained by an extensive subaquatic slide with a resultant tsunami and seiche in Lake Geneva. The strongest of the aftershocks occurred on March 14 with magnitude 5.4 and triggered a destructive landslide that covered the villages of Corbeyrier and Yvorne, VD.

  6. The 2007 Mentawai earthquake sequence on the Sumatra megathrust

    Science.gov (United States)

    Konca, A.; Avouac, J.; Sladen, A.; Meltzner, A. J.; Kositsky, A. P.; Sieh, K.; Fang, P.; Li, Z.; Galetzka, J.; Genrich, J.; Chlieh, M.; Natawidjaja, D. H.; Bock, Y.; Fielding, E. J.; Helmberger, D. V.

    2008-12-01

The Sumatra megathrust has recently produced a flurry of large interplate earthquakes, starting with the giant Mw 9.15 Aceh earthquake of 2004. All of these earthquakes occurred within the area monitored by the Sumatra Geodetic Array (SuGAr), which provided exceptional records of near-field co-seismic and postseismic ground displacements. The most recent of these major earthquakes, an Mw 8.4 earthquake and an Mw 7.9 earthquake twelve hours later, occurred in the Mentawai islands area, where devastating historical earthquakes had happened in 1797 and 1833. The 2007 earthquake sequence provides an exceptional opportunity to understand the variability of earthquakes along megathrusts and their relation to interseismic coupling. InSAR, GPS and teleseismic modeling shows that the 2007 earthquakes ruptured a fraction of the strongly coupled Mentawai patch of the megathrust, which is itself only a fraction of the 1833 rupture area. The sequence also released a much smaller moment than that released in 1833, or than the deficit of moment that has accumulated since. Both 2007 earthquakes consist of two sub-events located 50 to 100 km apart. On the other hand, the northernmost slip patch of the Mw 8.4 earthquake and the southern slip patch of the Mw 7.9 earthquake abut each other, yet ruptured 12 hours apart. Sunda megathrust earthquakes of recent years include a rupture of a strongly coupled patch that closely mimics a prior rupture of that patch and correlates well with the interseismic coupling pattern (Nias-Simeulue section), as well as a rupture sequence of a strongly coupled patch that differs substantially in its details from its most recent predecessors (Mentawai section). We conclude that (1) seismic asperities are probably persistent features which arise from heterogeneous strain build-up in the interseismic period; and (2) the same portion of a megathrust can rupture in different ways depending on whether asperities break as isolated events or cooperate to produce

  7. Reassessing the 2006 Guerrero slow-slip event, Mexico : Implications for large earthquakes in the Guerrero Gap

    NARCIS (Netherlands)

    Bekaert, D.P.S.; Hooper, A.; Wright, T.J.

    2015-01-01

In Guerrero, Mexico, slow-slip events have been observed in a seismic gap where no earthquakes have occurred since 1911. A rupture of the entire gap today could result in an Mw 8.2–8.4 earthquake. However, it remains unclear how slow-slip events change the stress field in the Guerrero seismic region

  8. Simple procedure for evaluating earthquake response spectra of large-event motions based on site amplification factors derived from smaller-event records

    International Nuclear Information System (INIS)

    Dan, Kazuo; Miyakoshi, Jun-ichi; Yashiro, Kazuhiko.

    1996-01-01

A simple procedure was proposed for evaluating earthquake response spectra of large-event motions by making use of records from smaller events. In the proposed procedure, the result of a regression analysis of the response spectra is utilized to obtain the site amplification factors, and the formulation of the seismic-source term in the regression analysis was examined. A linear form of the moment magnitude, Mw, is good for scaling the source term of moderate earthquakes with Mw of 5.5 to 7.0, while a quadratic form of Mw and the ω-square source-spectrum model are appropriate for scaling the source terms of smaller and larger earthquakes, respectively.

  9. Large magnitude (M > 7.5) offshore earthquakes in 2012: few examples of absent or little tsunamigenesis, with implications for tsunami early warning

    Science.gov (United States)

    Pagnoni, Gianluca; Armigliato, Alberto; Tinti, Stefano

    2013-04-01

We consider some examples of offshore earthquakes that occurred worldwide in 2012 and that were characterised by a "large" magnitude (Mw equal to or larger than 7.5) but produced no or little tsunami effects. Here, "little" is to be understood as "lower than expected on the basis of the parent earthquake magnitude". The examples we analyse include three earthquakes that occurred along the Pacific coasts of Central America (20 March, Mw=7.8, Mexico; 5 September, Mw=7.6, Costa Rica; 7 November, Mw=7.5, Mexico), the Mw=7.6 and Mw=7.7 earthquakes that occurred on 31 August and 28 October offshore the Philippines and offshore Alaska, respectively, and the two Indian Ocean earthquakes registered on a single day (11 April) and characterised by Mw=8.6 and Mw=8.2. For each event, we approach the problem of its tsunamigenic potential from two different perspectives. The first can be considered purely scientific and coincides with the question: why was the ensuing tsunami so weak? The answer can be related partly to the particular tectonic setting in the source area, partly to the position of the source with respect to the coastline, and finally to the focal mechanism of the earthquake and the slip distribution on the ruptured fault. The first two pieces of information are available soon after the earthquake occurrence, while the third requires time periods on the order of tens of minutes. The second perspective is more "operational" and coincides with the tsunami early warning perspective, for which the question is: will the earthquake generate a significant tsunami and, if so, where will it strike? The Indian Ocean events of 11 April 2012 are perfect examples of the fact that information on the earthquake magnitude and position alone may not be sufficient to produce reliable tsunami warnings. We emphasise that it is of utmost importance that the focal mechanism determination be obtained in the future much more quickly than at present and that this

  10. Giant seismites and megablock uplift in the East African Rift: evidence for Late Pleistocene large magnitude earthquakes.

    Science.gov (United States)

    Hilbert-Wolf, Hannah Louise; Roberts, Eric M

    2015-01-01

Given the absence of comprehensive instrumental seismic monitoring, the short historical records, and the limited fault-trench investigations for many seismically active areas, the sedimentary record provides important archives of seismicity in the form of preserved horizons of soft-sediment deformation features, termed seismites. Here we report on extensive seismites in the Late Quaternary-Recent (≤ ~28,000 years BP) alluvial and lacustrine strata of the Rukwa Rift Basin, a segment of the Western Branch of the East African Rift System. We document examples of the most highly deformed sediments in shallow subsurface strata close to the regional capital of Mbeya, Tanzania. This includes a remarkable clastic 'megablock complex' that preserves remobilized sediment below vertically displaced blocks of intact strata (megablocks), some in excess of 20 m wide. Documentation of these seismites expands the database of seismogenic sedimentary structures and attests to large-magnitude, Late Pleistocene-Recent earthquakes along the Western Branch of the East African Rift System. Understanding how seismicity deforms near-surface sediments is critical for predicting and preparing for modern seismic hazards, especially along the East African Rift and in other tectonically active, developing regions.

  11. The 2011 M = 9.0 Tohoku oki earthquake more than doubled the probability of large shocks beneath Tokyo

    Science.gov (United States)

    Toda, Shinji; Stein, Ross S.

    2013-01-01

The Kanto seismic corridor surrounding Tokyo has hosted four to five M ≥ 7 earthquakes in the past 400 years. Immediately after the Tohoku earthquake, the seismicity rate in the corridor jumped 10-fold, while the rate of normal focal mechanisms dropped by half. The seismicity rate decayed for 6–12 months, after which it steadied at three times the pre-Tohoku rate. The seismicity rate jump and decay to a new rate, as well as the focal mechanism change, can be explained by the static stress imparted by the Tohoku rupture and postseismic creep to Kanto faults. We therefore fit the seismicity observations to a rate/state Coulomb model, which we use to forecast the time-dependent probability of large earthquakes in the Kanto seismic corridor. We estimate a 17% probability of an M ≥ 7.0 shock over the 5-year prospective period 11 March 2013 to 10 March 2018, two-and-a-half times the probability had the Tohoku earthquake not struck.
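The quoted probabilities can be related to equivalent Poisson rates through P = 1 − exp(−λT). This back-of-the-envelope check uses only the numbers quoted above and shows why the ratio of equivalent rates slightly exceeds the 2.5× probability ratio.

```python
import math

# Relating the quoted time-dependent probabilities to equivalent Poisson
# rates via P = 1 - exp(-rate * T). Numbers are those quoted above.
T = 5.0                       # forecast window, years
p_post = 0.17                 # post-Tohoku probability of M >= 7.0
p_pre = p_post / 2.5          # "two-and-a-half times" the pre-Tohoku value

rate_post = -math.log(1.0 - p_post) / T   # equivalent events per year
rate_pre = -math.log(1.0 - p_pre) / T
print(round(rate_post / rate_pre, 2))     # → 2.65
```

Because probability saturates as the rate grows, equal probability ratios always correspond to slightly larger rate ratios; the effect is small here but grows for larger probabilities.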

  12. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data

  13. Holocene slip rates along the San Andreas Fault System in the San Gorgonio Pass and implications for large earthquakes in southern California

    Science.gov (United States)

    Heermance, Richard V.; Yule, Doug

    2017-06-01

The San Gorgonio Pass (SGP) in southern California contains a 40-km-long region of structural complexity where the San Andreas Fault (SAF) bifurcates into a series of oblique-slip faults with unknown slip history. We combine new 10Be exposure ages (Qt4: 8600 (+2100, -2200) and Qt3: 5700 (+1400, -1900) years B.P.) and a radiocarbon age (1260 ± 60 years B.P.) from late Holocene terraces with scarp displacements of these surfaces to document a Holocene slip rate of 5.7 (+2.7, -1.5) mm/yr combined across two faults. Our preferred slip rate is 37-49% of the average slip rates along the SAF outside the SGP (i.e., the Coachella Valley and San Bernardino sections) and implies that strain is transferred off the SAF in this area. Earthquakes here most likely occur as very large, throughgoing SAF events at a lower recurrence than elsewhere on the SAF, so that only approximately one third of SAF ruptures penetrate or originate in the pass. Plain Language Summary: How large are earthquakes on the southern San Andreas Fault? The answer to this question depends on whether the earthquake is contained only along individual fault sections, such as the Coachella Valley section north of Palm Springs, or the rupture crosses multiple sections, including the area through the San Gorgonio Pass. We have determined the age and offset of faulted stream deposits within the San Gorgonio Pass to document slip rates of these faults over the last 10,000 years. Our results indicate a long-term slip rate of 6 mm/yr, which is almost half of the rates east and west of this area. These new rates, combined with faulted geomorphic surfaces, imply that large-magnitude earthquakes must occasionally rupture a 300 km length of the San Andreas Fault from the Salton Sea to the Mojave Desert. Although many (~65%) earthquakes along the southern San Andreas Fault likely do not rupture through the pass, our new results suggest that large (>Mw 7.5) earthquakes are possible on the southern San Andreas Fault and likely
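The slip-rate arithmetic described above follows directly from offset divided by age, with asymmetric bounds obtained by pairing the offset and age extremes. The 49 ± 5 m offset below is a hypothetical illustration (the paper's actual displacements are not quoted here); the age and its asymmetric errors are the Qt4 exposure age from the abstract.

```python
# Slip rate from a dated, offset geomorphic surface, as in the study above.
# The 49 +/- 5 m offset is a hypothetical value chosen for illustration; the
# age and its asymmetric errors are the Qt4 10Be exposure age quoted above.
age, age_plus, age_minus = 8600.0, 2100.0, 2200.0      # years B.P.
offset, offset_err = 49.0, 5.0                          # metres (assumed)

rate = offset / age * 1000.0                            # mm/yr
rate_max = (offset + offset_err) / (age - age_minus) * 1000.0
rate_min = (offset - offset_err) / (age + age_plus) * 1000.0
print(round(rate, 1), round(rate_min, 1), round(rate_max, 1))  # → 5.7 4.1 8.4
```

Note how the asymmetric age errors translate into asymmetric rate bounds: the young-age/large-offset pairing stretches the upper bound much further from the preferred rate than the lower one.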

  14. Evidence of a Large Triggered Event in the Nepal Himalaya Following the Gorkha Earthquake: Implications Toward Enhanced Seismic Hazard

    Science.gov (United States)

    Mandal, Prantik

    2018-03-01

A DC (double-couple) constrained multiple point-source moment-tensor inversion is performed on the band-passed (0.008-0.10 Hz) displacement data of the 25 April (Mw 7.8) 2015 Nepal mainshock, from 17 broadband stations in India. Our results reveal that the 25 April event (strike = 324°, dip = 14°, rake = 88°) ruptured the north-dipping main Himalayan thrust (MHT) at 16 km depth. We modeled the Coulomb failure stress changes (ΔCFS) produced by the slip on the fault plane of the 25 April Nepal mainshock. A strong correlation between the occurrence of aftershocks and regions of increased positive ΔCFS is obtained below the aftershock zone of the 2015 Nepal mainshock. We notice that the predicted ΔCFS at 16 km depth shows a positive Coulomb stress change of 0.06 MPa at the location of the 12 May 2015 event. Such small modeled stress changes can trigger events if the crust is already close to failure, and they can also advance the occurrence of future earthquakes. The main finding of our ΔCFS modeling is that the 25 April event increased the Coulomb stress by 0.06 MPa at 16 km depth below the site of the 12 May event, and thus this event can be termed triggered. We propose that the seismic hazard in the Himalaya arises not only from mainshock slip on the MHT; accounting for the occurrence of large triggered events on the MHT can also enhance our understanding of the seismic hazard in the Nepal Himalaya.
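The ΔCFS metric used above resolves a stress perturbation onto a receiver fault as ΔCFS = Δτ + μ′Δσn: the shear-stress change in the slip direction plus effective friction times the normal-stress change (unclamping positive). The component values and the friction coefficient below are hypothetical assumptions; only the +0.06 MPa total is from the study.

```python
# Coulomb failure stress change on a receiver fault, as used above:
# dCFS = d_tau + mu_eff * d_sigma_n  (shear-stress change in the slip
# direction plus effective friction times the normal-stress change,
# unclamping positive).
MU_EFF = 0.4                   # effective friction coefficient (assumed)

def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=MU_EFF):
    """All stresses in MPa; positive d_sigma_n means unclamping."""
    return d_tau + mu_eff * d_sigma_n

# Hypothetical decomposition reproducing the quoted +0.06 MPa at 16 km depth:
dcfs = coulomb_stress_change(d_tau=0.04, d_sigma_n=0.05)
print(round(dcfs, 2))  # → 0.06, positive: loads the receiver fault toward failure
```

A positive ΔCFS of this size is tiny compared with earthquake stress drops (typically MPa-scale), which is why such changes are interpreted as triggering only where the crust is already near failure.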

  15. The Challenge of Centennial Earthquakes to Improve Modern Earthquake Engineering

    International Nuclear Information System (INIS)

    Saragoni, G. Rodolfo

    2008-01-01

The recent commemoration of the centennial of the 1906 San Francisco and Valparaiso earthquakes has given the opportunity to reanalyze their damage from a modern earthquake engineering perspective. These two earthquakes, plus the 1908 Messina-Reggio Calabria earthquake, had a strong impact on the birth and development of earthquake engineering. The study of the seismic performance of buildings still standing today that survived centennial earthquakes represents a challenge to better understand the limitations of the earthquake design methods now in use. Of the three centennial earthquakes considered, only the 1906 Valparaiso earthquake has been repeated, as the 1985 Central Chile Ms = 7.8 earthquake. In this paper a comparative study of the damage produced by the 1906 and 1985 Valparaiso earthquakes is carried out in the neighborhood of Valparaiso harbor. In this study, the only three centennial three-story buildings that survived both earthquakes almost undamaged were identified. Since accelerograms of the 1985 earthquake were recorded both at El Almendral soil conditions and on rock, the vulnerability analysis of these buildings considers instrumental measurements of the demand. The study concludes that the good performance of these buildings in the epicentral zone of large earthquakes cannot be well explained by modern earthquake engineering methods. It is therefore recommended that more suitable instrumental parameters, such as the destructiveness potential factor, be used in the future to describe earthquake demand.

  16. Earthquake Culture: A Significant Element in Earthquake Disaster Risk Assessment and Earthquake Disaster Risk Management

    OpenAIRE

    Ibrion, Mihaela

    2018-01-01

This book chapter brings to attention the dramatic impact of large earthquake disasters on local communities and society and highlights the necessity of building and enhancing an earthquake culture. Iran was considered as a research case study, and fifteen large earthquake disasters in Iran were investigated and analyzed over a period of more than a century. It was found that the earthquake culture in Iran was and still is conditioned by many factors or parameters which are not integrated and...

  17. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    Science.gov (United States)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further reinforced through both hands-on lab experiments and computer-based exercises. Lab experiments include building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations on a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  18. REVIEW ARTICLE: A comparison of site response techniques using earthquake data and ambient seismic noise analysis in the large urban areas of Santiago de Chile

    Science.gov (United States)

    Pilz, Marco; Parolai, Stefano; Leyton, Felipe; Campos, Jaime; Zschau, Jochen

    2009-08-01

    Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces tremendous earthquake risk. Macroseismic data for the 1985 Valparaiso event show large variations in the distribution of damage to buildings within short distances, indicating strong effects of local sediments on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity and a study was carried out aiming to estimate site amplification derived from horizontal-to-vertical (H/V) spectral ratios from earthquake data (EHV) and ambient noise (NHV), as well as using the standard spectral ratio (SSR) technique with a nearby reference station located on igneous rock. The results lead to the following conclusions: (1) The analysis of earthquake data shows significant dependence on the local geological structure with respect to amplitude and duration. (2) An amplification of ground motion at frequencies higher than the fundamental one can be found. This amplification would not be found when looking at NHV ratios alone. (3) The analysis of NHV spectral ratios shows that they can only provide a lower bound in amplitude for site amplification. (4) P-wave site responses always show lower amplitudes than those derived by S waves, and sometimes even fail to provide some frequencies of amplification. (5) No variability in terms of time and amplitude is observed in the analysis of the H/V ratio of noise. (6) Due to the geological conditions in some parts of the investigated area, the fundamental resonance frequency of a site is difficult to estimate following standard criteria proposed by the SESAME consortium, suggesting that these are too restrictive under certain circumstances.
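The H/V technique at the core of this comparison reduces, in its simplest form, to a ratio of horizontal to vertical Fourier amplitude spectra. The sketch below is a minimal single-window illustration on synthetic data (the 2 Hz resonance is an assumed toy signal); the processing used in a study like this would add tapering, spectral smoothing (e.g. Konno-Ohmachi) and averaging over many windows.

```python
import numpy as np

def hv_ratio(ns, ew, ud, dt):
    """Single-window H/V spectral ratio.

    ns, ew, ud: equally sampled north-south, east-west and vertical traces.
    dt: sample interval in seconds.
    The horizontal spectrum is the quadratic mean of the two horizontal
    spectra.  Returns (frequencies, H/V) with the zero-frequency bin dropped.
    """
    freqs = np.fft.rfftfreq(len(ud), dt)
    a_ns, a_ew, a_ud = (np.abs(np.fft.rfft(x)) for x in (ns, ew, ud))
    h = np.sqrt((a_ns**2 + a_ew**2) / 2.0)
    return freqs[1:], h[1:] / a_ud[1:]

# synthetic record: horizontals resonate at 2 Hz, vertical is broadband noise
rng = np.random.default_rng(0)
dt, n = 0.01, 4096
t = np.arange(n) * dt
ns = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(n)
ew = np.cos(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(n)
ud = rng.standard_normal(n)
f, hv = hv_ratio(ns, ew, ud, dt)
print(f"H/V peak at {f[np.argmax(hv)]:.2f} Hz")
```

On this toy record the H/V peak recovers the assumed 2 Hz resonance; on real noise or earthquake records the peak frequency is read as the fundamental resonance frequency of the site.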

  19. Defeating Earthquakes

    Science.gov (United States)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the 2010 M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth, and this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model (GEM), launched in 2009. At the very least, everyone should be able to learn what his or her risk is; at the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; earthquakes do. But when a quake strikes, people need a credible place to go to answer the questions: how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. 
GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  20. A record of large earthquakes during the past two millennia on the southern Green Valley Fault, California

    Science.gov (United States)

    Lienkaemper, James J.; Baldwin, John N.; Turner, Robert; Sickler, Robert R.; Brown, Johnathan

    2013-01-01

    We document evidence for surface-rupturing earthquakes (events) at two trench sites on the southern Green Valley fault, California (SGVF). The 75-80 km long dextral SGVF creeps ~1-4 mm/yr. We identify stratigraphic horizons disrupted by upward-flowering shears and in-filled fissures unlikely to have formed from creep alone. The Mason Rd site exhibits four events from ~1013 CE to the present. The Lopes Ranch site (LR, 12 km to the south) exhibits three events from 18 BCE to the present, including the most recent event (MRE), 1610 ±52 yr CE (1σ), and a two-event interval (18 BCE-238 CE) isolated by a millennium of low deposition. Using OxCal to model the timing of the four-event earthquake sequence from radiocarbon data and the LR MRE yields a mean recurrence interval (RI or μ) of 199 ±82 yr (1σ) and ±35 yr (standard error of the mean), the first such estimate based on geologic data. The time since the most recent earthquake (open window since the MRE) is 402 ±52 yr, well past μ ~200 yr. The shape of the probability density function (pdf) of the average RI from OxCal resembles a Brownian Passage Time (BPT) pdf (rather than a normal one) that permits rarer, longer ruptures potentially involving the Berryessa and Hunting Creek sections of the northernmost GVF. The model coefficient of variation (cv, σ/μ) is 0.41, but a larger value (cv ~0.6) fits better when using BPT. A BPT pdf with μ of 250 yr and cv of 0.6 yields 30-yr rupture probabilities of 20-25%, versus a Poisson probability of 11-17%.
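The contrast between the time-dependent (BPT) and Poisson 30-yr probabilities quoted in this record is easy to reproduce approximately. The sketch below uses the standard inverse-Gaussian form of the BPT distribution with the abstract's values (μ = 250 yr, cv = 0.6, open interval of 402 yr); it is an illustration, not the authors' OxCal-based computation.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, cv):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution
    with mean recurrence mu and aperiodicity (coefficient of variation) cv."""
    lam = mu / cv**2  # inverse-Gaussian shape parameter
    a = math.sqrt(lam / t)
    return phi(a * (t / mu - 1.0)) + math.exp(2.0 * lam / mu) * phi(-a * (t / mu + 1.0))

def conditional_prob(t_elapsed, window, mu, cv):
    """P(rupture within `window` yr | no rupture for t_elapsed yr)."""
    f0 = bpt_cdf(t_elapsed, mu, cv)
    f1 = bpt_cdf(t_elapsed + window, mu, cv)
    return (f1 - f0) / (1.0 - f0)

mu, cv, t_open, window = 250.0, 0.6, 402.0, 30.0
p_bpt = conditional_prob(t_open, window, mu, cv)
p_poisson = 1.0 - math.exp(-window / mu)  # time-independent reference
print(f"BPT 30-yr probability:     {p_bpt:.1%}")
print(f"Poisson 30-yr probability: {p_poisson:.1%}")
```

With these inputs the conditional BPT probability comes out near 20%, against roughly 11% for the memoryless Poisson reference, consistent with the ranges quoted in the abstract.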

  1. Transanal rectopexy - twelve case studies

    Directory of Open Access Journals (Sweden)

    Rubens Henrique Oleques Fernandes

    2012-06-01

    Full Text Available OBJECTIVES: This study analyzed the results of transanal rectopexy and showed the benefits of this surgical technique. METHOD: Twelve patients were submitted to rectopexy between 1997 and 2011. The surgical technique used was transanal rectopexy, in which the mesorectum is fixed to the sacrum with a nonabsorbable suture. Three patients had undergone previous surgery, two by the Delorme technique and one by the Thiersch technique. RESULTS: Postoperative hospital stay ranged from 1 to 4 days. One patient (8.3%) had an intraoperative hematoma, which was treated with local compression and antibiotics. One patient (8.3%) had residual mucosal prolapse, which was resected. Prolapse recurrence was seen in one case (8.3%). Incontinence improved in 75% of patients, and one patient reported obstructed evacuation in the first month after surgery. No death occurred. CONCLUSION: Transanal rectopexy is a simple, low-cost technique, which has shown good efficacy in rectal prolapse control.

  2. The HayWired Earthquake Scenario—Earthquake Hazards

    Science.gov (United States)

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude-6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  3. On the relationship between structure, morphology and large coseismic slip: A case study of the Mw 8.8 Maule, Chile 2010 earthquake

    Science.gov (United States)

    Contreras-Reyes, Eduardo; Maksymowicz, Andrei; Lange, Dietrich; Grevemeyer, Ingo; Muñoz-Linford, Pamela; Moscoso, Eduardo

    2017-11-01

    Subduction megathrust earthquakes show complex rupture behaviour and large lateral variations of slip. However, the factors controlling seismic slip are still under debate. Here, we present 2-D velocity-depth tomographic models across four trench-perpendicular wide-angle seismic profiles, complemented with high-resolution bathymetric data, in the area of maximum coseismic slip of the Mw 8.8 Maule 2010 megathrust earthquake (central Chile, 34°-36°S). Results show an abrupt lateral velocity gradient in the trench-perpendicular direction (from 5.0 to 6.0 km/s) interpreted as the contact between the accretionary prism and continental framework rock, whose superficial expression spatially correlates with the slope-shelf break. The accretionary prism is composed of two bodies: (1) an outer accretionary wedge (5-10 km wide) characterized by low seismic velocities of 1.8-3.0 km/s, interpreted as an outer frontal prism of poorly compacted and hydrated sediment, and (2) a middle wedge (∼50 km wide) with velocities of 3.0-5.0 km/s, interpreted as a middle prism composed of compacted and lithified sediment. In addition, the maximum average coseismic slip of the 2010 megathrust event is fairly coincident with the region where the accretionary prism and continental slope are widest (50-60 km wide) and the continental slope angle is low. After the event, published differential multibeam bathymetric data confirm that coseismic slip must have propagated up to ∼6 km landwards of the deformation front, and hence beneath practically the entire base of the middle prism. Sediment dewatering and compaction processes might explain the competent rheology of the middle prism, allowing shallow earthquake rupture. In contrast, the outer frontal prism, made of poorly consolidated sediment, has impeded rupture from propagating all the way to the deformation front, as high-resolution seismic reflection and multibeam bathymetric data show no evidence of new deformation in the trench region.

  4. Twelve tips for assessment psychometrics.

    Science.gov (United States)

    Coombes, Lee; Roberts, Martin; Zahra, Daniel; Burr, Steven

    2016-01-01

    It is incumbent on medical schools to show, both to regulatory bodies and to the public at large, that their graduating students are "fit for purpose" as tomorrow's doctors. Since students graduate by virtue of passing assessments, it is vital that schools quality assure their assessment procedures, standards, and outcomes. An important part of this quality assurance process is the appropriate use of psychometric analyses. This begins with development of an empowering, evidence-based culture in which assessment validity can be demonstrated. Preparation prior to an assessment requires the establishment of appropriate rules, test blueprinting and standard setting. When an assessment has been completed, the reporting of test results should consider reliability, assessor, demographic, and long-term analyses across multiple levels, in an integrated way to ensure the information conveyed to all stakeholders is meaningful.

  5. Nowcasting Earthquakes and Tsunamis

    Science.gov (United States)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk
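The counting at the heart of this kind of nowcast can be sketched in a few lines. This is a simplified, area-based illustration: the magnitude thresholds and the toy catalog below are hypothetical, and a real application would use a complete regional catalog spanning many large-event cycles and report the score as an earthquake potential score (EPS).

```python
def nowcast_eps(magnitudes, m_small=4.0, m_large=6.0):
    """Earthquake potential score from a time-ordered magnitude catalog.

    Counts 'small' events (m_small <= M < m_large) between successive
    'large' events (M >= m_large).  The EPS is the fraction of those
    historical inter-event counts that are smaller than the count
    accumulated since the most recent large event.
    """
    counts, current = [], 0
    for m in magnitudes:
        if m >= m_large:
            counts.append(current)  # close out one large-event cycle
            current = 0
        elif m >= m_small:
            current += 1
    if not counts:
        raise ValueError("catalog contains no large events")
    return sum(1 for c in counts if c < current) / len(counts)

# toy time-ordered catalog of magnitudes (real use: a regional catalog)
catalog = [4.1, 4.3, 6.2, 4.0, 4.5, 4.2, 6.1, 4.4, 4.6, 4.1]
print(f"EPS = {nowcast_eps(catalog):.2f}")
```

An EPS near 1 means the region has accumulated more small-event activity since its last large earthquake than in most past cycles, i.e. it is far along in the earthquake cycle.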

  6. Hazard-to-Risk: High-Performance Computing Simulations of Large Earthquake Ground Motions and Building Damage in the Near-Fault Region

    Science.gov (United States)

    Miah, M.; Rodgers, A. J.; McCallen, D.; Petersson, N. A.; Pitarka, A.

    2017-12-01

    We are running high-performance computing (HPC) simulations of ground motions for large (magnitude M=6.5-7.0) earthquakes in the near-fault region and of the resulting response of steel moment frame buildings throughout the near-fault domain. For ground motions, we are using SW4, a fourth-order summation-by-parts finite difference time-domain code running on 10,000-100,000s of cores. Earthquake ruptures are generated using the Graves and Pitarka (2017) method. We validated ground motion intensity measurements against Ground Motion Prediction Equations. We considered two events (M=6.5 and 7.0) for vertical strike-slip ruptures with three-dimensional (3D) basin structures, including stochastic heterogeneity. We have also considered M7.0 scenarios for a Hayward Fault rupture, which affects the San Francisco Bay Area and northern California, using both 1D and 3D earth structure. Dynamic, inelastic response of canonical buildings is computed with NEVADA, a nonlinear, finite-deformation finite element code. Canonical buildings include 3-, 9-, 20- and 40-story steel moment frame buildings. Damage potential is tracked by the peak inter-story drift (PID) ratio, which measures the maximum displacement between adjacent floors of the building and is strongly correlated with damage. PID ratios greater than 1.0 generally indicate nonlinear response and permanent deformation of the structure. We also track roof displacement to identify permanent deformation. PID (damage) for a given earthquake scenario (M, slip distribution, hypocenter) is spatially mapped throughout the SW4 domain with 1-2 km resolution. Results show that in the near-fault region building damage is correlated with peak ground velocity (PGV), while farther away (> 20 km) it is better correlated with peak ground acceleration (PGA). We also show how simulated ground motions have peaks in the response spectra that shift to longer periods for larger magnitude events and for locations of forward directivity, as has been reported by
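The PID ratio tracked here is a simple quantity: the maximum, over time and over stories, of the relative displacement between adjacent floors divided by the story height. A minimal sketch with hypothetical displacement histories (ground level assumed fixed):

```python
def peak_interstory_drift(displacements, story_heights):
    """Peak inter-story drift (PID) ratio.

    displacements: per-floor lateral displacement time histories, ordered
    from the lowest floor up, all in the same length unit.
    story_heights: height of each story, same order and unit.
    Returns max over time steps and stories of |u_i - u_(i-1)| / h_i.
    """
    pid = 0.0
    for t in range(len(displacements[0])):
        below = 0.0  # ground level assumed fixed
        for floor, h in zip(displacements, story_heights):
            pid = max(pid, abs(floor[t] - below) / h)
            below = floor[t]
    return pid

# toy 3-story frame: displacements in m at three time steps, 4 m stories
u = [[0.00, 0.02, 0.05],   # floor 1
     [0.00, 0.03, 0.09],   # floor 2
     [0.00, 0.05, 0.12]]   # floor 3
pid = peak_interstory_drift(u, [4.0, 4.0, 4.0])
print(f"PID = {pid:.4f}")  # → PID = 0.0125
```

For these toy histories the largest drift occurs in the first story at the last time step (0.05 m over 4 m, i.e. 0.0125, or 1.25%), a level usually associated with structural damage.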

  7. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    Science.gov (United States)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake, with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration and about one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering incurred building losses in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140-300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification of the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers, and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would instead be calculated directly on the basis of indexed ground motion levels and damage. 
The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  8. Geophysical Anomalies and Earthquake Prediction

    Science.gov (United States)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require
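The power-law size distribution invoked here is the Gutenberg-Richter relation, log10 N = a - bM, and its slope is routinely estimated from a catalog. The abstract names no estimator, so the sketch below uses Aki's classical maximum-likelihood formula purely as an illustration, ignoring the magnitude-binning correction a real catalog would need.

```python
import math
import random

def b_value(magnitudes, m_c):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki's formula):
    b = log10(e) / (mean(M) - Mc), using only events with M >= Mc."""
    sample = [m for m in magnitudes if m >= m_c]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - m_c)

# synthetic catalog: exponential magnitudes above Mc = 3.0 with true b = 1.0
random.seed(42)
m_c, b_true = 3.0, 1.0
catalog = [m_c + random.expovariate(b_true * math.log(10)) for _ in range(20000)]
print(f"estimated b = {b_value(catalog, m_c):.3f}")
```

With 20,000 synthetic events the estimate lands close to the true b = 1.0; the standard error scales as b divided by the square root of the sample size, which is one reason validating prediction claims for rare large events is so hard.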

  9. Earthquake hazard evaluation for Switzerland

    International Nuclear Information System (INIS)

    Ruettener, E.

    1995-01-01

    Earthquake hazard analysis is of considerable importance for Switzerland, a country with moderate seismic activity but high economic values at risk. The evaluation of earthquake hazard, i.e. the determination of return periods versus ground motion parameters, requires a description of earthquake occurrences in space and time. In this study the seismic hazard for major cities in Switzerland is determined. The seismic hazard analysis is based on historic earthquake records as well as instrumental data. The historic earthquake data show considerable uncertainties concerning epicenter location and epicentral intensity. A specific concept is required, therefore, which permits the description of the uncertainties of each individual earthquake. This is achieved by probability distributions for earthquake size and location. Historical considerations, which indicate changes in public earthquake awareness at various times (mainly due to large historical earthquakes), as well as statistical tests have been used to identify time periods of complete earthquake reporting as a function of intensity. As a result, the catalog is judged to be complete since 1878 for all earthquakes with epicentral intensities greater than IV, since 1750 for intensities greater than VI, since 1600 for intensities greater than VIII, and since 1300 for intensities greater than IX. Instrumental data provide accurate information about the depth distribution of earthquakes in Switzerland. In the Alps, focal depths are restricted to the uppermost 15 km of the crust, whereas below the northern Alpine foreland earthquakes are distributed throughout the entire crust (30 km). This depth distribution is considered in the final hazard analysis by probability distributions. (author) figs., tabs., refs

  10. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastic...

  11. Mythematics Solving the Twelve Labors of Hercules

    CERN Document Server

    Huber, Michael

    2009-01-01

    How might Hercules, the most famous of the Greek heroes, have used mathematics to complete his astonishing Twelve Labors? From conquering the Nemean Lion and cleaning out the Augean Stables, to capturing the Erymanthean Boar and entering the Underworld to defeat the three-headed dog Cerberus, Hercules and his legend are the inspiration for this book of fun and original math puzzles. While Hercules relied on superhuman strength to accomplish the Twelve Labors, Mythematics shows how math could have helped during his quest. How does Hercules defeat the Lernean Hydra and stop its heads from multip

  12. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastic within the yield limits and ideally plastic outside these without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor making the movement of the elastic bottom floors simulate a ground...

  13. Modeling, Forecasting and Mitigating Extreme Earthquakes

    Science.gov (United States)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude, rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of these extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events and the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for mitigating earthquake disasters (e.g., through 'inverse' forensic investigations of earthquake disasters).

  14. Charles Darwin's earthquake reports

    Science.gov (United States)

    Galiev, Shamil

    2010-05-01

    As 2009 marked the 200th anniversary of Darwin's birth, it was also 170 years since the publication of his book Journal of Researches. During the voyage Darwin landed at Valdivia and Concepcion, Chile, just before, during, and after a great earthquake, which demolished hundreds of buildings, killing and injuring many people. Land was waved, lifted, and cracked, volcanoes awoke and giant ocean waves attacked the coast. Darwin was the first geologist to observe and describe the effects of a great earthquake during and immediately after it. These effects are sometimes repeated during severe earthquakes; but great earthquakes, like Chile 1835, and giant earthquakes, like Chile 1960, are rare and remain completely unpredictable. This is one of the few areas of science where experts remain largely in the dark. Darwin suggested that the effects were a result of '…the rending of strata, at a point not very deep below the surface of the earth…' and '…when the crust yields to the tension, caused by its gradual elevation, there is a jar at the moment of rupture, and a greater movement...'. Darwin formulated big ideas about the Earth's evolution and its dynamics. These ideas set the tone for the tectonic plate theory to come. However, plate tectonics does not completely explain why earthquakes occur within plates. Darwin emphasised that there are different kinds of earthquakes: '...I confine the foregoing observations to the earthquakes on the coast of South America, or to similar ones, which seem generally to have been accompanied by elevation of the land. But, as we know that subsidence has gone on in other quarters of the world, fissures must there have been formed, and therefore earthquakes...' (we cite Darwin's sentences following researchspace.auckland.ac.nz/handle/2292/4474). These thoughts agree with the results of recent publications (see Nature 461, 870-872; 636-639 and 462, 42-43; 87-89). About 200 years ago Darwin gave oneself airs by the

  15. Consideration on the applicability of the design seismic coefficient of a large cutting slope under the strong earthquake

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Sawada, Yoshihiro; Satou, Kiyotaka

    1989-01-01

    In this study, the characteristics of the equivalent seismic coefficient and the applicability of the design seismic coefficient for a large cut rock slope around a nuclear power plant were examined by an analytical parameter survey. The results show that the equivalent seismic coefficient obtained by dynamic analysis increases with transverse elastic wave velocity and with long-period input motion; that is, as the wavelength in the rock mass becomes longer, the equivalent seismic coefficient grows parabolically. Moreover, the ratio of the dynamic safety factor to the static safety factor is inversely proportional to the wavelength. In addition, a graph for predicting the dynamic sliding safety factor under input seismic motion with a maximum acceleration of 500 gal from the results of the static simplified method was proposed, and the applicable range of the design seismic coefficient for rock slopes was indicated. (author)

  16. Extreme value distribution of earthquake magnitude

    Science.gov (United States)

    Zi, Jun Gan; Tung, C. C.

    1983-07-01

    Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Adler-Lomnitz and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
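The derivation in the abstract above can be sketched numerically: for N independent magnitudes with CDF F(m), the maximum of the N events has CDF F(m)**N. A minimal sketch, assuming a truncated Gutenberg-Richter (exponential) magnitude law; beta, m_min, and the event count are illustrative values, not the parameters fitted in the paper:

```python
import math

# Exponential (Gutenberg-Richter) magnitude CDF above a threshold m_min.
# beta and m_min are illustrative, not the values estimated in the paper.
def magnitude_cdf(m, beta=2.0, m_min=5.0):
    if m < m_min:
        return 0.0
    return 1.0 - math.exp(-beta * (m - m_min))

# For n independent events, P(max magnitude <= m) = F(m)**n.
def max_magnitude_cdf(m, n_events, beta=2.0, m_min=5.0):
    return magnitude_cdf(m, beta, m_min) ** n_events

# Probability that the largest of 100 events exceeds magnitude 7:
p_exceed = 1.0 - max_magnitude_cdf(7.0, 100)
```

Fitting beta and the event rate to a catalogue by least squares, as the paper does iteratively, then amounts to matching this curve to the empirical distribution of observed maximum magnitudes.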

  17. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. Knowledge of these distributions is needed in particular to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the negative-binomial distribution (NBD) has two parameters; the second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of the parameters of both the Poisson and NBD distributions on the catalogue magnitude threshold and on the temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest the Poisson distribution can be rejected at a high significance level in favour of the NBD. Thereafter, we investigate how well these distributions fit the observed distributions of seismicity. For this purpose, we study the higher statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values of the skewness and kurtosis increase for smaller magnitude thresholds, and increase even more strongly for small temporal subdivisions of the catalogues.
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
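The contrast between the one-parameter Poisson model and the two-parameter NBD can be made concrete. A minimal sketch with hypothetical parameters; r, p, and lam are illustrative, not values fitted to the GCMT or PDE catalogues:

```python
import math

# Poisson pmf: a single parameter lam, with mean = variance = lam.
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Negative-binomial pmf: two parameters r (shape) and p.
# mean = r(1-p)/p, variance = r(1-p)/p**2 > mean, so the second
# parameter captures the overdispersion (clustering) of event counts.
def nbd_pmf(k, r, p):
    return math.comb(k + r - 1, k) * p ** r * (1 - p) ** k

r, p = 2, 0.4                      # illustrative NBD parameters
nbd_mean = r * (1 - p) / p         # 3.0
nbd_var = r * (1 - p) / p ** 2     # 7.5: overdispersed relative to Poisson
```

The variance exceeding the mean is exactly the property the Poisson law cannot reproduce, which is why it is rejected for clustered earthquake counts.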

  18. What Can Sounds Tell Us About Earthquake Interactions?

    Science.gov (United States)

    Aiken, C.; Peng, Z.

    2012-12-01

    It is important not only for seismologists but also for educators to effectively convey information about earthquakes and the influences earthquakes can have on each other. Recent studies using auditory display [e.g. Kilb et al., 2012; Peng et al., 2012] have depicted catastrophic earthquakes and the effects large earthquakes can have on other parts of the world. Auditory display of earthquakes, which combines static images with time-compressed sound of recorded seismic data, is a new approach to disseminating information to a general audience about earthquakes and earthquake interactions. Earthquake interactions are important for understanding the underlying physics of earthquakes and of other seismic phenomena such as tremor, in addition to their source characteristics (e.g. frequency content, amplitudes). Earthquake interactions can include, for example, a large, shallow earthquake followed by increased seismicity around the mainshock rupture (i.e. aftershocks), or a large earthquake triggering earthquakes or tremors several hundreds to thousands of kilometers away [Hill and Prejean, 2007; Peng and Gomberg, 2010]. We use standard tools like MATLAB, QuickTime Pro, and Python to produce animations that illustrate earthquake interactions. Our efforts are focused on producing animations that depict cross-section (side) views of tremors triggered along the San Andreas Fault by distant earthquakes, as well as map (bird's-eye) views of mainshock-aftershock sequences such as the 2011/08/23 Mw5.8 Virginia earthquake sequence. These examples of earthquake interactions include sonifying earthquake and tremor catalogs as musical notes (e.g. piano keys) as well as audifying seismic data using time-compression. Our overall goal is to use auditory display to invigorate a general interest in earthquake seismology that leads to the understanding of how earthquakes occur, how earthquakes influence one another as well as tremors, and what the musical properties of these

  19. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  20. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  1. Measuring the size of an earthquake

    Science.gov (United States)

    Spence, W.; Sipkin, S.A.; Choy, G.L.

    1989-01-01

    Earthquakes range broadly in size. A rock-burst in an Idaho silver mine may involve the fracture of 1 meter of rock; the 1965 Rat Island earthquake in the Aleutian arc involved a 650-kilometer length of the Earth's crust. Earthquakes can be even smaller and even larger. If an earthquake is felt or causes perceptible surface damage, then its intensity of shaking can be subjectively estimated. But many large earthquakes occur in oceanic areas or at great focal depths and are either simply not felt or their felt pattern does not really indicate their true size.

  2. The Application of a Multi-Beam Echo-Sounder in the Analysis of the Sedimentation Situation of a Large Reservoir after an Earthquake

    Directory of Open Access Journals (Sweden)

    Zhong-Luan Yan

    2018-04-01

    The Wenchuan Earthquake took place in the upper catchment of the Min River. It deposited large amounts of loose material in the river channel, changing the sediment transport system in this area. The Zipingpu Reservoir is the last and largest reservoir in the upper reach of the Min River; it is near the epicenter and receives sediment from upstream. This paper presents a study of the sedimentation and storage capacity of the Zipingpu Reservoir, carried out with a multi-beam echo-sounder system in December 2012. The data were merged with digital line graphics and Shuttle Radar Topography Mission data in ArcGIS to build a digital elevation model and a triangulated irregular network of the Zipingpu Reservoir. Analysis of the bathymetric data shows the following: (1) the main channels of the reservoir gradually aggrade from a deep-cut valley to a flat bottom; sedimentation forms a reach with a W-shaped longitudinal thalweg profile and a nearly zero-slope reach in the upstream section of the reservoir, owing to the natural barrier created by a landslide; (2) the loss ratios of the wetted cross-section area are higher than 10% in the upstream section of the reservoir and higher than 40% in the natural-barrier area; (3) comparing the surveyed storage capacity of December 2012 with that of March 2008, the Zipingpu Reservoir has lost 15.28% of its capacity at the dead storage water level and 10.49% of its capacity at the flood limit water level.

  3. Study of street-blockades caused by a large earthquake; Daishinsaiji ni okeru doro heisoku ni knsuru kenkyu (Hanshin Awaji daishinsai ni okeru jittai bunseki)

    Energy Technology Data Exchange (ETDEWEB)

    Imaizumi, K. [Kajima Corp., Tokyo (Japan); Asami, Y. [The University of Tokyo, Tokyo (Japan)

    1999-09-30

    The Great Hanshin-Awaji Earthquake caused severe damage to roads. Not only highways but also a considerable number of streets in local communities became unusable because of buildings collapsing onto roads and the formation of bumps, obstructing evacuation and rescue activities. In planning cities that are prepared for future earthquakes, it is important to re-evaluate the design of the road network, including narrow streets. This study therefore focused on the physical distance along roads during an earthquake and the number of points that cannot be reached, and clarified the relation between these phenomena and the characteristics of a district's towns and roads. Taking Higashinada Ward, Kobe City, as an example, this report analyzed data on actual conditions from the following aspects, which become especially important in the actions taken immediately after an earthquake, and described the findings obtained: (1) the difference in reachable distance between normal conditions and earthquake conditions when walking from a residence to a refuge site; (2) the occurrence of points that cannot be reached when travelling to hospitals by ambulance. (NEDO)

  4. Safety of superconducting fusion magnets: twelve problem areas

    International Nuclear Information System (INIS)

    Turner, L.R.

    1979-05-01

    Twelve problem areas of superconducting magnets for fusion reactors are described. These are: Quench Detection and Energy Dump, Stationary Normal Region of Conductor, Current Leads, Electrical Arcing, Electrical Shorts, Conductor Joints, Forces from Unequal Currents, Eddy Current Effects, Cryostat Rupture, Vacuum Failure, Fringing Field, and Instrumentation for Safety. Each is described under five categories: Identification and Definition, Possible Safety Effects, Current Practice, Adequacy of Current Practice for Fusion Magnets, and Areas Requiring Further Analytical and Experimental Study. Priorities among these areas are suggested, and application is made to the Large Coil Project at Oak Ridge National Laboratory.

  5. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    Science.gov (United States)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rate increasing exponentially with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This relationship is reasonable, considering the well-known dependence of the b-value on stress. It suggests that the probability of a tiny rock failure expanding into a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
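The b-value dependence described above can be illustrated with the standard maximum-likelihood estimator (Aki, 1965), b = log10(e) / (mean(M) - Mc). The sketch below uses synthetic Gutenberg-Richter catalogues; the b-values of 1.0 and 0.8 and the sample sizes are hypothetical, chosen only to show that a lower b corresponds to a larger fraction of big events:

```python
import math
import random

# Maximum-likelihood b-value (Aki, 1965) for magnitudes above completeness Mc.
def b_value(magnitudes, m_c):
    mean_m = sum(magnitudes) / len(magnitudes)
    return math.log10(math.e) / (mean_m - m_c)

random.seed(0)
m_c = 4.0
# Gutenberg-Richter magnitudes are exponential above Mc with rate b * ln(10);
# the "low/high stress" labels are purely illustrative.
low_stress = [m_c + random.expovariate(1.0 * math.log(10)) for _ in range(5000)]
high_stress = [m_c + random.expovariate(0.8 * math.log(10)) for _ in range(5000)]

b_low = b_value(low_stress, m_c)    # close to 1.0
b_high = b_value(high_stress, m_c)  # close to 0.8: relatively more large events
```

Comparing estimates like these between high- and low-tidal-stress subsets of a catalogue is the kind of test the abstract describes.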

  6. Earthquake Catalogue of the Caucasus

    Science.gov (United States)

    Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.

    2016-12-01

    The Caucasus has a documented historical earthquake catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms˜7.0, Io=9), the Lechkhumi-Svaneti earthquake of 1350 (Ms˜7.0, Io=9), and the Alaverdi earthquake of 1742 (Ms˜6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms˜6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms˜6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the Racha earthquake of 1991 (Ms=7.0), the largest event ever recorded in the region; the Barisakho earthquake of 1992 (M=6.5); and the Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia ˜25 stations, Azerbaijan ˜35 stations, Armenia ˜14 stations). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality, a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of the regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus". The catalogue consists of more than 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records. We improved locations of the events and recalculated moment magnitudes in order to obtain unified magnitude

  7. Earthquake Facts

    Science.gov (United States)

    ... North Dakota, and Wisconsin. The core of the earth was the first internal structural element to be identified. In 1906 R.D. Oldham discovered it from his studies of earthquake records. The inner core is solid, and the outer core is liquid and so does not transmit ...

  8. Understanding Earthquakes

    Science.gov (United States)

    Davis, Amanda; Gray, Ron

    2018-01-01

    December 26, 2004 was one of the deadliest days in modern history, when a 9.3 magnitude earthquake--the third largest ever recorded--struck off the coast of Sumatra in Indonesia (National Centers for Environmental Information 2014). The massive quake lasted at least 10 minutes and devastated the Indian Ocean. The quake displaced an estimated…

  9. The mechanism of earthquake

    Science.gov (United States)

    Lu, Kunquan; Cao, Zexian; Hou, Meiying; Jiang, Zehui; Shen, Rong; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    The physical mechanism of earthquakes remains a challenging issue to be clarified. Seismologists have traditionally attributed shallow earthquakes to the elastic rebound of crustal rocks. However, the seismic energy calculated following elastic rebound theory, using experimental data on rocks, shows a large discrepancy with measurements, a fact that has been dubbed "the heat flow paradox". For intermediate-focus and deep-focus earthquakes, both of which occur in the mantle, there is no reasonable explanation either. This paper discusses the physical mechanism of earthquakes from a new perspective, starting from the fact that both the crust and the mantle are discrete collective systems of matter with slow dynamics, as well as from the basic principles of physics, especially concepts of condensed matter physics that have emerged in recent years. (1) Stress distribution in the Earth's crust: without taking tectonic force into account, according to the rheological principle that "everything flows", the normal stress and transverse stress must balance under gravitational pressure over a long period of time, so no differential stress is expected in the original crustal rocks. Tectonic force is successively transferred and accumulated via stick-slip motions of rock blocks, which squeeze the fault gouge and load other rock blocks. The superposition of this additional lateral tectonic force and the original stress gives the real-time stress in crustal rocks. The mechanical characteristics of fault gouge differ from those of rocks because it consists of granular matter. The elastic moduli of fault gouges are much smaller than those of rocks and increase with pressure. This peculiarity of the fault gouge leads to a tectonic force that increases nonlinearly with depth. The distribution and variation of the tectonic stress in the crust are specified. (2) The

  10. Radon observation for earthquake prediction

    Energy Technology Data Exchange (ETDEWEB)

    Wakita, Hiroshi [Tokyo Univ. (Japan)]

    1998-12-31

    Systematic observation of groundwater radon for the purpose of earthquake prediction began in Japan in late 1973. Continuous observations are conducted at fixed stations using deep wells and springs. During the observation period, significant precursory changes, including those before the 1978 Izu-Oshima-kinkai (M7.0) earthquake, as well as numerous coseismic changes were observed. At the time of the 1995 Kobe (M7.2) earthquake, significant changes in chemical components, including radon dissolved in groundwater, were observed near the epicentral region. Precursory changes are presumably caused by permeability changes due to micro-fracturing in basement rock or by migration of water from different sources during the preparation stage of earthquakes. Coseismic changes may be caused by seismic shaking and by changes in regional stress. Significant drops in radon concentration in groundwater have been observed after earthquakes at the KSM site. The occurrence of such drops appears to be time-dependent and possibly reflects changes in the regional stress state of the observation area. The absence of radon drops seems to be correlated with periods of reduced regional seismic activity. Experience accumulated over the past two decades allows us to reach some conclusions: 1) changes in groundwater radon do occur prior to large earthquakes; 2) some sites are particularly sensitive to earthquake occurrence; and 3) the sensitivity changes over time. (author)

  11. [Twelve years of liver transplantation in Lausanne].

    Science.gov (United States)

    Mosimann, F; Bettschart, V; Gardaz, J P; Fontolliet, C; Tissot, J D; Meuwly, J Y; Chioléro, R; Gillet, M

    2001-02-01

    From 1988 to June 2000, 138 transplantations were performed in 129 adult patients. Actuarial patient and graft survival rates were 80.7% and 75.4% at one year and 67.8% and 63.5% at 10 years. This compares favourably with the statistics of the European Liver Transplant Registry, which has collected data from more than 30,000 grafts. Over the twelve years of activity, the indications have become more liberal and the techniques have been simplified. The waiting list has therefore grown, and some patients now unfortunately die before a graft can be found because the number of brain-dead donors remains stable. To alleviate this shortage, older donors are now being accepted, even with co-morbidities and/or moderate alterations of liver function tests. The use of live donors and the splitting of the best cadaveric grafts between two recipients will also reduce the gap between demand and supply.

  12. Twelve Girls' Band: A Modern Miracle of Traditional Music

    Institute of Scientific and Technical Information of China (English)

    YaoZhanxiong

    2004-01-01

    Twelve antique traditional instruments. Twelve spirited, pretty girls. "Twelve Girls' Band" is a traditional instrument orchestra playing well-known folk music in the form of pop. Besides age-old traditional instruments peculiar to China, such as zheng (ancient 21 to 25-stringed plucked instrument), qin (seven-stringed plucked instrument) and erhu (two-stringed Chinese fiddle),

  13. Ionospheric phenomena before strong earthquakes

    Directory of Open Access Journals (Sweden)

    A. S. Silina

    2001-01-01

    A statistical analysis of several ionospheric parameters before earthquakes with magnitude M > 5.5 located less than 500 km from an ionospheric vertical sounding station is performed. Ionospheric effects preceding "deep" (depth h > 33 km) and "crust" (h ≤ 33 km) earthquakes were analysed separately. Data from nighttime measurements of the critical frequencies foF2 and foEs, the frequency fbEs, and Es-spread at the middle-latitude station Dushanbe were used. The frequencies foF2 and fbEs are proportional to the square root of the ionization density at heights of 300 km and 100 km, respectively. It is shown that two days before the earthquakes the values of foF2 averaged over the morning hours (00:00 LT–06:00 LT) and of fbEs averaged over the nighttime hours (18:00 LT–06:00 LT) decrease; the effect is stronger for the "deep" earthquakes. Analysing the coefficient of semitransparency, which characterizes the degree of small-scale turbulence, it was shown that this value increases 1–4 days before "crust" earthquakes and does not change before "deep" earthquakes. Studying Es-spread, which manifests itself as a diffuse Es trace on ionograms and characterizes the degree of large-scale turbulence, it was found that the number of Es-spread observations increases 1–3 days before the earthquakes; for "deep" earthquakes the effect is more intensive. Thus it may be concluded that different mechanisms of energy transfer from the region of earthquake preparation to the ionosphere operate for "deep" and "crust" events.
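The statement that foF2 and fbEs scale with the square root of ionization density follows from the plasma-frequency relation f_c ≈ 8.98·√N_e (f_c in Hz, N_e in electrons per m³). A minimal sketch of inverting that relation; the 7 MHz sample value is illustrative, not a Dushanbe measurement:

```python
# Invert the plasma-frequency relation f_c ≈ 8.98 * sqrt(N_e)
# (f_c in Hz, N_e in m^-3) to recover electron density from a
# measured critical frequency such as foF2.
def critical_freq_to_density(fo_hz):
    return (fo_hz / 8.98) ** 2

# Example: foF2 = 7 MHz corresponds to roughly 6e11 electrons/m^3.
density = critical_freq_to_density(7.0e6)
```

Because density goes as the square of frequency, the pre-earthquake decreases in foF2 and fbEs reported above translate into proportionally larger decreases in ionization density.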

  14. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  15. Earthquake Early Warning Systems

    OpenAIRE

    Pei-Yang Lin

    2011-01-01

    Because of Taiwan's unique geographical environment, earthquake disasters occur frequently there. The Central Weather Bureau collated earthquake data from 1901 to 2006 (Central Weather Bureau, 2007) and found that 97 earthquakes had occurred, of which 52 resulted in casualties. The 921 Chichi Earthquake had the most profound impact. Because earthquakes have instant destructive power and current scientific technologies cannot provide precise early warnings in advance, earthquake ...

  16. Antifouling activity of twelve demosponges from Brazil

    Directory of Open Access Journals (Sweden)

    SM. Ribeiro

    Benthic marine organisms are constantly exposed to fouling, which is harmful to most host species. Thus, the production of secondary metabolites with antifouling properties is an important ecological advantage for sessile organisms and may also provide lead compounds for the development of antifouling paints. The high antifouling potential of sponges has been demonstrated in the Indian and Pacific oceans and in the Caribbean and Mediterranean seas. Brazilian sponges remain understudied with respect to antifouling activity; only two scientific articles have reported this activity in sponges from Brazil. The objective of this study was to test crude extracts of twelve species of sponges from Brazil against the attachment of the mussel Perna perna in laboratory assays, and to highlight promising species for future studies. The species Petromica citrina, Amphimedon viridis, Desmapsamma anchorata, Chondrosia sp., Polymastia janeirensis, Tedania ignis, Aplysina fulva, Mycale angulosa, Hymeniacidon heliophila, Dysidea etheria, Tethya rubra, and Tethya maza were frozen and freeze-dried before extraction with acetone or dichloromethane. The crude extracts of four species significantly inhibited byssal attachment: Tethya rubra (p = 0.0009), Tethya maza (p = 0.0039), Petromica citrina (p = 0.0277), and Hymeniacidon heliophila (p = 0.00003). These species in particular should be the target of future studies to identify the substances responsible for the antifouling activity and to define its range of action.

  17. Energy and greenhouse effect. Twelve short notes

    International Nuclear Information System (INIS)

    Prevot, Henri

    2013-12-01

    The author proposes twelve brief notes discussing the reduction of fossil energy consumption in order to reduce CO2 emissions and to improve the security of France's energy supply, without useless expense. The notes address the rationale for energy savings; the cost and price of a tonne of CO2; thermal regulation of buildings (which does not comply with the law and results in higher expenses and increased CO2 emissions); the introduction of a carbon tax to encourage investment in energy saving; the status and health of the European CO2 market; support for actions aimed at reducing fossil energy consumption; the fact that bio-heat is ten times more efficient than bio-fuel, so car owners should finance bio-heat; the development of hybrid uses of energy to avoid the difficulty of energy storage; the reduction of CO2 emissions at low cost (by consuming as much renewable energy as nuclear energy, but without wind or photovoltaic energy); the cost of less CO2, less fossil energy, and less nuclear; and France's interest in acting on its own to reduce CO2 emissions. The author closes with a brief synthesis of these notes and some proposals regarding thermal regulation of buildings, taxes, the European CO2 market, forest biomass, electricity production, and the European and world dimensions of these issues.

  18. The limits of earthquake early warning: Timeliness of ground motion estimates

    OpenAIRE

    Minson, Sarah E.; Meier, Men-Andrin; Baltay, Annemarie S.; Hanks, Thomas C.; Cochran, Elizabeth S.

    2018-01-01

    The basic physics of earthquakes is such that strong ground motion cannot be expected from an earthquake unless the earthquake itself is very close or has grown to be very large. We use simple seismological relationships to calculate the minimum time that must elapse before such ground motion can be expected at a distance from the earthquake, assuming that the earthquake magnitude is not predictable. Earthquake early warning (EEW) systems are in operation or development for many regions aroun...

  19. Countermeasures to earthquakes in nuclear plants

    International Nuclear Information System (INIS)

    Sato, Kazuhide

    1979-01-01

    The contribution of atomic energy to mankind is immeasurable, but radioactivity poses a distinctive danger. In the design of nuclear power plants, therefore, safety has been treated as paramount, and in Japan, where earthquakes occur frequently, countermeasures against earthquakes are naturally incorporated in the safety review. The radioactive substances handled in nuclear power stations and spent fuel reprocessing plants are briefly explained. The occurrence of earthquakes cannot be predicted reliably, and earthquake disasters tend to be extremely large. In nuclear plants, damage to the facilities must be prevented and their functions maintained during earthquakes. Regarding the siting of nuclear plants, the history of earthquakes, the possible magnitude of earthquakes, the properties of the ground, and the position of the plant should be examined. After the site has been decided, the earthquake used for design is selected by evaluating active faults and determining the standard earthquakes. As the fundamentals of aseismic design, the classification of components by importance, the design earthquakes corresponding to each importance class, the combination of loads, and the allowable stresses are explained. (Kako, I.)

  20. Twitter earthquake detection: Earthquake monitoring in a social world

    Science.gov (United States)

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
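
    The short-term-average/long-term-average trigger described in this record can be sketched in a few lines. The window lengths and trigger threshold below are illustrative assumptions, not the USGS operational settings:

    ```python
    # Sketch of a short-term-average / long-term-average (STA/LTA) trigger
    # applied to a per-second count of tweets containing the word "earthquake".
    # Window lengths and threshold are illustrative, not operational values.

    def sta_lta_triggers(counts, sta_len=60, lta_len=600, threshold=5.0):
        """Return indices where the STA/LTA ratio first crosses the threshold."""
        triggers = []
        armed = True
        for i in range(lta_len, len(counts)):
            sta = sum(counts[i - sta_len:i]) / sta_len
            lta = sum(counts[i - lta_len:i]) / lta_len
            ratio = sta / lta if lta > 0 else 0.0
            if ratio >= threshold and armed:
                triggers.append(i)
                armed = False          # stay quiet until the ratio decays again
            elif ratio < threshold:
                armed = True
        return triggers

    # Synthetic example: steady chatter of ~1 tweet/s, then a felt-event burst.
    counts = [1] * 700 + [40] * 120 + [1] * 300
    print(sta_lta_triggers(counts))  # one detection shortly after the burst onset
    ```

    The re-arming logic prevents a single sustained burst from producing repeated detections, mirroring how a tuned detector yields few triggers over months of data.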

  1. Earthquake evaluation of a substation network

    International Nuclear Information System (INIS)

    Matsuda, E.N.; Savage, W.U.; Williams, K.K.; Laguens, G.C.

    1991-01-01

    The impact of the occurrence of a large, damaging earthquake on a regional electric power system is a function of the geographical distribution of strong shaking, the vulnerability of various types of electric equipment located within the affected region, and operational resources available to maintain or restore electric system functionality. Experience from numerous worldwide earthquake occurrences has shown that seismic damage to high-voltage substation equipment is typically the reason for post-earthquake loss of electric service. In this paper, the authors develop and apply a methodology to analyze earthquake impacts on Pacific Gas and Electric Company's (PG and E's) high-voltage electric substation network in central and northern California. The authors' objectives are to identify and prioritize ways to reduce the potential impact of future earthquakes on our electric system, refine PG and E's earthquake preparedness and response plans to be more realistic, and optimize seismic criteria for future equipment purchases for the electric system

  2. Earthquake damage to underground facilities

    International Nuclear Information System (INIS)

    Pratt, H.R.; Hustrulid, W.A.; Stephenson, D.E.

    1978-11-01

    The potential seismic risk for an underground nuclear waste repository will be one of the considerations in evaluating its ultimate location. However, the risk to subsurface facilities cannot be judged by applying intensity ratings derived from the surface effects of an earthquake. A literature review and analysis were performed to document the damage and non-damage due to earthquakes to underground facilities. Damage from earthquakes to tunnels, s, and wells and damage (rock bursts) from mining operations were investigated. Damage from documented nuclear events was also included in the study where applicable. There are very few data on damage in the subsurface due to earthquakes. This fact itself attests to the lessened effect of earthquakes in the subsurface because mines exist in areas where strong earthquakes have done extensive surface damage. More damage is reported in shallow tunnels near the surface than in deep mines. In mines and tunnels, large displacements occur primarily along pre-existing faults and fractures or at the surface entrance to these facilities.Data indicate vertical structures such as wells and shafts are less susceptible to damage than surface facilities. More analysis is required before seismic criteria can be formulated for the siting of a nuclear waste repository

  3. Global earthquake fatalities and population

    Science.gov (United States)

    Holzer, Thomas L.; Savage, James C.

    2013-01-01

    Modern global earthquake fatalities can be separated into two components: (1) fatalities from an approximately constant annual background rate that is independent of world population growth and (2) fatalities caused by earthquakes with large human death tolls, the frequency of which is dependent on world population. Earthquakes with death tolls greater than 100,000 (and 50,000) have increased with world population and obey a nonstationary Poisson distribution with rate proportional to population. We predict that the number of earthquakes with death tolls greater than 100,000 (50,000) will increase in the 21st century to 8.7±3.3 (20.5±4.3) from 4 (7) observed in the 20th century if world population reaches 10.1 billion in 2100. Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (>100,000 fatalities) indicates global fatalities in the 21st century will be 2.57±0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed.
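
    The scaling at the heart of this forecast can be checked with rough numbers: if the Poisson rate is proportional to world population, the expected 21st-century count is the 20th-century count scaled by the ratio of time-averaged populations, with a Poisson standard deviation of the square root of the mean. The average-population figures below are ballpark values chosen for illustration, not the paper's inputs:

    ```python
    import math

    # Back-of-the-envelope check of a nonstationary Poisson forecast whose
    # rate is proportional to world population.  Population averages are
    # rough illustrative values, not taken from Holzer & Savage (2013).

    def scaled_poisson_forecast(observed_20th, mean_pop_20th, mean_pop_21st):
        """Expected count and Poisson std. dev. when the rate scales with population."""
        expected = observed_20th * (mean_pop_21st / mean_pop_20th)
        return expected, math.sqrt(expected)

    mean_21st, sigma = scaled_poisson_forecast(observed_20th=4,
                                               mean_pop_20th=3.3e9,  # rough 1900-2000 average
                                               mean_pop_21st=7.5e9)  # rough 2000-2100 average
    print(f"expected catastrophic earthquakes: {mean_21st:.1f} +/- {sigma:.1f}")
    ```

    With these rough inputs the sketch lands near the paper's 8.7±3.3 forecast for death tolls greater than 100,000, which is the expected behavior of a rate-proportional-to-population model.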

  4. Development of a Standardized Methodology for the Use of COSI-Corr Sub-Pixel Image Correlation to Determine Surface Deformation Patterns in Large Magnitude Earthquakes.

    Science.gov (United States)

    Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.

    2014-12-01

    Coseismic surface deformation is typically measured in the field by geologists and with a range of geophysical methods such as InSAR, LiDAR and GPS. Current methods, however, either fail to capture the near-field coseismic surface deformation pattern where vital information is needed, or lack pre-event data. We develop a standardized and reproducible methodology to fully constrain the surface, near-field, coseismic deformation pattern in high resolution using aerial photography. We apply our methodology using the program COSI-corr to successfully cross-correlate pairs of aerial, optical imagery before and after the 1992, Mw 7.3 Landers and 1999, Mw 7.1 Hector Mine earthquakes. This technique allows measurement of the coseismic slip distribution and magnitude and width of off-fault deformation with sub-pixel precision. This technique can be applied in a cost effective manner for recent and historic earthquakes using archive aerial imagery. We also use synthetic tests to constrain and correct for the bias imposed on the result due to use of a sliding window during correlation. Correcting for artificial smearing of the tectonic signal allows us to robustly measure the fault zone width along a surface rupture. Furthermore, the synthetic tests have constrained for the first time the measurement precision and accuracy of estimated fault displacements and fault-zone width. Our methodology provides the unique ability to robustly understand the kinematics of surface faulting while at the same time accounting for both off-fault deformation and measurement biases that typically complicates such data. For both earthquakes we find that our displacement measurements derived from cross-correlation are systematically larger than the field displacement measurements, indicating the presence of off-fault deformation. We show that the Landers and Hector Mine earthquake accommodated 46% and 38% of displacement away from the main primary rupture as off-fault deformation, over a mean

  5. How fault geometry controls earthquake magnitude

    Science.gov (United States)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly-curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.

  6. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    Science.gov (United States)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite-fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extended > 150 km NW from the hypocenter, longer than slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress

  7. Moment-ratio imaging of seismic regions for earthquake prediction

    Science.gov (United States)

    Lomnitz, Cinna

    1993-10-01

    An algorithm for predicting large earthquakes is proposed. The reciprocal ratio (mri) of the residual seismic moment to the total moment release in a region is used for imaging seismic moment precursors. Peaks in mri predict recent major earthquakes, including the 1985 Michoacan, 1985 central Chile, and 1992 Eureka, California earthquakes.
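
    The seismic moments entering such a ratio can be derived from catalog magnitudes via the standard Hanks-Kanamori relation. The sketch below shows a simple moment-release fraction for a set of events; it is an illustration of moment bookkeeping, not Lomnitz's exact mri definition:

    ```python
    # Moment bookkeeping sketch.  moment_Nm uses the standard Hanks-Kanamori
    # relation; the "fraction" computed here is a simplified illustration of
    # how dominant a single event is in a region's release, not Lomnitz's mri.

    def moment_Nm(mw):
        """Seismic moment in newton-metres from moment magnitude."""
        return 10 ** (1.5 * mw + 9.1)

    # Fraction of a hypothetical region's total moment release carried by its
    # largest event: values near 1 mean release was dominated by one shock.
    mags = [5.0, 5.5, 6.0, 7.8]
    moments = [moment_Nm(m) for m in mags]
    print(max(moments) / sum(moments))
    ```

    Because moment grows by a factor of about 32 per magnitude unit, the largest event dominates the sum, which is why moment-based ratios are far more sensitive to major earthquakes than event counts are.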

  8. Fault lubrication during earthquakes.

    Science.gov (United States)

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s⁻¹) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s⁻¹. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

  9. Laboratory generated M -6 earthquakes

    Science.gov (United States)

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.

  10. End-User Applications of Real-Time Earthquake Information in Europe

    Science.gov (United States)

    Cua, G. B.; Gasparini, P.; Giardini, D.; Zschau, J.; Filangieri, A. R.; Reakt Wp7 Team

    2011-12-01

    The primary objective of European FP7 project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction) is to improve the efficiency of real-time earthquake risk mitigation methods and their capability of protecting structures, infrastructures, and populations. REAKT aims to address the issues of real-time earthquake hazard and response from end-to-end, with efforts directed along the full spectrum of methodology development in earthquake forecasting, earthquake early warning, and real-time vulnerability systems, through optimal decision-making, and engagement and cooperation of scientists and end users for the establishment of best practices for use of real-time information. Twelve strategic test cases/end users throughout Europe have been selected. This diverse group of applications/end users includes civil protection authorities, railway systems, hospitals, schools, industrial complexes, nuclear plants, lifeline systems, national seismic networks, and critical structures. The scale of target applications covers a wide range, from two school complexes in Naples, to individual critical structures, such as the Rion Antirion bridge in Patras, and the Fatih Sultan Mehmet bridge in Istanbul, to large complexes, such as the SINES industrial complex in Portugal and the Thessaloniki port area, to distributed lifeline and transportation networks and nuclear plants. Some end-users are interested in in-depth feasibility studies for use of real-time information and development of rapid response plans, while others intend to install real-time instrumentation and develop customized automated control systems. From the onset, REAKT scientists and end-users will work together on concept development and initial implementation efforts using the data products and decision-making methodologies developed with the goal of improving end-user risk mitigation. The aim of this scientific/end-user partnership is to ensure that scientific efforts are applicable to operational

  11. Real Time Earthquake Information System in Japan

    Science.gov (United States)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan had been developed by the Japan Meteorological Agency (JMA) as a governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public with locating an earthquake and estimating its magnitude as quickly as possible. Years after, a system for a prompt provision of seismic intensity information as indices of degrees of disasters caused by strong ground motion was also developed so that concerned governmental organizations can decide whether it was necessary for them to launch emergency response or not. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) Prompt report of occurrence of a large earthquake and major seismic intensities caused by the earthquake in about two minutes after the earthquake occurrence. 2) Tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on a hypocenter and a magnitude of the earthquake, the seismic intensity at each observation station, the times of high tides in addition to the expected tsunami arrival times in 5-7 minutes. To issue information above, JMA has established; - An advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation including about 2,800 seismic intensity stations maintained by local governments, - Data telemetry networks via landlines and partly via a satellite communication link, - Real-time data processing techniques, for example, the automatic calculation of earthquake location and magnitude, the database driven method for quantitative tsunami estimation, and - Dissemination networks, via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  12. Twitter earthquake detection: earthquake monitoring in a social world

    Directory of Open Access Journals (Sweden)

    Daniel C. Bowden

    2011-06-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word “earthquake” clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.

  13. Earthquake forecast for the Wasatch Front region of the Intermountain West

    Science.gov (United States)

    DuRoss, Christopher B.

    2016-04-18

    The Working Group on Utah Earthquake Probabilities has assessed the probability of large earthquakes in the Wasatch Front region. There is a 43 percent probability of one or more magnitude 6.75 or greater earthquakes and a 57 percent probability of one or more magnitude 6.0 or greater earthquakes in the region in the next 50 years. These results highlight the threat of large earthquakes in the region.
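
    If these 50-year probabilities are treated as coming from a Poisson process (an assumption of this sketch, not necessarily the Working Group's actual model), the equivalent annual rate and mean recurrence interval follow directly from P(at least one event) = 1 - exp(-rate x years):

    ```python
    import math

    # Convert a "probability of one or more events in 50 years" into an
    # equivalent annual Poisson rate and mean recurrence interval.
    # Treating the forecast as Poissonian is this sketch's assumption.

    def poisson_rate_from_prob(p, years=50.0):
        """Annual rate lambda such that P(>=1 event in `years`) equals p."""
        return -math.log(1.0 - p) / years

    for p in (0.43, 0.57):   # the M>=6.75 and M>=6.0 probabilities above
        lam = poisson_rate_from_prob(p)
        print(f"P={p:.0%}: rate {lam:.4f}/yr, mean recurrence {1 / lam:.0f} yr")
    ```

    Under this reading, a 43 percent chance in 50 years corresponds to a mean recurrence on the order of 90 years for a magnitude 6.75 or greater earthquake in the region.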

  14. Review of Van earthquakes form an orthopaedic perspective: a multicentre retrospective study.

    Science.gov (United States)

    Guner, Savas; Guner, Sukriye Ilkay; Isik, Yasemin; Gormeli, Gokay; Kalender, Ali Murat; Turktas, Ugur; Gokalp, Mehmet Ata; Gozen, Abdurrahim; Isik, Mustafa; Ozkan, Sezai; Turkozu, Tulin; Karadas, Sevdegul; Ceylan, Mehmet Fethi; Ediz, Levent; Bulut, Mehmet; Gunes, Yusuf; Gormeli, Ayse; Erturk, Cemil; Eseoglu, Metehan; Dursun, Recep

    2013-01-01

    This is a descriptive analysis of victims of Turkey's October 23, 2011 and November 21, 2011 Van earthquakes. The goal of this study is to investigate the injury profile of both earthquakes in relation to musculoskeletal trauma. We retrospectively reviewed medical records of 3,965 patients admitted to seven hospitals. A large share of these injuries were soft tissue injuries, followed by fractures, crush injuries, crush syndromes, nerve injuries, vascular injuries, compartment syndrome and joint dislocations. A total of 73 crush injuries were diagnosed, and 31 of them developed compartment syndrome. Patients with closed undisplaced fractures were treated with casting braces. For closed unstable fractures with good skin and soft-tissue conditions, open reduction and internal fixation was performed. All patients with open fractures had an external fixator applied after adequate debridement. Thirty-one of 40 patients with compartment syndrome were treated by fasciotomy; for twelve of them, amputation was necessary. The most common procedure performed was debridement, followed by open reduction and internal fixation and closed reduction-casting, respectively. The results of this study may provide the basis for future development of strategies to optimise rescue attempts and to plan treatment of survivors with musculoskeletal injuries after earthquakes.

  15. Detecting Significant Stress Drop Variations in Large Micro-Earthquake Datasets: A Comparison Between a Convergent Step-Over in the San Andreas Fault and the Ventura Thrust Fault System, Southern California

    Science.gov (United States)

    Goebel, T. H. W.; Hauksson, E.; Plesch, A.; Shaw, J. H.

    2017-06-01

    A key parameter in engineering seismology and earthquake physics is seismic stress drop, which describes the relative amount of high-frequency energy radiation at the source. To identify regions with potentially significant stress drop variations, we perform a comparative analysis of source parameters in the greater San Gorgonio Pass (SGP) and Ventura basin (VB) in southern California. The identification of physical stress drop variations is complicated by large data scatter as a result of attenuation, limited recording bandwidth and imprecise modeling assumptions. In light of the inherently high uncertainties in single stress drop measurements, we follow the strategy of stacking large numbers of source spectra thereby enhancing the resolution of our method. We analyze more than 6000 high-quality waveforms between 2000 and 2014, and compute seismic moments, corner frequencies and stress drops. Significant variations in stress drop estimates exist within the SGP area. Moreover, the SGP also exhibits systematically higher stress drops than VB and shows more scatter. We demonstrate that the higher scatter in SGP is not a generic artifact of our method but an expression of differences in underlying source processes. Our results suggest that higher differential stresses, which can be deduced from larger focal depth and more thrust faulting, may only be of secondary importance for stress drop variations. Instead, the general degree of stress field heterogeneity and strain localization may influence stress drops more strongly, so that more localized faulting and homogeneous stress fields favor lower stress drops. In addition, higher loading rates, for example, across the VB potentially result in stress drop reduction whereas slow loading rates on local fault segments within the SGP region result in anomalously high stress drop estimates. 
Our results show that crustal and fault properties systematically influence earthquake stress drops of small and large events and should
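
    The step from corner frequency to stress drop described in this record is commonly done with a Brune-type circular source model. The sketch below assumes k = 0.37 and an S-wave speed of 3.5 km/s; these are conventional illustrative values, not necessarily the parameters used in this study:

    ```python
    # Brune-type circular-crack stress drop from seismic moment and corner
    # frequency: source radius r = k * beta / fc, stress drop = 7*M0/(16*r^3).
    # k = 0.37 and beta = 3500 m/s are conventional assumed values.

    def brune_stress_drop(m0_Nm, fc_Hz, beta_m_s=3500.0, k=0.37):
        """Stress drop in Pa for a circular crack of radius k*beta/fc."""
        r = k * beta_m_s / fc_Hz          # source radius, metres
        return 7.0 * m0_Nm / (16.0 * r ** 3)

    # A hypothetical M4 earthquake (M0 ~ 1.26e15 N m) with a 2 Hz corner:
    print(brune_stress_drop(1.26e15, 2.0) / 1e6, "MPa")
    ```

    The cubic dependence on corner frequency is why single-spectrum stress drop estimates scatter so badly, and why stacking many source spectra, as done above, sharpens the resolution of regional variations.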

  16. Book review: Earthquakes and water

    Science.gov (United States)

    Bekins, Barbara A.

    2012-01-01

    It is really nice to see assembled in one place a discussion of the documented and hypothesized hydrologic effects of earthquakes. The book is divided into chapters focusing on particular hydrologic phenomena including liquefaction, mud volcanism, stream discharge increases, groundwater level, temperature and chemical changes, and geyser period changes. These hydrologic effects are inherently fascinating, and the large number of relevant publications in the past decade makes this summary a useful milepost. The book also covers hydrologic precursors and earthquake triggering by pore pressure. A natural need to limit the topics covered resulted in the omission of tsunamis and the vast literature on the role of fluids and pore pressure in frictional strength of faults. Regardless of whether research on earthquake-triggered hydrologic effects ultimately provides insight into the physics of earthquakes, the text provides welcome common ground for interdisciplinary collaborations between hydrologists and seismologists. Such collaborations continue to be crucial for investigating hypotheses about the role of fluids in earthquakes and slow slip. 

  17. Spatial Evaluation and Verification of Earthquake Simulators

    Science.gov (United States)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary. Following past simulator and forecast-model verification methods, we adapt spatial forecast verification to simulators, addressing a key discrepancy: simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing it: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
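
    The power-law smoothing described in this record can be sketched as a distance-decaying kernel summed over simulated events, with each event's kernel normalized so it contributes one unit of rate to the map. The exponent and distance scale below are illustrative assumptions, not fitted values from the paper:

    ```python
    import math

    # Power-law spatial smoothing sketch: each simulated earthquake's rate is
    # spread over the whole grid with a (d + d0)**(-q) kernel.  The exponent
    # q and scale d0 are illustrative, not the paper's fitted parameters.

    def smoothed_rate_map(epicenters, grid, q=1.5, d0=1.0):
        """Sum a normalized distance-decay kernel from each epicenter over grid cells."""
        rates = [0.0] * len(grid)
        for ex, ey in epicenters:
            weights = [(math.hypot(gx - ex, gy - ey) + d0) ** (-q) for gx, gy in grid]
            total = sum(weights)
            for i, w in enumerate(weights):
                rates[i] += w / total      # each event contributes one unit of rate
        return rates

    grid = [(x, y) for x in range(10) for y in range(10)]
    rates = smoothed_rate_map([(2.0, 2.0), (7.0, 7.0)], grid)
    print(max(rates), sum(rates))  # peaks near the epicenters; total equals the event count
    ```

    Because every cell receives some rate, an observed epicenter off the modeled faults still scores nonzero forecast rate, which is exactly the property the nearest-neighbor mapping lacks.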

  18. Digitally controlled twelve-pulse firing generator

    International Nuclear Information System (INIS)

    Berde, D.; Ferrara, A.A.

    1981-01-01

    Control System Studies for the Tokamak Fusion Test Reactor (TFTR) indicate that accurate thyristor firing in the AC-to-DC conversion system is required in order to achieve good regulation of the various field currents. Rapid update and exact firing angle control are required to avoid instabilities, large eddy currents, or parasitic oscillations. The Prototype Firing Generator was designed to satisfy these requirements. To achieve the required ±0.77° firing accuracy, a three-phase-locked loop reference was designed; otherwise, the Firing Generator employs digital circuitry. The unit, housed in a standard CAMAC crate, operates under microcomputer control. Functions are performed under program control, which resides in nonvolatile read-only memory. Communication with the CICADA control system is provided via an 11-bit parallel interface.
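
    The timing tolerance implied by a ±0.77° firing-accuracy requirement follows directly from the line frequency, since one cycle spans 360 electrical degrees. A worked example assuming a 60 Hz system (the line frequency is this sketch's assumption; the abstract does not state it):

    ```python
    # One line cycle is 360 electrical degrees in 1/f seconds, so each degree
    # of firing angle corresponds to a fixed time interval; a degree tolerance
    # therefore maps to a timing tolerance.  60 Hz is an assumed frequency.

    line_hz = 60.0
    us_per_degree = 1e6 / (line_hz * 360.0)   # microseconds per electrical degree
    tolerance_us = 0.77 * us_per_degree
    print(f"{us_per_degree:.1f} us/degree -> firing tolerance of +/-{tolerance_us:.1f} us")
    ```

    A tolerance of a few tens of microseconds explains why a phase-locked reference and fully digital timing, rather than analog ramps, were chosen for the firing generator.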

  19. Earthquakes: hydrogeochemical precursors

    Science.gov (United States)

    Ingebritsen, Steven E.; Manga, Michael

    2014-01-01

    Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

  20. Ground water and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ts' ai, T H

    1977-11-01

    Chinese folk wisdom has long seen a relationship between ground water and earthquakes. Before an earthquake there is often an unusual change in the ground water level and volume of flow. Changes in the amount of particulate matter in ground water as well as changes in color, bubbling, gas emission, and noises and geysers are also often observed before earthquakes. Analysis of these features can help predict earthquakes. Other factors unrelated to earthquakes can cause some of these changes, too. As a first step it is necessary to find sites which are sensitive to changes in ground stress to be used as sensor points for predicting earthquakes. The necessary features are described. Recording of seismic waves of earthquake aftershocks is also an important part of earthquake predictions.

  1. Earthquake location in island arcs

    Science.gov (United States)

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  2. Twelve clinically significant points in medulloblastoma

    International Nuclear Information System (INIS)

    Sugiyama, Kazuhiko; Yamasaki, Fumiyuki; Kajiwara, Yoshinori; Watanabe, Yosuke; Takayasu, Takeshi; Kurisu, Kaoru

    2011-01-01

    Though medulloblastoma is the most common malignant brain tumor of childhood, only about 80 newly diagnosed tumors develop every year, as the annual incidence is extremely low: 0.5 per 100,000 children younger than 15 years old and 0.7 per 100,000 for the entire population. Images of medulloblastoma are characterized by a round, heterogeneously enhanced mass in or adjacent to the IVth ventricle. The objectives of surgical treatment are maximal resection of the main mass and relief of the obstructive hydrocephalus. Cerebellar mutism occurs a few days after surgery in about one fourth of medulloblastoma cases, lasts for approximately 50 days, and is followed by subsequent dysarthria. Pathological subtypes include classic medulloblastoma, desmoplastic/nodular medulloblastoma, medulloblastoma with extensive nodularity, and large cell/anaplastic medulloblastoma, all corresponding to World Health Organization (WHO) grade IV. According to age, residual tumor size, and dissemination staging, patients are divided after surgery into an average-risk group, a high-risk group, or baby medulloblastoma. Standard treatment in the average-risk group includes 23.4-Gy craniospinal irradiation (CSI) with a posterior boost followed by chemotherapy consisting of cisplatin (CDDP), alkylating agents, and vincristine. Patients in the high-risk group receive over 36-Gy CSI with boost radiotherapy to nodular lesions before, concomitantly with, or followed by dose-intensive chemotherapy. In cases with gross total removal or desmoplastic/nodular pathology, radiotherapy for patients younger than 3 years old is often delayed until they turn 3, and such patients are able to survive for a long time with appropriate chemotherapy alone. Adolescent survivors of childhood medulloblastoma have a number of late adverse effects involving secondary neoplasms, neurocognitive function, endocrine activity, the cardiovascular system, and the skeletal system. A comprehensive follow-up and support system is mandatory. (author)

  3. Bacteriological And Clinical Evaluation Of Twelve Cases Of Post ...

    African Journals Online (AJOL)

    Bacteriological And Clinical Evaluation Of Twelve Cases Of Post-Surgical Sepsis Of Odontogenic Tumours At A ... East African Medical Journal ... Intervention: Adequate review of patient's medical history, bacteriological investigations and

  4. Vegetative propagation of twelve fodder tree species indigenous to ...

    African Journals Online (AJOL)

    Vegetative propagation of twelve fodder tree species indigenous to the Sahel, West Africa. Catherine Ky-Dembele, Jules Bayala, Antoine Kalinganire, Fatoumata Tata Traoré, Bréhima Koné, Alain Olivier ...

  5. Ionospheric earthquake precursors

    International Nuclear Information System (INIS)

    Bulachenko, A.L.; Oraevskij, V.N.; Pokhotelov, O.A.; Sorokin, V.N.; Strakhov, V.N.; Chmyrev, V.M.

    1996-01-01

    Results of experimental studies on ionospheric earthquake precursors, program development on processes in the earthquake focus, and physical mechanisms of the formation of various types of precursors are considered. The composition of an experimental space system for monitoring earthquake precursors is determined. 36 refs., 5 figs.

  6. Children's Ideas about Earthquakes

    Science.gov (United States)

    Simsek, Canan Lacin

    2007-01-01

    Earthquakes, a natural disaster, are among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be reduced to that extent. In particular, good training about earthquakes received in primary schools is considered…

  7. Earthquake outlook for the San Francisco Bay region 2014–2043

    Science.gov (United States)

    Aagaard, Brad T.; Blair, James Luke; Boatwright, John; Garcia, Susan H.; Harris, Ruth A.; Michael, Andrew J.; Schwartz, David P.; DiLeo, Jeanne S.; Jacques, Kate; Donlin, Carolyn

    2016-06-13

    Using information from recent earthquakes, improved mapping of active faults, and a new model for estimating earthquake probabilities, the 2014 Working Group on California Earthquake Probabilities updated the 30-year earthquake forecast for California. They concluded that there is a 72 percent probability (or likelihood) of at least one earthquake of magnitude 6.7 or greater striking somewhere in the San Francisco Bay region before 2043. Earthquakes this large are capable of causing widespread damage; therefore, communities in the region should take simple steps to help reduce injuries, damage, and disruption, as well as accelerate recovery from these earthquakes.

  8. Megathrust earthquakes in Central Chile: What is next after the Maule 2010 earthquake?

    Science.gov (United States)

    Madariaga, R.

    2013-05-01

    The 27 February 2010 Maule earthquake occurred in a well-identified gap in the Chilean subduction zone. The event has now been studied in detail using far-field and near-field seismic and geodetic data; we review the information gathered so far. The event broke a region that was much longer along strike than the gap left over from the 1835 Concepcion earthquake, sometimes called the Darwin earthquake because Darwin was in the area when it occurred and made many observations. Recent studies of contemporary documents by Udias et al. indicate that the area broken by the Maule earthquake in 2010 had previously been broken by a similar earthquake in 1751, although several events in the magnitude 8 range occurred in the area, principally in 1835 as already mentioned and, more recently, on 1 December 1928 to the north and on 21 May 1960 (1 1/2 days before the great Chilean earthquake of 1960). Currently the area of the 2010 earthquake and the region immediately to the north are undergoing a very large increase in seismicity, with numerous clusters of seismicity that move along the plate interface. Examination of the seismicity of Chile in the 18th and 19th centuries shows that the region immediately to the north of the 2010 earthquake broke in a very large megathrust event in July 1730. This is the largest known earthquake in central Chile. The region where this event occurred has broken on many occasions with M 8 range earthquakes, in 1822, 1880, 1906, 1971 and 1985. Is it preparing for a new very large megathrust event? The 1906 earthquake of Mw 8.3 filled the central part of the gap, but the area has broken again on several occasions, in 1971, 1973 and 1985. The main question is whether the 1906 earthquake relieved enough stress from the 1730 rupture zone.
Geodetic data show that most of the region that broke in 1730 is currently almost fully locked, from the northern end of the Maule earthquake at 34.5°S to 30°S, near the southern end of the Mw 8.5 Atacama earthquake of 11

  9. Comparative analysis of family poultry production in twelve African countries

    International Nuclear Information System (INIS)

    Goodger, W.J.; Bennett, T.B.; Dwinger, R.H.

    2002-01-01

    The purpose of the research was to conduct a survey on family poultry to obtain information on disease prevalence, feeding practices, and the management of poultry housing in twelve African countries. The survey data were collected during both the wet and dry seasons and summarised (average and standard deviation) by country, village/region, season, and survey question. The disease data show that three of the top four reported symptoms (greenish/bloody diarrhoea, swollen head, and coughing) are among the presenting signs of Newcastle disease. Chick mortality was also higher in the wet season, when there is a higher incidence of Newcastle disease. This was supported by the individual country data, in that countries with high chick mortality also had low hatchability in the wet season, Egypt being the only exception. The types of housing used to shelter family poultry were quite variable and made it challenging to determine the level of cleaning/sanitation needed to assist in controlling Newcastle disease. On the one hand, a large percentage of households reported never cleaning the poultry house (e.g., Cameroon, Morocco, Mauritius, and Sudan). On the other hand, 34% of the responses to housing type were either trees or other forms of housing that would be difficult to clean, e.g., an old car, a fence, or a surrounding wall. These results should be closely examined when instituting control programs for Newcastle disease. The large variety of available scavenged feed, without any data on intake, raises the question of how to balance the ration for the flock. Family poultry scientists need to determine a method to estimate intake, which could assist in determining what supplementary feed, if any, is necessary. This challenge may be one of the most important aspects of family poultry management because of the importance of nutrition to poultry production, with the added difficulty of providing balanced nutrition in an extensive system. (author)

  10. Earthquake activity along the Himalayan orogenic belt

    Science.gov (United States)

    Bai, L.; Mori, J. J.

    2017-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture processes for three earthquake sequences, located in the western, central, and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes of the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma arc subduction zone is a typical oblique plate convergence zone. Its eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths of less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  11. Crowdsourced earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  12. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews

    1994-06-01

    Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: (1) regeneration of stress irregularities in an earthquake; (2) the concentration of stress drop in an earthquake into asperities; (3) starting and stopping of earthquake slip at fault junctions; and (4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction, and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the
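
    The quoted bound P/µ = 0.01 at 10 km depth can be sanity-checked with generic crustal values. The density, gravity, and shear modulus below are typical textbook numbers assumed for illustration, not values taken from the abstract:

```python
# Rough check of the claim that P/mu ~ 0.01 at 10 km depth,
# using generic crustal values (assumptions, not from the paper).
rho = 2700.0      # crustal density, kg/m^3 (assumed)
g = 9.8           # gravitational acceleration, m/s^2
h = 10e3          # depth, m
mu = 30e9         # shear modulus of crustal rock, Pa (assumed)

P = rho * g * h   # lithostatic confining pressure, Pa (~2.6e8 Pa)
ratio = P / mu    # dimensionless bound on absorbed/released energy

print(f"P = {P:.3g} Pa, P/mu = {ratio:.3f}")
```

    With these values the ratio comes out near 0.009, consistent with the order-of-magnitude figure P/µ = 0.01 quoted in the abstract.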

  13. Overestimation of the earthquake hazard along the Himalaya: constraints in bracketing of medieval earthquakes from paleoseismic studies

    Science.gov (United States)

    Arora, Shreya; Malik, Javed N.

    2017-12-01

    The Himalaya is one of the most seismically active regions of the world. The occurrence of several large magnitude earthquakes, viz. the 1905 Kangra earthquake (Mw 7.8), the 1934 Bihar-Nepal earthquake (Mw 8.2), the 1950 Assam earthquake (Mw 8.4), the 2005 Kashmir earthquake (Mw 7.6), and the 2015 Gorkha earthquake (Mw 7.8), is testimony to ongoing tectonic activity. In the last few decades, tremendous efforts have been made along the Himalayan arc to understand the patterns of earthquake occurrence, size, extent, and return periods. Some of the large magnitude earthquakes produced surface rupture, while some remained blind. Furthermore, due to the incompleteness of the earthquake catalogue, very few events can be correlated with medieval earthquakes. Based on the existing paleoseismic data, it is difficult to precisely determine the extent of surface rupture of these earthquakes, and also of those events that occurred during historic times. In this paper, we have compiled the paleoseismological data, recalibrated the radiocarbon ages from the trenches excavated by previous workers along the entire Himalaya, and compared the present earthquake scenario with the past. Our studies suggest that there were multiple earthquake events with overlapping surface ruptures in small patches, with an average rupture length of 300 km limiting Mw 7.8-8.0 for the Himalayan arc, rather than two or three giant earthquakes rupturing the whole front. The large magnitude Himalayan earthquakes, such as 1905 Kangra, 1934 Bihar-Nepal, and 1950 Assam, occurred within a time frame of 45 years. If these events were dated, there is a high possibility that within a range of ±50 years they would be considered the remnants of one giant earthquake rupturing the entire Himalayan arc, leading to an overestimation of the seismic hazard scenario in the Himalaya.

  14. Survey of damage to 602 MR scanners after the Great East Japan Earthquake

    International Nuclear Information System (INIS)

    Nakai, Toshiharu; Yamaguchi-Sekino, Sachiko; Tsuchihashi, Toshio

    2013-01-01

    An earthquake of magnitude 9.0, the largest in modern Japanese history, struck east Japan on March 11, 2011. We investigated hazards and observations related to magnetic resonance (MR) scanners in this earthquake to evaluate potential risks and to consider further prevention or minimization of scanner damage and patient injury in such large earthquakes. The investigation team, funded by MHLW, sent questionnaires to the 984 facilities with installed MR scanners in 7 prefectures of east Japan (Iwate, Miyagi, Fukushima, Ibaraki, Chiba, Tokyo, Saitama) and collected 458 responses (46.6%) with information on 602 MR scanners (144 units ≤0.5 T, 31 units at 1 T, 371 units at 1.5 T, and 56 units ≥3 T). Significant differences in damage were observed between seismic intensity 5 and 6 (χ² test, P<0.001 for all items of damage checked). The frequencies of typical damage were displacement of magnets (12.4%), failure of the chiller or air conditioning (9.6%), rapid decrease in liquid helium (8.4%), damage to the magnet enclosure and its equipment (7.6%), damage to the shielding of the MR scanner room (6.1%), damage to the quench duct (4.5%), breakage of devices anchoring system cabinets (4.4%), damage to the magnet base (3.9%), and flying of metal components (1.5%). Twelve facilities reported flooding by the subsequent tsunami, and quench was confirmed in 19 facilities. No fire damage was reported. It was confirmed that no one was severely injured in an MR scanner, and base isolation of the building was very useful, completely preventing damage even at seismic intensity 7. In the future, training for evacuation, establishment of a standard protocol for emergency shutdown of MR scanners, onsite checking by MR operators, and emergency power equipment to maintain chillers for MR scanners will further ensure MR safety in an earthquake. (author)
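
    The abstract's χ² comparison of damage rates can be reproduced in outline. The 2×2 contingency table below uses invented counts (damaged vs. undamaged scanners at two intensity levels) purely to illustrate the test; the paper's actual counts are not reported here. For one degree of freedom, the p-value follows from P(χ² > x) = erfc(√(x/2)):

```python
import math

def chi2_2x2(table):
    """Pearson chi-squared statistic and p-value (df=1) for a 2x2 table.

    table = [[a, b], [c, d]] of observed counts.
    For 1 degree of freedom, P(chi2 > x) = erfc(sqrt(x/2)).
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n          # expected count under independence
        stat += (obs - expected) ** 2 / expected
    p = math.erfc(math.sqrt(stat / 2.0))  # survival function of chi2(1)
    return stat, p

# Hypothetical counts: [damaged, undamaged] at intensity 5 vs. intensity 6.
observed = [[20, 180], [60, 140]]
stat, p = chi2_2x2(observed)
print(f"chi2 = {stat:.2f}, p = {p:.2e}")
```

    With these made-up counts the statistic is 25 and p is far below 0.001, matching the kind of significance level the survey reports.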

  15. Variations of local seismic response in Benevento (Southern Italy) using earthquakes and ambient noise recordings

    Science.gov (United States)

    Improta, Luigi; di Giulio, Giuseppe; Rovelli, Antonio

    The city of Benevento (Southern Italy) has been repeatedly struck by large historical earthquakes. A heterogeneous geologic structure and widespread soft soil conditions make the estimation of site effects crucial for the seismic hazard assessment of the city. From 2000 until 2004, we installed seismic stations to collect earthquake data over zones with different geological conditions. Despite the high level of urban noise, we recorded more than 150 earthquakes at twelve sites. This data set yields the first well-documented experimental evidence for weak to moderate local amplifications. We investigated site effects primarily by the classical spectral ratio technique (CSR), using a rock station placed on the Benevento hill as reference. All sites in the Calore river valley and in the eastern part of the Benevento hill show a moderate high-frequency (f > 4 Hz) amplification peak. Conversely, sites in the Sabato river valley share weak-to-moderate amplification in a wide frequency band (from 1-2 to 7-10 Hz), without evident frequency peaks. Application of no-reference-site techniques to earthquake and noise data confirms the results of the CSRs at the sites of the Calore river valley and of the eastern part of the Benevento hill, but fails to provide indications of site effects in the Sabato river valley, as the H/V ratios are nearly flat. One-dimensional modeling indicates that the ground motion amplification can be essentially explained in terms of a vertically varying geologic structure. High-frequency narrow peaks are caused by the strong impedance contrast existing between near-surface soft deposits and stiff cemented conglomerates. Conversely, broad-band amplifications in the Sabato river valley are likely due to a more complex layering with weak impedance contrasts both in the shallow and deep structure of the valley.
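
    In outline, the classical spectral ratio divides the smoothed amplitude spectrum of a site recording by that of a nearby reference-rock recording of the same event. The sketch below uses synthetic signals and a simple boxcar smoother; the function name and parameters are illustrative assumptions, not taken from the study:

```python
import numpy as np

def spectral_ratio(site, ref, fs, smooth=5):
    """Classical spectral ratio (CSR): |FFT(site)| / |FFT(ref)|.

    site, ref : equal-length records of the same earthquake
    fs        : sampling rate in Hz
    smooth    : width of the boxcar smoother applied to each amplitude spectrum
    Returns (frequencies, ratio).
    """
    kernel = np.ones(smooth) / smooth
    amp_site = np.convolve(np.abs(np.fft.rfft(site)), kernel, mode="same")
    amp_ref = np.convolve(np.abs(np.fft.rfft(ref)), kernel, mode="same")
    freqs = np.fft.rfftfreq(len(site), d=1.0 / fs)
    return freqs, amp_site / amp_ref

# Toy check: a site record that is simply the reference amplified 3x
# should give a flat spectral ratio near 3 at all frequencies.
rng = np.random.default_rng(0)
ref = rng.standard_normal(4096)
site = 3.0 * ref
freqs, ratio = spectral_ratio(site, ref, fs=100.0)
print(ratio[10:20].round(2))
```

    A real application would window the S-wave portion of each record and average ratios over many events before interpreting amplification peaks.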

  16. Coping with earthquakes induced by fluid injection

    Science.gov (United States)

    McGarr, Arthur F.; Bekins, Barbara; Burkardt, Nina; Dewey, James W.; Earle, Paul S.; Ellsworth, William L.; Ge, Shemin; Hickman, Stephen H.; Holland, Austin F.; Majer, Ernest; Rubinstein, Justin L.; Sheehan, Anne

    2015-01-01

    Large areas of the United States long considered geologically stable with little or no detected seismicity have recently become seismically active. The increase in earthquake activity began in the mid-continent starting in 2001 (1) and has continued to rise. In 2014, the rate of occurrence of earthquakes with magnitudes (M) of 3 and greater in Oklahoma exceeded that in California (see the figure). This elevated activity includes larger earthquakes, several with M > 5, that have caused significant damage (2, 3). To a large extent, the increasing rate of earthquakes in the mid-continent is due to fluid-injection activities used in modern energy production (1, 4, 5). We explore potential avenues for mitigating effects of induced seismicity. Although the United States is our focus here, Canada, China, the UK, and others confront similar problems associated with oil and gas production, whereas quakes induced by geothermal activities affect Switzerland, Germany, and others.

  17. Disturbances in equilibrium function after major earthquake.

    Science.gov (United States)

    Honma, Motoyasu; Endo, Nobutaka; Osada, Yoshihisa; Kim, Yoshiharu; Kuriyama, Kenichi

    2012-01-01

    The major earthquake was followed by a large number of aftershocks, and significant outbreaks of dizziness occurred over a large area. However, it is unclear why major earthquakes cause dizziness. We conducted an intergroup trial on equilibrium dysfunction, and the psychological states associated with it, in individuals exposed to repetitive aftershocks versus those who were rarely exposed. Greater equilibrium dysfunction was observed in the aftershock-exposed group under conditions without visual compensation. Equilibrium dysfunction in the aftershock-exposed group appears to have arisen from disturbance of the inner ear, as well as from individual vulnerability to state anxiety enhanced by repetitive exposure to aftershocks. We indicate potential effects of autonomic stress on equilibrium function after major earthquakes. Our findings may contribute to the risk management of psychological and physical health after major earthquakes with aftershocks, and allow development of a new empirical approach to disaster care after such events.

  18. Shallow moonquakes - How they compare with earthquakes

    Science.gov (United States)

    Nakamura, Y.

    1980-01-01

    Of three types of moonquakes strong enough to be detectable at large distances - deep moonquakes, meteoroid impacts and shallow moonquakes - only shallow moonquakes are similar in nature to earthquakes. A comparison of various characteristics of moonquakes with those of earthquakes indeed shows a remarkable similarity between shallow moonquakes and intraplate earthquakes: (1) their occurrences are not controlled by tides; (2) they appear to occur in locations where there is evidence of structural weaknesses; (3) the relative abundances of small and large quakes (b-values) are similar, suggesting similar mechanisms; and (4) even the levels of activity may be close. The shallow moonquakes may be quite comparable in nature to intraplate earthquakes, and they may be of similar origin.

  19. Marmara Island earthquakes, of 1265 and 1935; Turkey

    Directory of Open Access Journals (Sweden)

    Y. Altınok

    2006-01-01

    The long-term seismicity of the Marmara Sea region in northwestern Turkey is relatively well recorded. Some large and some of the smaller events are clearly associated with fault zones known to be seismically active, which have distinct morphological expressions and have generated damaging earthquakes both before and since. Some less common, moderate-size earthquakes have occurred in the vicinity of the Marmara Islands in the western Marmara Sea. This paper presents an extended summary of the most important of these, the events of 1265 and 1935, since known as the Marmara Island earthquakes. The data and approaches used therefore have the potential to document earthquake ruptures of fault segments and may extend the record of earthquakes, rock falls, and abnormal sea waves observed during these events to well before known history, thus improving hazard evaluations and the fundamental understanding of the earthquake process.

  20. Earthquake forecasting and warning

    Energy Technology Data Exchange (ETDEWEB)

    Rikitake, T.

    1983-01-01

    This review briefly describes two other books on the same subject either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, and scientific findings on precursory phenomena are included. A summary of the circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all of magnitude ≥ 7.0) in China and the 1978 Izu-Oshima earthquake in Japan is presented. This book fails to comprehensively summarize recent advances in earthquake prediction research.

  1. Automatic Earthquake Detection by Active Learning

    Science.gov (United States)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
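
    A minimal sketch of the uncertainty-sampling flavor of active learning described above, using a synthetic two-class "earthquake vs. noise" feature pool and a plain logistic regression fit by gradient descent. All names, parameters, and the synthetic data are illustrative assumptions; in the real setting the queried labels would come from a human analyst rather than from a stored label array:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def fit_logistic(X, y, lr=0.5, steps=300):
    """Logistic regression via batch gradient descent; returns weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return sigmoid(Xb @ w)

# Synthetic pool: "noise" cluster near (0, 0), "earthquake" cluster near (3, 3).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(3.0, 1.0, (200, 2))])
y = np.r_[np.zeros(200), np.ones(200)]

labeled = list(rng.choice(len(X), size=4, replace=False))  # tiny seed set
for _ in range(20):  # active-learning loop
    w = fit_logistic(X[labeled], y[labeled])
    proba = predict_proba(X, w)
    # Query the pool sample the model is least certain about (p closest to 0.5);
    # here its label is read from y, standing in for the human expert.
    query = int(np.argmin(np.abs(proba - 0.5)))
    if query not in labeled:
        labeled.append(query)

acc = float(np.mean((predict_proba(X, w) > 0.5) == y))
print(f"labeled {len(labeled)} of {len(X)} samples, accuracy {acc:.2f}")
```

    The point of the loop is that a good detector emerges after labeling only a few dozen of the 400 pool samples, because each query is spent on the most informative waveform rather than on a random one.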

  2. Report on the 2010 Chilean earthquake and tsunami response

    Science.gov (United States)

    ,

    2011-01-01

    delegation, it was clear that a multidisciplinary approach was required to properly analyze the emergency response, technical, and social components of this disaster. A diverse and knowledgeable delegation was necessary to analyze the Chilean response in a way that would benefit preparedness in California, as well as improve mitigation efforts around the United States. By most standards, the Maule earthquake was a catastrophe for Chile. The economic losses totaled $30 billion USD, or 17% of the country's GDP. Twelve million people, or ¾ of the population of the country, were in areas that felt strong shaking. Yet only 521 fatalities have been confirmed, with 56 people still missing and presumed dead in the tsunami. The Science and Technology Team evaluated the impacts of the earthquake on the built environment, with implications for the United States. The fires following the earthquake were minimal, in part because of the shutdown of the national electrical grid early in the shaking. Only five engineer-designed buildings were destroyed during the earthquake; however, over 350,000 housing units were destroyed. Chile has a law that holds building owners liable for the first 10 years of a building's existence for any losses resulting from inadequate application of the building code during construction. This law was cited by many of those our team met with as a prime reason for the strong performance of the built environment. Overall, this earthquake demonstrated that strict building codes and standards can greatly reduce losses in even the largest earthquakes. In the immediate response to the earthquake and tsunami, first responders, emergency personnel, and search and rescue teams handled many challenges. Loss of communications was significant; many lives were lost, and life-sustaining efforts were gravely impacted by a lack of inter- and intra-agency coordination.
The Health and Medical Services Team sought to understand the medical

  3. Clustered and transient earthquake sequences in mid-continents

    Science.gov (United States)

    Liu, M.; Stein, S. A.; Wang, H.; Luo, G.

    2012-12-01

Earthquakes result from sudden release of strain energy on faults. On plate boundary faults, strain energy is constantly accumulating from steady and relatively rapid relative plate motion, so large earthquakes continue to occur so long as motion continues on the boundary. In contrast, such steady accumulation of strain energy does not occur on faults in mid-continents, because the far-field tectonic loading is not steadily distributed between faults, and because stress perturbations from complex fault interactions and other stress triggers can be significant relative to the slow tectonic stressing. Consequently, mid-continental earthquakes are often temporally clustered and transient, and spatially migrating. This behavior is well illustrated by large earthquakes in North China in the past two millennia, during which no single large earthquake repeated on the same fault segment, but moment release between large fault systems was complementary. Slow tectonic loading in mid-continents also causes long aftershock sequences. We show that the recent small earthquakes in the Tangshan region of North China are aftershocks of the 1976 Tangshan earthquake (M 7.5), rather than indicators of a new phase of seismic activity in North China, as many fear. Understanding the transient behavior of mid-continental earthquakes has important implications for assessing earthquake hazards. The sequence of large earthquakes in the New Madrid Seismic Zone (NMSZ) in the central US, which includes a cluster of M~7 events in 1811-1812 and perhaps a few similar ones in the past millennium, is likely a transient process, releasing previously accumulated elastic strain on recently activated faults. If so, this earthquake sequence will eventually end. Using simple analysis and numerical modeling, we show that the large NMSZ earthquakes may be ending now or in the near future.

  4. Summary of earthquake experience database

    International Nuclear Information System (INIS)

    1999-01-01

    Strong-motion earthquakes frequently occur throughout the Pacific Basin, where power plants or industrial facilities are included in the affected areas. By studying the performance of these earthquake-affected (or database) facilities, a large inventory of various types of equipment installations can be compiled that have experienced substantial seismic motion. The primary purposes of the seismic experience database are summarized as follows: to determine the most common sources of seismic damage, or adverse effects, on equipment installations typical of industrial facilities; to determine the thresholds of seismic motion corresponding to various types of seismic damage; to determine the general performance of equipment during earthquakes, regardless of the levels of seismic motion; to determine minimum standards in equipment construction and installation, based on past experience, to assure the ability to withstand anticipated seismic loads. To summarize, the primary assumption in compiling an experience database is that the actual seismic hazard to industrial installations is best demonstrated by the performance of similar installations in past earthquakes

  5. Major earthquake of Friday March 11, 2011, magnitude 8.9 at 5:46 UT, off Honshu island (Japan); Seisme majeur au large de l'Ile d'Honshu (Japon) du vendredi 11 mars 2011 Magnitude = 8,9 a 5h46 (TU)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

On Friday March 11, 2011, at 5:46 UT (2:46 PM local time), a magnitude 8.9 earthquake took place 80 km east of Honshu island (Japan). The earthquake affected a large part of the Honshu territory and led to the automatic emergency shutdown of all nuclear power plants of the east coast. This paper first recalls the seismo-tectonic and historical seismic context of the Japanese archipelago and the first analyses of the Tohoku earthquake's impact on nuclear facilities. At the time of publication of this information report, no radioactive release into the environment and no anomaly at the Tokai-Mura and Rokkasho-Mura sites had been mentioned. However, the evacuation of populations in a 3 to 10 km area around the Fukushima-Dai-ichi power plant had been ordered by the Governor as a preventive measure, which suggested that the situation at this specific site was particularly worrying. (J.S.)

  6. Earthquake precursors: spatial-temporal gravity changes before the great earthquakes in the Sichuan-Yunnan area

    Science.gov (United States)

    Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song

    2018-01-01

    Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring

  7. Ionospheric precursors for crustal earthquakes in Italy

    Directory of Open Access Journals (Sweden)

    L. Perrone

    2010-04-01

Crustal earthquakes with magnitude 6.0>M≥5.5 observed in Italy for the period 1979–2009, including the most recent one at L'Aquila on 6 April 2009, were considered to check whether the relationships obtained earlier for ionospheric precursors of strong Japanese earthquakes are valid for moderate Italian earthquakes. The ionospheric precursors are based on the observed variations of the sporadic E-layer parameters (h'Es, fbEs) and foF2 at the ionospheric station of Rome. Empirical dependencies for the seismo-ionospheric disturbances relating the earthquake magnitude and the epicenter distance are obtained, and they have been shown to be similar to those obtained earlier for Japanese earthquakes. The dependencies indicate a process of spreading of the disturbance from the epicenter towards the periphery during the earthquake preparation process. Large lead times for the precursor occurrence (up to 34 days for M=5.8–5.9) indicate a prolonged preparation period. The possibility of using the obtained relationships for earthquake prediction is discussed.

  8. Smartphone MEMS accelerometers and earthquake early warning

    Science.gov (United States)

    Kong, Q.; Allen, R. M.; Schreier, L.; Kwon, Y. W.

    2015-12-01

The low-cost MEMS accelerometers in smartphones are attracting more and more attention from the science community due to their vast numbers and potential applications in various areas. We are using the accelerometers inside smartphones to detect earthquakes. We did shake-table tests to show that these accelerometers are also suitable for recording large shaking caused by earthquakes. We developed an Android app, MyShake, which can distinguish earthquake motion from daily human activities in the recordings from the accelerometers of personal smartphones and upload trigger information and waveforms to our server for further analysis. The data from these smartphones form a unique dataset for seismological applications, such as earthquake early warning. In this talk I will lay out the method we used to recognize earthquake-like movement on a single smartphone, and give an overview of the whole system that harnesses the information from a network of smartphones for rapid earthquake detection. This type of system can be easily deployed and scaled up around the globe and provides additional insight into earthquake hazards.
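The core technical step described above, separating impulsive earthquake shaking from everyday phone motion, can be illustrated with a classic short-term/long-term average (STA/LTA) trigger. This is a hedged sketch only: MyShake's published classifier is more sophisticated than a plain STA/LTA test, and the signal, sampling rate, window lengths, and threshold below are all invented for illustration.

```python
import numpy as np

def sta_lta(signal, fs, sta_win=0.5, lta_win=5.0):
    """Short-term/long-term average ratio on |acceleration|.
    High ratios flag impulsive, earthquake-like onsets."""
    sta_n = int(sta_win * fs)
    lta_n = int(lta_win * fs)
    env = np.abs(signal)
    # cumulative sums give O(n) moving averages
    c = np.concatenate(([0.0], np.cumsum(env)))
    sta = (c[sta_n:] - c[:-sta_n]) / sta_n
    lta = (c[lta_n:] - c[:-lta_n]) / lta_n
    n = min(len(sta), len(lta))
    # keep the trailing samples so both windows end on the same sample
    return sta[-n:] / np.maximum(lta[-n:], 1e-12)

# synthetic test: quiet sensor noise followed by a strong transient
fs = 50.0
rng = np.random.default_rng(0)
trace = 0.01 * rng.standard_normal(1000)
trace[600:650] += 0.5                 # one second of strong "shaking"
ratio = sta_lta(trace, fs)
triggered = bool(np.any(ratio > 5))   # modest demo threshold
```

In a phone-based system the trigger would run on-device, and only triggers corroborated by nearby phones would count as a detection.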

  9. Safety requirements for buildings under induced earthquakes due to gas extraction

    NARCIS (Netherlands)

    Steenbergen, R.D.J.M.; Vrouwenvelder, A.C.W.M.

    2017-01-01

In the Dutch province of Groningen, shallow earthquakes have in recent years been induced by large-scale gas extraction from a gas field at 3 km depth. The induced earthquakes differ from the better-known tectonic earthquakes around the world, which are caused by movement of the earth at large depths. The

  10. Consideration for standard earthquake vibration (1). The Niigataken Chuetsu-oki Earthquake in 2007

    International Nuclear Information System (INIS)

    Ishibashi, Katsuhiko

    2007-01-01

An outline of the new guideline for the quakeproof design standard of nuclear power plants and the standard earthquake vibration is given. The improvement points of the new guideline are discussed on the basis of the Kashiwazaki-Kariwa Nuclear Power Plant incidents, and the fundamental limits of the new guideline are pointed out. The placement of the quakeproof design standard of nuclear power plants, JEAG4601 of the Japan Electric Association, the new guideline, the standard earthquake vibration of the new guideline, the Niigataken Chuetsu-oki Earthquake in 2007, and the damage to the Kashiwazaki-Kariwa Nuclear Power Plant are discussed. The safety criteria of the safety review system, organization, standards and guidelines should be improved on the basis of this earthquake and nuclear plant accident. The general principle that a nuclear power plant is not constructed in an area where a large earthquake is expected has to be realized. The precondition for all nuclear power plants should be that earthquakes cause no damage. (S.Y.)

  11. Interaction of the san jacinto and san andreas fault zones, southern california: triggered earthquake migration and coupled recurrence intervals.

    Science.gov (United States)

    Sanders, C O

    1993-05-14

    Two lines of evidence suggest that large earthquakes that occur on either the San Jacinto fault zone (SJFZ) or the San Andreas fault zone (SAFZ) may be triggered by large earthquakes that occur on the other. First, the great 1857 Fort Tejon earthquake in the SAFZ seems to have triggered a progressive sequence of earthquakes in the SJFZ. These earthquakes occurred at times and locations that are consistent with triggering by a strain pulse that propagated southeastward at a rate of 1.7 kilometers per year along the SJFZ after the 1857 earthquake. Second, the similarity in average recurrence intervals in the SJFZ (about 150 years) and in the Mojave segment of the SAFZ (132 years) suggests that large earthquakes in the northern SJFZ may stimulate the relatively frequent major earthquakes on the Mojave segment. Analysis of historic earthquake occurrence in the SJFZ suggests little likelihood of extended quiescence between earthquake sequences.

  12. Radon anomalies prior to earthquakes (2). Atmospheric radon anomaly observed before the Hyogoken-Nanbu earthquake

    International Nuclear Information System (INIS)

    Ishikawa, Tetsuo; Tokonami, Shinji; Yasuoka, Yumi; Shinogi, Masaki; Nagahama, Hiroyuki; Omori, Yasutaka; Kawada, Yusuke

    2008-01-01

    Before the 1995 Hyogoken-Nanbu earthquake, various geochemical precursors were observed in the aftershock area: chloride ion concentration, groundwater discharge rate, groundwater radon concentration and so on. Kobe Pharmaceutical University (KPU) is located about 25 km northeast from the epicenter and within the aftershock area. Atmospheric radon concentration had been continuously measured from 1984 at KPU, using a flow-type ionization chamber. The radon concentration data were analyzed using the smoothed residual values which represent the daily minimum of radon concentration with the exclusion of normalized seasonal variation. The radon concentration (smoothed residual values) demonstrated an upward trend about two months before the Hyogoken-Nanbu earthquake. The trend can be well fitted to a log-periodic model related to earthquake fault dynamics. As a result of model fitting, a critical point was calculated to be between 13 and 27 January 1995, which was in good agreement with the occurrence date of earthquake (17 January 1995). The mechanism of radon anomaly before earthquakes is not fully understood. However, it might be possible to detect atmospheric radon anomaly as a precursor before a large earthquake, if (1) the measurement is conducted near the earthquake fault, (2) the monitoring station is located on granite (radon-rich) areas, and (3) the measurement is conducted for more than several years before the earthquake to obtain background data. (author)
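The critical-point fitting described above can be sketched as follows. This is a simplified illustration, not the study's actual analysis: the log-periodic oscillation term is omitted, keeping only a power-law acceleration of the residual toward a critical time tc, and the radon series, parameter values, and time axis below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified critical-point model: the radon residual accelerates as a power
# law toward a critical time tc. (The model used in the study adds a
# log-periodic oscillation term; it is omitted here to keep the fit robust.)
def model(t, A, B, tc, m):
    return A + B * np.power(np.clip(tc - t, 1e-9, None), m)

rng = np.random.default_rng(1)
t = np.linspace(0, 55, 120)              # days; hypothetical observation window
true = model(t, 10.0, -2.0, 60.0, 0.5)   # residual rises as t -> tc = 60
obs = true + rng.normal(0, 0.05, t.size) # add measurement noise

popt, _ = curve_fit(model, t, obs, p0=[8.0, -1.0, 65.0, 0.6], maxfev=20000)
tc_est = popt[2]                         # estimated critical time (day)
```

In the study, the analogous fit to the smoothed residual radon data bracketed the critical point between 13 and 27 January 1995, close to the 17 January earthquake date.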

  13. Encyclopedia of earthquake engineering

    CERN Document Server

    Kougioumtzoglou, Ioannis; Patelli, Edoardo; Au, Siu-Kui

    2015-01-01

The Encyclopedia of Earthquake Engineering is designed to be the authoritative and comprehensive reference covering all major aspects of the science of earthquake engineering, specifically focusing on the interaction between earthquakes and infrastructure. The encyclopedia comprises approximately 265 contributions. Since earthquake engineering deals with the interaction between earthquake disturbances and the built infrastructure, the emphasis is on basic design processes important to both non-specialists and engineers so that readers become suitably well-informed without needing to deal with the details of specialist understanding. The content of this encyclopedia informs technically inclined readers about the ways in which earthquakes can affect our infrastructure and how engineers would go about designing against, mitigating and remediating these effects. The coverage ranges from buildings, foundations, underground construction, lifelines and bridges to roads, embankments and slopes. The encycl...

  14. Increased earthquake safety through optimised mounting concept

    International Nuclear Information System (INIS)

    Kollmann, Dieter; Senechal, Holger

    2013-01-01

    Since Fukushima, there has been intensive work on earthquake safety in all nuclear power plants. A large part of these efforts aim at the earthquake safety of safety-relevant pipeline systems. The problem with earthquake safety here is not the pipeline system itself but rather its mountings and connections to components. This is precisely the topic that the KAE dealt with in years of research and development work. It has developed an algorithm that determines the optimal mounting concept with a few iteration steps depending on arbitrary combinations of loading conditions whilst maintaining compliance with relevant regulations for any pipeline systems. With this tool at hand, we are now in a position to plan and realise remedial measures accurately with minimum time and hardware expenditure, and so distinctly improve the earthquake safety of safety-relevant systems. (orig.)

  15. Earthquake at 40 feet

    Science.gov (United States)

    Miller, G. J.

    1976-01-01

The earthquake that struck the island of Guam on November 1, 1975, at 11:17 a.m. had many unique aspects, not the least of which was the experience of an earthquake of Richter magnitude 6.25 while at 40 feet underwater. My wife Bonnie, a fellow diver, Greg Guzman, and I were diving at Gabgab Beach in the outer harbor of Apra Harbor, engaged in underwater photography, when the earthquake struck. 

  16. Earthquakes and economic growth

    OpenAIRE

    Fisker, Peter Simonsen

    2012-01-01

This study explores the economic consequences of earthquakes. In particular, it investigates how exposure to earthquakes affects economic growth both across and within countries. The key result of the empirical analysis is that while there are no observable effects at the country level, earthquake exposure significantly decreases 5-year economic growth at the local level. Areas at lower stages of economic development suffer greater losses in economic growth than richer areas. In addition,...

  17. Statistical eruption forecast for the Chilean Southern Volcanic Zone: typical probabilities of volcanic eruptions as baseline for possibly enhanced activity following the large 2010 Concepción earthquake

    Directory of Open Access Journals (Sweden)

    Y. Dzierma

    2010-10-01

A probabilistic eruption forecast is provided for ten volcanoes of the Chilean Southern Volcanic Zone (SVZ). Since 70% of the Chilean population lives in this area, the estimation of future eruption likelihood is an important part of hazard assessment. After investigating the completeness and stationarity of the historical eruption time series, the exponential, Weibull, and log-logistic distribution functions are fit to the repose-time distributions for the individual volcanoes and the models are evaluated. This procedure has been implemented in two different ways to methodologically compare details in the fitting process. With regard to the probability of at least one VEI ≥ 2 eruption in the next decade, Llaima, Villarrica and Nevados de Chillán are most likely to erupt, while Osorno shows the lowest eruption probability among the volcanoes analysed. In addition to giving a compilation of the statistical eruption forecasts for the historically most active volcanoes of the SVZ, this paper aims to give "typical" eruption probabilities, which may in the future permit distinguishing possibly enhanced activity in the aftermath of the large 2010 Concepción earthquake.
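A minimal sketch of the repose-time approach, under stated assumptions: the repose times below are invented, only two of the three distributions the paper fits (exponential and Weibull) are shown, and SciPy's standard fitting routines stand in for the authors' own implementation.

```python
import numpy as np
from scipy import stats

# Hypothetical repose times (years between VEI >= 2 eruptions) for one volcano.
repose = np.array([2.1, 5.3, 1.8, 7.9, 3.2, 4.4, 9.6, 2.7, 6.1, 3.8])

# Exponential (memoryless Poisson) baseline:
# P(at least one eruption in the next T years) = 1 - exp(-T / mean repose)
lam = 1.0 / repose.mean()
p_exp = 1.0 - np.exp(-lam * 10.0)

# Weibull alternative: fix the location at 0, fit shape and scale by MLE,
# then evaluate the CDF at T = 10 years (probability counted from a fresh
# repose interval, i.e. immediately after an eruption).
shape, loc, scale = stats.weibull_min.fit(repose, floc=0)
p_weib = stats.weibull_min.cdf(10.0, shape, loc=loc, scale=scale)
```

Comparing such fits across candidate distributions (e.g. by likelihood or goodness-of-fit tests) is what lets the paper rank volcanoes by decadal eruption probability.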

  18. Ten colour photometry of twelve Ap-stars

    International Nuclear Information System (INIS)

    Musielok, B.; Lange, D.; Schoeneich, W.; Hildebrandt, G.; Zelwanowa, E.; Hempelmann, A.; Salmanov, G.

    1980-01-01

Ten-colour photoelectric observations are presented for twelve Ap stars, and improved ephemerides are given for seven of them. Phase relations between the light curves and line-intensity variations are discussed. The problem of the electromagnetic flux constancy of IOTA Cas is approached from a qualitative point of view. (author)

  19. Education and Development: Twelve Considerations for Transformative Practice

    Science.gov (United States)

    VanBalkom, W. Duffie; Eastham, Sarada

    2011-01-01

    Twelve factors that are essential to consider when embarking on the process of transformative development are examined in the context of international development programming in education and training. Each factor raises a number of questions for the deliberations of policy makers, development practitioners, scholars, international educators,…

  20. Secondary Textbook Review: English, Grades Nine through Twelve.

    Science.gov (United States)

    California State Dept. of Education, Sacramento.

    This book is intended as a resource for teachers and curriculum developers who select textbooks for secondary English courses. It includes a compilation of 32 factual textbook reviews obtained from the application of a review instrument, which was based on the California "Model Curriculum Standards: Grades Nine through Twelve, English…

  1. Safety of superconducting fusion magnets: twelve problem areas

    International Nuclear Information System (INIS)

    Turner, L.R.

    1979-01-01

    Twelve problem areas of superconducting magnets for fusion reaction are described. These are: quench detection and energy dump, stationary normal region of conductor, current leads, electrical arcing, electrical shorts, conductor joints, forces from unequal currents, eddy current effects, cryostat rupture, vacuum failure, fringing field and instrumentation for safety. Priorities among these areas are suggested

  3. The 2016 Kumamoto earthquake sequence.

    Science.gov (United States)

    Kato, Aitaro; Nakamura, Kouji; Hiyama, Yohei

    2016-01-01

    Beginning in April 2016, a series of shallow, moderate to large earthquakes with associated strong aftershocks struck the Kumamoto area of Kyushu, SW Japan. An M j 7.3 mainshock occurred on 16 April 2016, close to the epicenter of an M j 6.5 foreshock that occurred about 28 hours earlier. The intense seismicity released the accumulated elastic energy by right-lateral strike slip, mainly along two known, active faults. The mainshock rupture propagated along multiple fault segments with different geometries. The faulting style is reasonably consistent with regional deformation observed on geologic timescales and with the stress field estimated from seismic observations. One striking feature of this sequence is intense seismic activity, including a dynamically triggered earthquake in the Oita region. Following the mainshock rupture, postseismic deformation has been observed, as well as expansion of the seismicity front toward the southwest and northwest.

  4. Earthquake lights and rupture processes

    Directory of Open Access Journals (Sweden)

    T. V. Losseva

    2005-01-01

A physical model of earthquake lights is proposed. It is suggested that magnetic diffusion from the source region of the electric and magnetic fields is the dominant process, explaining the rather high localization of the light flashes. A 3D numerical code has been developed that takes into account an arbitrary distribution of currents caused by ground motion, the conductivity in the ground and at its surface, and the existence of sea water above the epicenter or (and) near the ruptured segments of the fault. Simulations for the 1995 Kobe earthquake were conducted taking into account the existence of sea water with a realistic geometry of the shores. The results do not contradict the eyewitness reports and the scarce measurements of the electric and magnetic fields at large distances from the epicenter.

  6. Links Between Earthquake Characteristics and Subducting Plate Heterogeneity in the 2016 Pedernales Ecuador Earthquake Rupture Zone

    Science.gov (United States)

    Bai, L.; Mori, J. J.

    2016-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxis. Here we investigate the focal depths, fault plane solutions, and source rupture process for three earthquake sequences, which are located at the western, central and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-deep earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes for the Himalaya orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma arc subduction zone is a typical oblique plate convergence zone. The eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depth less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  7. OMG Earthquake! Can Twitter improve earthquake response?

    Science.gov (United States)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 to 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
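The rate-based detection idea in this abstract (a jump from under one "earthquake" tweet per hour to roughly 150 per minute) can be sketched as a per-minute keyword count compared against a background rate. Everything below is hypothetical: the tweet tuples, threshold factor, and timestamps are invented, and a real system would consume a live tweet stream and also map the tweet locations to outline the felt area.

```python
from collections import Counter
from datetime import datetime, timedelta

def detect_spike(tweets, keyword="earthquake", background_per_min=1.0, factor=50):
    """Bin keyword-matching tweets per minute; return minutes whose count is
    far above background. Each tweet is (timestamp, text, lat, lon); the
    lat/lon fields would feed the felt-area map, not the rate test."""
    bins = Counter()
    for ts, text, lat, lon in tweets:
        if keyword in text.lower():
            bins[ts.replace(second=0, microsecond=0)] += 1
    return [minute for minute, n in bins.items()
            if n > factor * background_per_min]

# Synthetic stream: sparse unrelated tweets, then a burst after the shaking.
t0 = datetime(2009, 3, 30, 10, 40)
quiet = [(t0 + timedelta(seconds=300 + 600 * i), "nice weather", 37.0, -121.6)
         for i in range(3)]
burst = [(t0 + timedelta(minutes=5, seconds=i // 3), "Earthquake!!", 37.1, -121.6)
         for i in range(150)]
spikes = detect_spike(quiet + burst)
```

The detection latency of such a scheme is bounded below by the binning interval, which is one reason tweet-based detections complement rather than replace seismometer networks.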

  8. Radon anomalies prior to earthquakes (1). Review of previous studies

    International Nuclear Information System (INIS)

    Ishikawa, Tetsuo; Tokonami, Shinji; Yasuoka, Yumi; Shinogi, Masaki; Nagahama, Hiroyuki; Omori, Yasutaka; Kawada, Yusuke

    2008-01-01

The relationship between radon anomalies and earthquakes has been studied for more than 30 years. However, most of the studies dealt with radon in soil gas or in groundwater. Before the 1995 Hyogoken-Nanbu earthquake, an anomalous increase of atmospheric radon was observed at Kobe Pharmaceutical University. The increase was well fitted with a mathematical model related to earthquake fault dynamics. This paper reports the significance of this observation, reviewing previous studies on radon anomalies before earthquakes. Groundwater/soil radon measurements for earthquake prediction began in the 1970s in Japan as well as in other countries. One of the most famous studies in Japan is the groundwater radon anomaly before the 1978 Izu-Oshima-kinkai earthquake. The significance of radon in earthquake prediction research has long been recognized, but recently its limitations have also been pointed out. Some researchers are looking for better indicators of precursors; simultaneous measurements of radon and other gases are a new approach in recent studies. In contrast to soil/groundwater radon, little attention has been paid to atmospheric radon before earthquakes. However, it might be possible to detect an atmospheric radon anomaly as a precursor of a large earthquake. In the next issues, we will discuss the details of the anomalous atmospheric radon data observed before the Hyogoken-Nanbu earthquake. (author)

  9. Earthquakes and Schools

    Science.gov (United States)

    National Clearinghouse for Educational Facilities, 2008

    2008-01-01

    Earthquakes are low-probability, high-consequence events. Though they may occur only once in the life of a school, they can have devastating, irreversible consequences. Moderate earthquakes can cause serious damage to building contents and non-structural building systems, serious injury to students and staff, and disruption of building operations.…

  10. Bam Earthquake in Iran

    CERN Multimedia

    2004-01-01

    Following their request for help from members of international organisations, the permanent Mission of the Islamic Republic of Iran has given the following bank account number, where you can donate money to help the victims of the Bam earthquake. Re: Bam earthquake 235 - UBS 311264.35L Bubenberg Platz 3001 BERN

  11. Tradable Earthquake Certificates

    NARCIS (Netherlands)

    Woerdman, Edwin; Dulleman, Minne

    2018-01-01

    This article presents a market-based idea to compensate for earthquake damage caused by the extraction of natural gas and applies it to the case of Groningen in the Netherlands. Earthquake certificates give homeowners a right to yearly compensation for both property damage and degradation of living

  12. Real-time earthquake monitoring using a search engine method.

    Science.gov (United States)

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
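The search-engine analogy above lends itself to a compact illustration. The sketch below is not the authors' system: it reduces each seismogram to a coarse, normalized envelope fingerprint and answers queries by cosine similarity against a precomputed index, which is the general idea behind trading exact waveform comparison for fast approximate lookup. The `feature_vector`, `build_index`, and `query` helpers and the toy synthetic database are all illustrative assumptions.

```python
import numpy as np

def feature_vector(waveform, n_bins=16):
    """Reduce a waveform to a coarse, unit-norm amplitude-envelope fingerprint."""
    env = np.abs(waveform)
    bins = np.array_split(env, n_bins)
    v = np.array([b.mean() for b in bins])
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def build_index(database):
    """Stack fingerprints of all database waveforms into one matrix."""
    return np.vstack([feature_vector(w) for w in database])

def query(index, waveform):
    """Return the index of the database record that best matches the input."""
    q = feature_vector(waveform)
    scores = index @ q  # cosine similarity, since rows are unit vectors
    return int(np.argmax(scores))

# Toy database of three synthetic "seismograms" with distinct envelopes.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
db = [np.sin(2 * np.pi * f * t) * np.exp(-t / d)
      for f, d in [(0.5, 2.0), (1.0, 5.0), (2.0, 1.0)]]
idx = build_index(db)

# A noisy copy of record 1 should still retrieve record 1.
noisy = db[1] + 0.1 * rng.standard_normal(t.size)
best = query(idx, noisy)
```

In a real system the index would hold millions of precomputed fingerprints, so the lookup cost is dominated by one matrix-vector product rather than waveform-by-waveform cross-correlation.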

  13. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    Science.gov (United States)

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
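The Gutenberg-Richter clustering probability quoted above can be reproduced, at least to order of magnitude, with the Reasenberg-Jones rate model. The sketch below assumes the commonly quoted generic California parameters (a = -1.67, b = 0.91, c = 0.05 days, p = 1.08); treat those values, and the `rj_probability` helper, as illustrative assumptions rather than the paper's exact inputs.

```python
import math

def rj_probability(m_fore, m_main, t1, t2,
                   a=-1.67, b=0.91, c=0.05, p=1.08):
    """Probability of at least one event with magnitude >= m_main in the
    window [t1, t2] days after a magnitude m_fore event, under the
    Reasenberg-Jones aftershock rate lambda(t, M) = 10**(a + b*(m_fore - M))
    * (t + c)**(-p), with generic California parameters assumed."""
    rate0 = 10 ** (a + b * (m_fore - m_main))
    if p == 1.0:
        n = rate0 * (math.log(t2 + c) - math.log(t1 + c))
    else:
        n = rate0 * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return 1.0 - math.exp(-n)  # Poisson probability of >= 1 event

# Three-day probability of an M >= 7 mainshock after an ML 4.8 event.
prob = rj_probability(4.8, 7.0, 0.0, 3.0)
```

With these parameters the three-day probability comes out near one in a thousand, the same order as the 0.0009 Gutenberg-Richter figure quoted in the abstract.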

  14. Historic Eastern Canadian earthquakes

    International Nuclear Information System (INIS)

    Asmis, G.J.K.; Atchinson, R.J.

    1981-01-01

Nuclear power plants licensed in Canada have been designed to resist earthquakes; not all plants, however, have been explicitly designed to the same level of earthquake-induced forces. Understanding of the nature of strong ground motion near the source of an earthquake is still very tentative. This paper reviews historical and scientific accounts of the three strongest earthquakes - St. Lawrence (1925), Temiskaming (1935), Cornwall (1944) - that have occurred in Canada in 'modern' times, field studies of near-field strong ground motion records and their resultant damage or non-damage to industrial facilities, and numerical modelling of earthquake sources and resultant wave propagation to produce accelerograms consistent with the above historical record and field studies. It is concluded that for future construction of NPPs, near-field strong motion must be explicitly considered in design

  15. Acute myocardial infarction and stress cardiomyopathy following the Christchurch earthquakes.

    Science.gov (United States)

    Chan, Christina; Elliott, John; Troughton, Richard; Frampton, Christopher; Smyth, David; Crozier, Ian; Bridgman, Paul

    2013-01-01

Christchurch, New Zealand, was struck by 2 major earthquakes: at 4:36 am on 4 September 2010, magnitude 7.1, and at 12:51 pm on 22 February 2011, magnitude 6.3. Both events caused widespread destruction. Christchurch Hospital was the region's only acute care hospital. It remained functional following both earthquakes. We were able to examine the effects of the 2 earthquakes on acute cardiac presentations. Patients admitted under Cardiology in Christchurch Hospital 3 weeks prior to and 5 weeks following both earthquakes were analysed, with corresponding control periods in September 2009 and February 2010. Patients were categorised by diagnosis: ST elevation myocardial infarction, non-ST elevation myocardial infarction, stress cardiomyopathy, unstable angina, stable angina, non-cardiac chest pain, arrhythmia and others. There was a significant increase in overall admissions after the early morning September earthquake. This pattern was not seen after the early afternoon February earthquake. Instead, there was a very large number of stress cardiomyopathy admissions, with 21 cases (95% CI 2.6-6.4) in 4 days. There had been 6 stress cardiomyopathy cases after the first earthquake (95% CI 0.44-2.62). Statistical analysis showed this to be a significant difference between the earthquakes. The early morning September earthquake triggered a large increase in ST elevation myocardial infarction and a few stress cardiomyopathy cases. The early afternoon February earthquake caused significantly more stress cardiomyopathy. Two major earthquakes occurring at different times of day differed in their effect on acute cardiac events.

  16. Radon, gas geochemistry, groundwater, and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    King, Chi-Yu [Power Reactor and Nuclear Fuel Development Corp., Tono Geoscience Center, Toki, Gifu (Japan)

    1998-12-31

Radon monitoring in groundwater, soil air, and the atmosphere has been continued in many seismic areas of the world for earthquake-prediction and active-fault studies. Some recent measurements of radon and other geochemical and hydrological parameters have been made for sufficiently long periods, with reliable instruments, and together with measurements of meteorological variables and solid-earth tides. The resultant data are useful for better distinguishing earthquake-related changes from various background noises. Some measurements have been carried out in areas where other geophysical measurements are also being made. Comparative studies of various kinds of geophysical data are helpful in ascertaining the reality of the earthquake-related and fault-related anomalies and in understanding the underlying mechanisms. Spatial anomalies of radon and other terrestrial gases have been observed for many active faults. Such observations indicate that gas concentrations are very much site dependent, particularly on fault zones where terrestrial fluids may move vertically. Temporal anomalies have been reliably observed before and after some recent earthquakes, including the 1995 Kobe earthquake, and the general pattern of anomaly occurrence remains the same as observed before: they are recorded at only relatively few sensitive sites, which can be at much larger distances than expected from existing earthquake-source models. The sensitivity of such a site is also found to change with time. These results clearly show the inadequacy of the existing dilatancy-fluid diffusion and elastic-dislocation models for earthquake sources to explain earthquake-related geochemical and geophysical changes recorded at large distances. (J.P.N.)

  17. Mexican Earthquakes and Tsunamis Catalog Reviewed

    Science.gov (United States)

    Ramirez-Herrera, M. T.; Castillo-Aja, R.

    2015-12-01

Today the availability of information on the internet makes online catalogs very easy to access by both scholars and the public in general. The catalog in the "Significant Earthquake Database", managed by the National Center for Environmental Information (NCEI, formerly NCDC), NOAA, allows access by deploying tabular and cartographic data related to earthquakes and tsunamis contained in the database. The NCEI catalog is the product of compiling previously existing catalogs, historical sources, newspapers, and scientific articles. Because the NCEI catalog has global coverage, the information is not homogeneous. The existence of historical information depends on the presence of people in places where the disaster occurred and on the preservation of descriptions in documents and oral tradition. In the case of instrumental data, availability depends on the distribution and quality of seismic stations. Therefore, the availability of information for the first half of the 20th century can be improved by careful analysis of the available information and by searching for and resolving inconsistencies. This study shows the advances we made in upgrading and refining data for the earthquake and tsunami catalog of Mexico from 1500 CE until today, presented in the format of table and map. Data analysis allowed us to identify the following sources of error in the location of epicenters in existing catalogs: • incorrect coordinate entry • erroneous or mistaken place names • data too general to locate the epicenter, mainly for older earthquakes • inconsistency between earthquake and tsunami occurrence: an earthquake epicenter located too far inland reported as tsunamigenic. The process of completing the catalogs depends directly on the availability of information; as new archives are opened for inspection, there are more opportunities to complete the history of large earthquakes and tsunamis in Mexico. Here, we also present new earthquake and

  18. Earthquake Swarm Along the San Andreas Fault near Palmdale, Southern California, 1976 to 1977.

    Science.gov (United States)

    McNally, K C; Kanamori, H; Pechmann, J C; Fuis, G

    1978-09-01

Between November 1976 and November 1977 a swarm of small earthquakes occurred along the San Andreas fault near Palmdale, southern California. The swarm exhibited characteristics reported for some foreshock sequences, such as tight clustering of hypocenters and time-dependent rotations of stress axes inferred from focal mechanisms. However, because of our present lack of understanding of the processes that precede earthquake faulting, the implications of the swarm for future large earthquakes on the San Andreas fault are unknown.

  19. Twelve years of fireworks market surveillance in France

    OpenAIRE

    Branka , Ruddy

    2012-01-01

In the context of market surveillance, more than 4400 fireworks items have been sampled on the spot by sworn officials or bought on the market in France since 1999 for inspection purposes. This paper presents the evolution of market-surveillance sampling over twelve years, carried out by the PYRO unit of the Accidental Risks Division of INERIS as testing body, and the related measures implemented: additional audits at importers' plants, interlaboratory tests for guaranteeing the reliabili...

  20. Twelve Theses on Reactive Rules for the Web

    OpenAIRE

    Bry, François; Eckert, Michael

    2006-01-01

Reactivity, the ability to detect and react to events, is an essential functionality in many information systems. In particular, Web systems such as online marketplaces, adaptive (e.g., recommender) systems, and Web services react to events such as Web page updates or data posted to a server. This article investigates issues of relevance in designing high-level programming languages dedicated to reactivity on the Web. It presents twelve theses on features desira...

  1. Hidden twelve-dimensional super Poincare symmetry in eleven dimensions

    International Nuclear Information System (INIS)

    Bars, Itzhak; Deliduman, Cemsinan; Pasqua, Andrea; Zumino, Bruno

    2004-01-01

    First, we review a result in our previous paper, of how a ten-dimensional superparticle, taken off-shell, has a hidden eleven-dimensional super Poincare symmetry. Then, we show that the physical sector is defined by three first-class constraints which preserve the full eleven-dimensional symmetry. Applying the same concepts to the eleven-dimensional superparticle, taken off-shell, we discover a hidden twelve-dimensional super Poincare symmetry that governs the theory

  2. Morphology of the spermathecae of twelve species of Triatominae (Hemiptera, Reduviidae) vectors of Chagas disease.

    Science.gov (United States)

    Nascimento, Juliana Damieli; Ribeiro, Aline Rimoldi; Almeida, Larissa Aguiar; de Oliveira, Jader; Mendonça, Vagner José; Cilense, Mário; da Rosa, João Aristeu

    2017-12-01

    Trypanosoma cruzi, the etiological agent of Chagas disease, is transmitted by triatomines that have been described in a large number of studies. Most of those studies are related to external morphology and taxonomy, but some biochemical, genetic and physiological studies have also been published. There are a few publications in the literature about the internal organs of Triatominae, for instance the spermathecae, which are responsible for storing and maintaining the viability of the spermatozoids until the fertilization of the oocytes. This work aims to study the spermathecae of twelve species of triatomines obtained from the Triatominae Insectarium of the Faculty of Pharmaceutical Sciences, UNESP, Araraquara, using optical microscopy and scanning electron microscopy. The spermathecae of the twelve species studied showed three morphological patterns: a) P. herreri sn, P. lignarius, P. megistus, Triatoma brasiliensis, T. juazeirensis, T. sherlocki and T. tibiamaculata have spermathecae with a thin initial portion and an oval-shaped final portion; b) R. montenegrensis, R. nasutus, R. neglectus, R. pictipes and R. prolixus have tubular and winding spermathecae; c) T. infestans has oval spermathecae. In addition to the three morphological patterns, it was noted that each of the twelve species has particular features that differentiate them. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Earthquakes, November-December 1977

    Science.gov (United States)

    Person, W.J.

    1978-01-01

Two major earthquakes occurred in the last 2 months of the year. A magnitude 7.0 earthquake struck San Juan Province, Argentina, on November 23, causing fatalities and damage. The second major earthquake was a magnitude 7.0 in the Bonin Islands region, an unpopulated area. On December 19, Iran experienced a destructive earthquake that killed over 500 people.

  4. Earthquakes, September-October 1986

    Science.gov (United States)

    Person, W.J.

    1987-01-01

There was one great earthquake (8.0 and above) during this reporting period, in the South Pacific in the Kermadec Islands. There were no major earthquakes (7.0-7.9), but earthquake-related deaths were reported in Greece and in El Salvador. There were no destructive earthquakes in the United States.

  5. Crowd-Sourced Global Earthquake Early Warning

    Science.gov (United States)

    Minson, S. E.; Brooks, B. A.; Glennie, C. L.; Murray, J. R.; Langbein, J. O.; Owen, S. E.; Iannucci, B. A.; Hauser, D. L.

    2014-12-01

    Although earthquake early warning (EEW) has shown great promise for reducing loss of life and property, it has only been implemented in a few regions due, in part, to the prohibitive cost of building the required dense seismic and geodetic networks. However, many cars and consumer smartphones, tablets, laptops, and similar devices contain low-cost versions of the same sensors used for earthquake monitoring. If a workable EEW system could be implemented based on either crowd-sourced observations from consumer devices or very inexpensive networks of instruments built from consumer-quality sensors, EEW coverage could potentially be expanded worldwide. Controlled tests of several accelerometers and global navigation satellite system (GNSS) receivers typically found in consumer devices show that, while they are significantly noisier than scientific-grade instruments, they are still accurate enough to capture displacements from moderate and large magnitude earthquakes. The accuracy of these sensors varies greatly depending on the type of data collected. Raw coarse acquisition (C/A) code GPS data are relatively noisy. These observations have a surface displacement detection threshold approaching ~1 m and would thus only be useful in large Mw 8+ earthquakes. However, incorporating either satellite-based differential corrections or using a Kalman filter to combine the raw GNSS data with low-cost acceleration data (such as from a smartphone) decreases the noise dramatically. These approaches allow detection thresholds as low as 5 cm, potentially enabling accurate warnings for earthquakes as small as Mw 6.5. Simulated performance tests show that, with data contributed from only a very small fraction of the population, a crowd-sourced EEW system would be capable of warning San Francisco and San Jose of a Mw 7 rupture on California's Hayward fault and could have accurately issued both earthquake and tsunami warnings for the 2011 Mw 9 Tohoku-oki, Japan earthquake.
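The fusion step described above, combining raw noisy GNSS positions with consumer-grade acceleration through a Kalman filter, can be sketched in one dimension. This is an illustrative toy rather than the authors' implementation; the noise levels, the constant-velocity test signal, and the `fuse` helper are all assumptions.

```python
import numpy as np

def fuse(gps, accel, dt, sigma_gps=1.0, sigma_acc=0.05):
    """1-D Kalman filter: integrate noisy acceleration as a control input,
    correct the drift with noisy GPS position measurements.
    State is [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity transition
    B = np.array([0.5 * dt ** 2, dt])          # acceleration input mapping
    H = np.array([[1.0, 0.0]])                 # GPS observes position only
    Q = sigma_acc ** 2 * np.outer(B, B)        # process noise from accel noise
    R = np.array([[sigma_gps ** 2]])           # GPS measurement noise
    x = np.zeros(2)
    P = np.eye(2)
    out = []
    for z, a in zip(gps, accel):
        # Predict with the accelerometer sample.
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Correct with the GPS sample.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Synthetic test: a slow steady drift observed through very noisy GPS.
rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0, 10, dt)
true_pos = 0.1 * t
gps = true_pos + rng.normal(0.0, 1.0, t.size)     # ~1 m C/A-code noise
accel = rng.normal(0.0, 0.05, t.size)             # consumer accelerometer
fused = fuse(gps, accel, dt)
```

The point of the sketch matches the abstract's claim: neither sensor alone resolves a decimeter-scale displacement, but the filtered combination tracks it far better than the raw GPS stream.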

  6. GPS detection of ionospheric perturbation before the 13 February 2001, El Salvador earthquake

    OpenAIRE

    V. V. Plotkin

    2003-01-01

    A large earthquake of M6.6 occurred on 13 February 2001 at 14:22:05 UT in El Salvador. We detected ionospheric perturbation before this earthquake using GPS data received from CORS network. Systematic decreases of ionospheric total electron content during two days before the earthquake onset were observed at set of stations near the earthquake location and probably in region of about 1000 km from epicenter. This result is consistent with t...

  7. Earthquake hazard assessment and small earthquakes

    International Nuclear Information System (INIS)

    Reiter, L.

    1987-01-01

    The significance of small earthquakes and their treatment in nuclear power plant seismic hazard assessment is an issue which has received increased attention over the past few years. In probabilistic studies, sensitivity studies showed that the choice of the lower bound magnitude used in hazard calculations can have a larger than expected effect on the calculated hazard. Of particular interest is the fact that some of the difference in seismic hazard calculations between the Lawrence Livermore National Laboratory (LLNL) and Electric Power Research Institute (EPRI) studies can be attributed to this choice. The LLNL study assumed a lower bound magnitude of 3.75 while the EPRI study assumed a lower bound magnitude of 5.0. The magnitudes used were assumed to be body wave magnitudes or their equivalents. In deterministic studies recent ground motion recordings of small to moderate earthquakes at or near nuclear power plants have shown that the high frequencies of design response spectra may be exceeded. These exceedances became important issues in the licensing of the Summer and Perry nuclear power plants. At various times in the past particular concerns have been raised with respect to the hazard and damage potential of small to moderate earthquakes occurring at very shallow depths. In this paper a closer look is taken at these issues. Emphasis is given to the impact of lower bound magnitude on probabilistic hazard calculations and the historical record of damage from small to moderate earthquakes. Limited recommendations are made as to how these issues should be viewed

  8. Tidal controls on earthquake size-frequency statistics

    Science.gov (United States)

    Ide, S.; Yabe, S.; Tanaka, Y.

    2016-12-01

    The possibility that tidal stresses can trigger earthquakes is a long-standing issue in seismology. Except in some special cases, a causal relationship between seismicity and the phase of tidal stress has been rejected on the basis of studies using many small events. However, recently discovered deep tectonic tremors are highly sensitive to tidal stress levels, with the relationship being governed by a nonlinear law according to which the tremor rate increases exponentially with increasing stress; thus, slow deformation (and the probability of earthquakes) may be enhanced during periods of large tidal stress. Here, we show the influence of tidal stress on seismicity by calculating histories of tidal shear stress during the 2-week period before earthquakes. Very large earthquakes tend to occur near the time of maximum tidal stress, but this tendency is not obvious for small earthquakes. Rather, we found that tidal stress controls the earthquake size-frequency statistics; i.e., the fraction of large events increases (i.e. the b-value of the Gutenberg-Richter relation decreases) as the tidal shear stress increases. This correlation is apparent in data from the global catalog and in relatively homogeneous regional catalogues of earthquakes in Japan. The relationship is also reasonable, considering the well-known relationship between stress and the b-value. Our findings indicate that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. This finding has clear implications for probabilistic earthquake forecasting.
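The b-value dependence described here is easy to explore numerically. The sketch below uses Aki's maximum-likelihood estimator, b = log10(e) / (mean(M) - Mc); the synthetic catalogs, the completeness magnitude, and the specific b-values chosen to represent high- and low-tidal-stress periods are illustrative assumptions.

```python
import math
import random

def b_value(mags, m_c):
    """Aki maximum-likelihood b-value for magnitudes >= m_c."""
    m = [x for x in mags if x >= m_c]
    mean_excess = sum(m) / len(m) - m_c
    return math.log10(math.e) / mean_excess

def sample_gr(n, b, m_c, rng):
    """Draw magnitudes from a Gutenberg-Richter (exponential) distribution."""
    beta = b * math.log(10)
    return [m_c - math.log(1.0 - rng.random()) / beta for _ in range(n)]

rng = random.Random(42)
# Assumed scenario: lower b (relatively more large events) at high tidal
# stress, higher b at low tidal stress.
high_stress = sample_gr(20000, 0.8, 2.0, rng)
low_stress = sample_gr(20000, 1.1, 2.0, rng)
```

The estimator recovers the generating b-value to within sampling error, which is the kind of catalog-by-catalog comparison the abstract's stress-binned analysis relies on.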

  9. Sun, Moon and Earthquakes

    Science.gov (United States)

    Kolvankar, V. G.

    2013-12-01

During a study conducted to find the effect of Earth tides on the occurrence of earthquakes, for small areas [typically 1000 km x 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences for magnitude 2.0 and above. This study conclusively proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]. The left-hand figure provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30, Long. +95.98, Mb 9.0, EQ count 376). The right-hand figure provides an earthquake plot for (EMD+SEM) vs GMT timings for the same data. All 376 events including the main event faithfully follow the straight-line curve.

  10. Preliminary report on Petatlan, Mexico: earthquake of 14 March 1979

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    A major earthquake, M/sub s/ = 7.6, occurred off the southern coast of Mexico near the town of Petatlan on 14 March 1979. The earthquake ruptured a 50-km-long section of the Middle American subduction zone, a seismic gap last ruptured by a major earthquake (M/sub s/ = 7.5) in 1943. Since adjacent gaps of approximately the same size have not had a large earthquake since 1911, and one of these suffered three major earthquakes in four years (1907, 1909, 1911), recurrence times for large events here are highly variable. Thus, this general area remains one of high seismic risk, and provides a focus for investigation of segmentation in the subduction processes. 2 figures.

  11. Implications of fault constitutive properties for earthquake prediction.

    Science.gov (United States)

    Dieterich, J H; Kilgore, B

    1996-04-30

The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
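The Omori aftershock decay law mentioned above is commonly written in its modified form, n(t) = K / (t + c)^p. A minimal sketch of the rate and its time integral, with arbitrary parameter values chosen purely for illustration:

```python
def omori_rate(t, K=100.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate (events/day) at t days
    after the mainshock. K, c, p are illustrative values."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=100.0, c=0.05, p=1.1):
    """Closed-form integral of the modified Omori rate over [t1, t2]
    (valid for p != 1)."""
    return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
```

The closed form makes the characteristic decay obvious: each successive day after the mainshock contributes fewer expected aftershocks than the day before.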

  12. Distribution of incremental static stress caused by earthquakes

    Directory of Open Access Journals (Sweden)

    Y. Y. Kagan

    1994-01-01

Theoretical calculations, simulations and measurements of rotation of earthquake focal mechanisms suggest that the stress in earthquake focal zones follows the Cauchy distribution, which is one of the stable probability distributions (with the value of the exponent α equal to 1). We review the properties of the stable distributions and show that the Cauchy distribution is expected to approximate the stress caused by earthquakes occurring over geologically long intervals of fault zone development. However, the stress caused by recent earthquakes recorded in instrumental catalogues should follow symmetric stable distributions with the value of α significantly less than one. This is explained by a fractal distribution of earthquake hypocentres: the dimension of a hypocentre set, δ, is close to zero for short-term earthquake catalogues and asymptotically approaches 2¼ for long time intervals. We use the Harvard catalogue of seismic moment tensor solutions to investigate the distribution of incremental static stress caused by earthquakes. The stress measured in the focal zone of each event is approximated by stable distributions. In agreement with theoretical considerations, the exponent value of the distribution approaches zero as the time span of an earthquake catalogue (ΔT) decreases. For large stress values α increases. We surmise that this is caused by the increase of δ for small inter-earthquake distances due to location errors.
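The Cauchy distribution's role here is easy to demonstrate numerically: it is the α = 1 stable law, so a mean of Cauchy variates is again Cauchy and averaging does not tighten the distribution, while robust statistics such as the median and interquartile range remain well behaved. The inverse-CDF sampler and batch size below are illustrative choices, not the paper's method.

```python
import math
import random
import statistics

def cauchy(rng, loc=0.0, scale=1.0):
    """Draw one Cauchy (alpha = 1 stable) variate by inverse CDF."""
    return loc + scale * math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(7)
stress = [cauchy(rng) for _ in range(100_000)]

# The median converges to the location parameter; the IQR of a standard
# Cauchy is exactly 2 (quartiles at -1 and +1).
med = statistics.median(stress)
q = statistics.quantiles(stress, n=4)
iqr = q[2] - q[0]

# Stability: means of batches of 100 are themselves standard Cauchy,
# so their spread does NOT shrink by sqrt(100) as it would for a Gaussian.
batch_means = [sum(stress[i:i + 100]) / 100 for i in range(0, 100_000, 100)]
q_m = statistics.quantiles(batch_means, n=4)
iqr_means = q_m[2] - q_m[0]
```

This non-concentration under averaging is exactly why heavy-tailed stable fits, rather than Gaussian ones, are needed for the incremental stress measurements in the abstract.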

  13. Investigating landslides caused by earthquakes - A historical review

    Science.gov (United States)

    Keefer, D.K.

    2002-01-01

Post-earthquake field investigations of landslide occurrence have provided a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides. This paper traces the historical development of knowledge derived from these investigations. Before 1783, historical accounts of the occurrence of landslides in earthquakes are typically so incomplete and vague that conclusions based on these accounts are of limited usefulness. For example, the number of landslides triggered by a given event is almost always greatly underestimated. The first formal, scientific post-earthquake investigation that included systematic documentation of the landslides was undertaken in the Calabria region of Italy after the 1783 earthquake swarm. From then until the mid-twentieth century, the best information on earthquake-induced landslides came from a succession of post-earthquake investigations largely carried out by formal commissions that undertook extensive ground-based field studies. Beginning in the mid-twentieth century, when the use of aerial photography became widespread, comprehensive inventories of landslide occurrence have been made for several earthquakes in the United States, Peru, Guatemala, Italy, El Salvador, Japan, and Taiwan. Techniques have also been developed for performing "retrospective" analyses years or decades after an earthquake that attempt to reconstruct the distribution of landslides triggered by the event. The additional use of Geographic Information System (GIS) processing and digital mapping since about 1989 has greatly facilitated the level of analysis that can be applied to mapped distributions of landslides. Beginning in 1984, syntheses of worldwide and national data on earthquake-induced landslides have defined their general characteristics and the relations between their occurrence and various geologic and seismic parameters. However, the number of comprehensive post-earthquake studies of landslides is still…

  14. Limitation of the Predominant-Period Estimator for Earthquake Early Warning and the Initial Rupture of Earthquakes

    Science.gov (United States)

    Yamada, T.; Ide, S.

    2007-12-01

Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how quickly we can estimate the final size of an earthquake after we observe the ground motion. This is related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P-wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes whose source durations are longer than TW, the values of τpmax have an upper limit which depends on TW. On the other hand, the values for smaller earthquakes have a lower limit which is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that the earthquake rupture is deterministic, because τpmax does not always have a direct relation to the physical quantities of an earthquake.
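The recursive predominant-period estimator discussed above is usually written τp,i = 2π·sqrt(Xi/Di), where Xi and Di are exponentially smoothed squares of the ground-motion record and its time derivative. The sketch below applies that form, with an assumed smoothing constant, to a monochromatic 2-second signal; the `predominant_period` helper and test signal are illustrative, not the UrEDAS implementation.

```python
import math
import numpy as np

def predominant_period(x, dt, alpha=0.999):
    """Recursive predominant-period estimate:
    tau_i = 2*pi*sqrt(X_i / D_i), with
    X_i = alpha*X_{i-1} + x_i**2 (smoothed signal power) and
    D_i = alpha*D_{i-1} + v_i**2 (smoothed derivative power).
    alpha sets the effective smoothing window (~1/(1-alpha) samples)."""
    v = np.gradient(x, dt)
    X = D = 0.0
    tau = np.zeros_like(x)
    for i in range(len(x)):
        X = alpha * X + x[i] ** 2
        D = alpha * D + v[i] ** 2
        tau[i] = 2.0 * math.pi * math.sqrt(X / D) if D > 0 else 0.0
    return tau

# Monochromatic test signal with a 2-second period, sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
x = np.sin(2.0 * np.pi * t / 2.0)
tau = predominant_period(x, dt)
```

For a pure sinusoid the estimate settles at the signal period, which is the behavior the abstract's upper and lower limits bracket for real, broadband earthquake records.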

  15. Earthquake Ground Motion Selection

    Science.gov (United States)

    2012-05-01

    Nonlinear analyses of soils, structures, and soil-structure systems offer the potential for more accurate characterization of geotechnical and structural response under strong earthquake shaking. The increasing use of advanced performance-based desig...

  16. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  17. Evaluating real-time air-quality data as earthquake indicator

    International Nuclear Information System (INIS)

    Hsu, Shih-Chieh; Huang, Yi-Tang; Huang, Jr-Chung; Tu, Jien-Yi; Engling, Guenter; Lin, Chuan-Yao; Lin, Fei-Jan; Huang, Chao-Hao

    2010-01-01

A catastrophic earthquake, the so-called 921 earthquake, occurred with a magnitude of ML = 7.3 in Taiwan on September 21, 1999, causing a severe disaster. Evaluation of real-time air-quality data obtained by the Taiwan Environmental Protection Administration (EPA) revealed a staggering increase in ambient SO2 concentrations, by more than one order of magnitude, across the island several hours prior to the earthquake, particularly at background stations. The abrupt increase in SO2 concentrations likely resulted from seismically triggered degassing rather than air pollution. An additional case of a large earthquake (ML = 6.8), occurring on March 31, 2002, was examined to confirm our observations of significantly enhanced SO2 concentrations in ambient air prior to large earthquakes. The coincidence between large earthquakes and increases in trace gases during the pre-quake period (several hours) indicates the potential of employing air-quality monitoring data to forecast catastrophic earthquakes.
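A minimal sketch of the kind of screening such a study implies, flagging hours whose SO2 reading jumps by an order of magnitude above the recent background. The running-median baseline, the 24-hour window, and the synthetic record are all assumptions for illustration, not the EPA's actual procedure:

```python
import numpy as np

def so2_anomaly_hours(series, baseline_hours=24, factor=10.0):
    """Return indices of hours whose SO2 reading exceeds `factor` times the
    median of the preceding `baseline_hours` readings. The order-of-magnitude
    threshold mirrors the increase reported in the abstract; the window length
    and the use of a running median are assumptions."""
    x = np.asarray(series, dtype=float)
    hits = []
    for i in range(baseline_hours, len(x)):
        baseline = np.median(x[i - baseline_hours:i])
        if baseline > 0 and x[i] > factor * baseline:
            hits.append(i)
    return hits

# Synthetic hourly record: quiet background near 2 ppb with a spike at hour 30.
rng = np.random.default_rng(0)
record = 2.0 + 0.2 * rng.standard_normal(48)
record[30] = 40.0  # roughly 20x background
hits = so2_anomaly_hours(record)
print(hits)
```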

  18. Memory effect in M ≥ 6 earthquakes of South-North Seismic Belt, Mainland China

    Science.gov (United States)

    Wang, Jeen-Hwa

    2013-07-01

The M ≥ 6 earthquakes that occurred in the South-North Seismic Belt, Mainland China, during 1901-2008 are used to study the possible existence of a memory effect in large earthquakes. The fluctuation analysis technique is applied to the sequences of earthquake magnitude and inter-event time represented in the natural time domain. Calculated results show that the exponents of the scaling law of fluctuation versus window length are less than 0.5 for both the magnitude and inter-event-time sequences. The migration of the earthquakes under study is used to discuss possible correlation between events. Phase portraits of two consecutive magnitudes and two consecutive inter-event times are also examined to explore whether large (or small) earthquakes are followed by large (or small) events. Together with all the available information, we conclude that the earthquakes under study are short-term correlated and thus a short-term memory effect is operative.
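Fluctuation analysis of the kind applied here can be sketched in a few lines. The window sizes and the white-noise test signal are illustrative; an exponent below 0.5, as the abstract reports for both sequences, would indicate anti-correlation, while 0.5 corresponds to an uncorrelated sequence:

```python
import numpy as np

def fluctuation_exponent(x, windows=(8, 16, 32, 64, 128)):
    """Plain fluctuation analysis (no detrending): F(n) ~ n^H.
    H = 0.5 for an uncorrelated sequence; H < 0.5 indicates
    anti-correlation, H > 0.5 persistence."""
    y = np.cumsum(x - np.mean(x))            # profile of the sequence
    F = [np.sqrt(np.mean((y[n:] - y[:-n]) ** 2)) for n in windows]
    H, _ = np.polyfit(np.log(windows), np.log(F), 1)  # log-log slope
    return H

# Sanity check on white noise: the exponent should be close to 0.5.
rng = np.random.default_rng(1)
H = fluctuation_exponent(rng.normal(size=5000))
print(round(H, 2))
```

Detrended fluctuation analysis (DFA) would subtract a local polynomial trend in each window before computing F(n); the plain version above is the simplest variant.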

  19. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    Science.gov (United States)

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  20. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation

    Science.gov (United States)

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-01-01

We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011. PMID:27161897

  1. Electromagnetic Manifestation of Earthquakes

    OpenAIRE

    Uvarov Vladimir

    2017-01-01

In a joint analysis of recordings of the electrical component of the Earth's natural electromagnetic field and the catalog of earthquakes in Kamchatka in 2013, unipolar pulses of constant amplitude associated with earthquakes were identified; their activity is closely correlated with the energy of the electromagnetic field. To explain this, a hypothesis about the cooperative character of these pulses is proposed.

  2. Electromagnetic Manifestation of Earthquakes

    Directory of Open Access Journals (Sweden)

    Uvarov Vladimir

    2017-01-01

Full Text Available In a joint analysis of recordings of the electrical component of the Earth's natural electromagnetic field and the catalog of earthquakes in Kamchatka in 2013, unipolar pulses of constant amplitude associated with earthquakes were identified; their activity is closely correlated with the energy of the electromagnetic field. To explain this, a hypothesis about the cooperative character of these pulses is proposed.

  3. Hypocentre estimation of induced earthquakes in Groningen

    NARCIS (Netherlands)

    Spetzler, J.; Dost, Bernard

    2017-01-01

    Induced earthquakes due to gas production have taken place in the province of Groningen in the northeast of The Netherlands since 1986. In the first years of seismicity, a sparse seismological network with large station distances from the seismogenic area in Groningen was used. The location of

  4. "Earthquake!"--A Cooperative Learning Experience.

    Science.gov (United States)

    Hodder, A. Peter W.

    2001-01-01

    Presents an exercise designed as a team building experience for managers that can be used to demonstrate to science students the potential benefit of group decision-making. Involves the ranking of options for surviving a large earthquake. Yields quantitative measures of individual student knowledge and how well the groups function. (Author/YDS)

  5. Fractals and Forecasting in Earthquakes and Finance

    Science.gov (United States)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1] summarized the emerging viewpoint that stock market crashes can be described with similar ideas as large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that both earthquakes and financial crashes can be described by a common Landau-Ginzburg-type free energy model, involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets, it is financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)
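The Gutenberg-Richter relation mentioned above, log10 N(≥M) = a − bM, can be illustrated with a quick maximum-likelihood b-value fit (Aki's estimator). The synthetic catalog, completeness magnitude, and b = 1 are assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
b_true, m_c = 1.0, 3.0  # assumed b-value and completeness magnitude

# Under G-R, N(>=M) ~ 10^(-b*M), so magnitudes above m_c are exponentially
# distributed with rate beta = b * ln(10).
beta = b_true * np.log(10)
mags = m_c + rng.exponential(1.0 / beta, size=20000)

# Aki (1965) maximum-likelihood estimator of b from the sample mean.
b_est = np.log10(np.e) / (mags.mean() - m_c)
print(round(b_est, 2))
```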

  6. Seismic methodology in determining basis earthquake for nuclear installation

    International Nuclear Information System (INIS)

    Ameli Zamani, Sh.

    2008-01-01

Design basis earthquake ground motions for nuclear installations should be determined to assure the design purpose of reactor safety: reactors should be built and operated to pose no undue risk to public health and safety from earthquakes and other hazards. Regarding the seismic hazard at a site, large numbers of earthquake ground motions can be predicted considering possible variability among the source, path, and site parameters. However, seismic safety design using all predicted ground motions is practically impossible. In the determination of design basis earthquake ground motions it is therefore important to represent the influence of the large numbers of earthquake ground motions derived from seismic ground motion prediction methods for the surrounding seismic sources. Viewing the relations between current design basis earthquake ground motion determination and modern earthquake ground motion estimation, a development of risk-informed design basis earthquake ground motion methodology is discussed for insight into the ongoing modernization of the Examination Guide for Seismic Design on NPP

  7. Millipedes (Diplopoda) of twelve caves in Western Mecsek, Southwest Hungary

    Directory of Open Access Journals (Sweden)

    Angyal, D.

    2013-11-01

Full Text Available Twelve caves of Western Mecsek, Southwest Hungary, were examined between September 2010 and April 2013 from the millipede (Diplopoda) faunistic point of view. Ten species were found in eight caves, comprising both eutroglophile and troglobiont elements. The cave with the most diverse fauna was the Törökpince Sinkhole, while the two previously investigated caves, the Abaligeti Cave and the Mánfai-kőlyuk Cave, yielded fewer species, which could be related to their advanced touristic and industrial utilization.

  8. Twelve tips for creating an academic teaching portfolio.

    Science.gov (United States)

    Little-Wienert, Kim; Mazziotti, Mark

    2018-01-01

    An academic teaching portfolio is not only a requirement at many academic teaching institutions, but it is also important in a medical educator's growth and development through documentation, reflection, evaluation, and change. Creating an academic portfolio may appear daunting at first but with careful advanced preparation, organized evidence collection of your educational work, proof of scholarship, and thorough documentation of self-reflection and change, you can produce a successful product that accurately represents your educational beliefs, accomplishments, and growth throughout your career. This article provides medical educators with twelve steps for creating a successful academic teaching portfolio.

  9. Fault roughness and strength heterogeneity control earthquake size and stress drop

    KAUST Repository

    Zielke, Olaf

    2017-01-13

An earthquake's stress drop is related to the frictional breakdown during sliding and constitutes a fundamental quantity of the rupture process. High-speed laboratory friction experiments that emulate the rupture process imply stress drop values that greatly exceed those commonly reported for natural earthquakes. We hypothesize that this stress drop discrepancy is due to fault-surface roughness and strength heterogeneity: an earthquake's moment release and its recurrence probability depend not only on stress drop and rupture dimension but also on the geometric roughness of the ruptured fault and the location of failing strength asperities along it. Using large-scale numerical simulations of earthquake ruptures under varying roughness and strength conditions, we verify our hypothesis, showing that smoother faults may generate larger earthquakes than rougher faults under identical tectonic loading conditions. We further discuss the potential impact of fault roughness on earthquake recurrence probability. This finding also provides important information for seismic hazard analysis.

  10. The Great Tangshan Earthquake of 1976

    OpenAIRE

    Huixian, Liu; Housner, George W.; Lili, Xie; Duxin, He

    2002-01-01

    At 4:00 a.m. on July 28, 1976 the city of Tangshan, China ceased to exist. A magnitude 7.8 earthquake was generated by a fault that passed through the city and caused 85% of the buildings to collapse or to be so seriously damaged as to be unusable, and the death toll was enormous. The earthquake caused the failures of the electric power system, the water supply system, the sewer system, the telephone and telegraph systems, and radio communications; and the large coal mines and the industries ...

  11. Experimental evidence that thrust earthquake ruptures might open faults.

    Science.gov (United States)

    Gabuchian, Vahe; Rosakis, Ares J; Bhat, Harsha S; Madariaga, Raúl; Kanamori, Hiroo

    2017-05-18

Many of Earth's great earthquakes occur on thrust faults. These earthquakes predominantly occur within subduction zones, such as the 2011 moment magnitude 9.0 earthquake in Tohoku-Oki, Japan, or along large collision zones, such as the 1999 moment magnitude 7.7 earthquake in Chi-Chi, Taiwan. Notably, these two earthquakes had a maximum slip that was very close to the surface. This contributed to the destructive tsunami that occurred during the Tohoku-Oki event and to the large amount of structural damage caused by the Chi-Chi event. The mechanism that results in such large slip near the surface is poorly understood as shallow parts of thrust faults are considered to be frictionally stable. Here we use earthquake rupture experiments to reveal the existence of a torquing mechanism of thrust fault ruptures near the free surface that causes them to unclamp and slip large distances. Complementary numerical modelling of the experiments confirms that the hanging-wall wedge undergoes pronounced rotation in one direction as the earthquake rupture approaches the free surface, and this torque is released as soon as the rupture breaks the free surface, resulting in the unclamping and violent 'flapping' of the hanging-wall wedge. Our results imply that the shallow extent of the seismogenic zone of a subducting interface is not fixed and can extend up to the trench during great earthquakes through a torquing mechanism.

  12. On the magnitude and recurrence of Vrancea earthquakes

    International Nuclear Information System (INIS)

    Oncescu, M.C.

    1987-07-01

The moment-magnitude scale Msub(W) is proposed for the quantification of Vrancea earthquakes. The asperity model is found adequate to explain the observed quasi-cycles and super-cycles in the occurrence of large events. (author)

  13. Incorporating human-triggered earthquake risks into energy and water policies

    Science.gov (United States)

    Klose, C. D.; Seeber, L.; Jacob, K. H.

    2010-12-01

A comprehensive understanding of earthquake risks in urbanized regions requires an accurate assessment of both urban vulnerabilities and hazards from earthquakes, including ones whose timing might be affected by human activities. Socioeconomic risks associated with human-triggered earthquakes are often misconstrued and receive little scientific, legal, and public attention. Worldwide, more than 200 damaging earthquakes associated with industrialization and urbanization have been documented since the 20th century. Geomechanical pollution due to large-scale geoengineering activities can advance the clock of earthquakes, trigger new seismic events or even shut down natural background seismicity. Activities include mining, hydrocarbon production, fluid injections, water reservoir impoundments and deep-well geothermal energy production. This type of geohazard has impacts on human security on a regional and national level. Some planned or considered future engineering projects raise particularly strong concerns about triggered earthquakes, such as, for instance, sequestration of carbon dioxide by injecting it deep underground and large-scale natural gas production in the Marcellus shale in the Appalachian basin. Worldwide examples of earthquakes are discussed, including their associated losses of human life and monetary losses (e.g., 1989 Newcastle and Volkershausen earthquakes, 1993 Killari earthquake, 2006 Basel earthquake, 2008 Wenchuan earthquake). An overview is given on global statistics of human-triggered earthquakes, including depths and time delay of triggering. Lastly, strategies are described, including risk mitigation measures such as urban planning adaptations and seismic hazard mapping.

  14. Global observation of Omori-law decay in the rate of triggered earthquakes

    Science.gov (United States)

    Parsons, T.

    2001-12-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts between ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted.
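The Omori-law decay described above is usually written in its modified form, n(t) = K / (t + c)^p. The sketch below evaluates that rate over the ~7-11 year window the abstract reports; the parameter values are illustrative, not fitted to the CMT catalog:

```python
import numpy as np

def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
    """Modified Omori law: triggered-event rate n(t) = K / (t + c)^p,
    with t in days. K, c, and p here are illustrative placeholders."""
    return K / (t_days + c) ** p

# Rate at 1 day, 10 days, 100 days, ~3 years, and ~10 years after the
# main shock: a power-law decay that is still nonzero a decade later.
t = np.array([1.0, 10.0, 100.0, 1000.0, 3650.0])
rates = omori_rate(t)
print(np.round(rates, 3))
```

Fitting K, c, and p to an observed aftershock sequence is typically done by maximum likelihood over the event times rather than binned rates.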

  15. Definition of a Twelve-Point Polygonal SAA Boundary for the GLAST Mission

    International Nuclear Information System (INIS)

    Djomehri, Sabra I.; UC, Santa Cruz; SLAC

    2007-01-01

The Gamma-Ray Large Area Space Telescope (GLAST), set to launch in early 2008, detects gamma rays within a huge energy range of 100 MeV - 300 GeV. Background cosmic radiation interferes with such detection, resulting in confusion over distinguishing cosmic rays from gamma rays. This quandary is resolved by encasing GLAST's Large Area Telescope (LAT) in an Anti-Coincidence Detector (ACD), a device which identifies and vetoes charged particles. The ACD accomplishes this through plastic scintillator tiles; when cosmic rays strike, the photons produced induce currents in Photomultiplier Tubes (PMTs) attached to these tiles. However, as GLAST orbits Earth at altitudes of ~550 km and latitudes between -26° and +26°, it will confront the South Atlantic Anomaly (SAA), a region of high particle flux caused by trapped radiation in the geomagnetic field. Since the SAA flux would degrade the sensitivity of the ACD's PMTs over time, a boundary enclosing this region needs to be determined, signaling when to lower the voltage on the PMTs as a protective measure. The operational constraints on such a boundary require a convex SAA polygon with twelve edges whose area is minimal, ensuring GLAST has maximum observation time. The AP8 and PSB97 models describing the behavior of trapped radiation were used in analyzing the SAA and defining a convex twelve-sided SAA boundary. The smallest possible boundary was found to cover 14.58% of GLAST's observation time. Further analysis of a boundary safety margin to account for inaccuracies in the models reveals that if the total SAA hull area is increased by ~20%, the loss of total observational area is < 5%. These twelve coordinates defining the SAA flux region are ready for implementation by the GLAST satellite

  16. Retrospective evaluation of the five-year and ten-year CSEP-Italy earthquake forecasts

    Directory of Open Access Journals (Sweden)

    Stefan Wiemer

    2010-11-01

Full Text Available On August 1, 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP) launched a prospective and comparative earthquake predictability experiment in Italy. The goal of this CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented 18 five-year and ten-year earthquake forecasts to the European CSEP Testing Center at ETH Zurich. We have considered here the twelve time-independent earthquake forecasts among this set, and evaluated them with respect to past seismicity data from two Italian earthquake catalogs. We present the results of the tests that measure the consistencies of the forecasts according to past observations. As well as being an evaluation of the time-independent forecasts submitted, this exercise provides insight into a number of important issues in predictability experiments with regard to the specification of the forecasts, the performance of the tests, and the trade-off between robustness of results and experiment duration. We conclude with suggestions for the design of future earthquake predictability experiments.
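One standard CSEP consistency check of the kind used in such evaluations is the N-test, which asks whether the observed number of events is plausible under a Poisson forecast. The forecast mean and observed count below are made-up numbers for illustration:

```python
import math

def poisson_cdf(n_obs, n_fore):
    """P(N <= n_obs) for a Poisson-distributed count with mean n_fore."""
    return sum(math.exp(-n_fore) * n_fore**k / math.factorial(k)
               for k in range(n_obs + 1))

# N-test sketch: a forecast expecting 10 events, with 14 observed.
n_fore, n_obs = 10.0, 14
delta1 = 1.0 - poisson_cdf(n_obs - 1, n_fore)  # prob of observing >= n_obs
delta2 = poisson_cdf(n_obs, n_fore)            # prob of observing <= n_obs
print(round(delta1, 3), round(delta2, 3))
```

If either tail probability falls below the chosen significance level (e.g. 0.025 for a two-sided test at 95%), the forecast is rejected as inconsistent with the observation; here both tails are comfortably above that threshold.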

  17. Simulating Earthquakes for Science and Society: Earthquake Visualizations Ideal for use in Science Communication and Education

    Science.gov (United States)

    de Groot, R.

    2008-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). For example, The Great Southern California ShakeOut was based on a potential magnitude 7.8 earthquake on the southern San Andreas fault. The visualization created for the ShakeOut was a key scientific and communication tool for the earthquake drill. This presentation will also feature SCEC Virtual Display of Objects visualization software developed by SCEC Undergraduate Studies in Earthquake Information Technology interns. According to Gordin and Pea (1995), theoretically visualization should make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  18. Earthquake magnitude estimation using the τc and Pd method for earthquake early warning systems

    Science.gov (United States)

    Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang

    2013-10-01

Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important and also most difficult parts of an EEW system. In this paper, based on 142 earthquake events and 253 seismic records recorded by KiK-net in Japan and by aftershock deployments for the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τc and Pd methods. The standard deviations of the magnitude estimates from these two formulas are ±0.65 and ±0.56, respectively. The Pd value can also be used to estimate peak ground velocity, so that warning information can be released to the public rapidly according to the estimation results. In order to ensure the stability and reliability of the magnitude estimates, we propose a compatibility test based on the nature of these two parameters. The reliability of the early warning information is significantly improved through this test.
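The τc parameter used in this approach (after Kanamori) is computed from the first few seconds of P-wave displacement. The sketch below uses a toy sinusoidal trace, and the regression coefficients mapping τc to magnitude are placeholders, not the values fitted in the paper:

```python
import numpy as np

def tau_c(displacement, dt, window_s=3.0):
    """tau_c = 2*pi*sqrt( integral(u^2) / integral(u_dot^2) ) over the first
    `window_s` seconds of the P-wave displacement u (Kanamori's definition)."""
    n = int(round(window_s / dt))
    u = np.asarray(displacement[:n], dtype=float)
    udot = np.gradient(u, dt)
    return 2.0 * np.pi * np.sqrt(np.sum(u**2) / np.sum(udot**2))

def magnitude_from_tau_c(tc, a=3.37, b=5.83):
    """Placeholder linear scaling M = a*log10(tau_c) + b; the paper fits its
    own coefficients to its KiK-net and Wenchuan-aftershock data set."""
    return a * np.log10(tc) + b

dt = 0.01
t = np.arange(0.0, 3.0, dt)
u = np.sin(2.0 * np.pi * t / 1.0)  # toy trace with a 1 s predominant period
tc = tau_c(u, dt)
print(round(tc, 2))
print(round(magnitude_from_tau_c(tc), 1))
```

Pd, the other parameter of the method, is simply the peak absolute displacement in the same window (max(abs(u)) here), which is why it also scales with peak ground velocity.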

  19. Indoor radon and earthquake

    International Nuclear Information System (INIS)

    Saghatelyan, E.; Petrosyan, L.; Aghbalyan, Yu.; Baburyan, M.; Araratyan, L.

    2004-01-01

For the first time, on the basis of experience from the Spitak earthquake (Armenia, December 1988), it is shown that an earthquake causes intense and prolonged radon splashes which, rapidly dispersing in the open near-ground atmosphere, appear most contrastingly in enclosed premises (dwellings, schools, kindergartens) even at considerable distance from the earthquake epicenter, multiplying the radiation exposure of the population. The interval of splashes spans the period from the first foreshock to the last aftershock, i.e. several months, and the area affected by radiation is larger than Armenia's territory. The scale of this impact on the population is 12 times higher than the number of people injured in Spitak, Leninakan and other settlements (toll of injured: 25,000 people; radiation-induced diseases: over 300,000 people). The influence of radiation correlates directly with the earthquake's force. This conclusion is underpinned by indoor radon monitoring data for Yerevan (120 km from the epicenter) since 1987, comprising 5,450 measurements, and by multivariate analysis identifying cause-and-effect links between the geodynamics of indoor radon under stable and unstable conditions of the Earth's crust, the behavior of radon in different geological media during earthquakes, indoor radon concentration levels, effective equivalent doses, the impact of radiation dose on health, and statistical data on public health provided by the Ministry of Health. The following hitherto unexplained facts can be considered consequences of prolonged radiation influence on the human organism: the long-lasting state of apathy and indifference typical of the population of Armenia for more than a year after the earthquake, the prevalence of malignant cancer forms in the disaster zones, with lung cancer dominating, and so on. All urban territories of seismically active regions are exposed to the threat of natural, earthquake-provoked radiation influence

  20. Investigating Landslides Caused by Earthquakes A Historical Review

    Science.gov (United States)

    Keefer, David K.

Post-earthquake field investigations of landslide occurrence have provided a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides. This paper traces the historical development of knowledge derived from these investigations. Before 1783, historical accounts of the occurrence of landslides in earthquakes are typically so incomplete and vague that conclusions based on these accounts are of limited usefulness. For example, the number of landslides triggered by a given event is almost always greatly underestimated. The first formal, scientific post-earthquake investigation that included systematic documentation of the landslides was undertaken in the Calabria region of Italy after the 1783 earthquake swarm. From then until the mid-twentieth century, the best information on earthquake-induced landslides came from a succession of post-earthquake investigations largely carried out by formal commissions that undertook extensive ground-based field studies. Beginning in the mid-twentieth century, when the use of aerial photography became widespread, comprehensive inventories of landslide occurrence have been made for several earthquakes in the United States, Peru, Guatemala, Italy, El Salvador, Japan, and Taiwan. Techniques have also been developed for performing "retrospective" analyses years or decades after an earthquake that attempt to reconstruct the distribution of landslides triggered by the event. The additional use of Geographic Information System (GIS) processing and digital mapping since about 1989 has greatly facilitated the level of analysis that can be applied to mapped distributions of landslides. Beginning in 1984, syntheses of worldwide and national data on earthquake-induced landslides have defined their general characteristics and relations between their occurrence and various geologic and seismic parameters. However, the number of comprehensive post-earthquake studies of landslides is still

  1. Challenges to communicate risks of human-caused earthquakes

    Science.gov (United States)

    Klose, C. D.

    2014-12-01

The awareness of natural hazards has been trending upward in recent years. In particular, this is true for earthquakes, which increase in frequency and magnitude in regions that normally do not experience seismic activity. In fact, one of the major concerns for many communities and businesses is that humans today seem to cause earthquakes due to large-scale shale gas production, dewatering and flooding of mines, and deep geothermal power production. Accordingly, without opposing any of these technologies, it should be a priority of earth scientists researching natural hazards to communicate earthquake risks. This presentation discusses the challenges that earth scientists face in properly communicating earthquake risks, in light of the fact that human-caused earthquakes are an environmental change affecting only some communities and businesses. Communication channels may range from research papers, books and classroom lectures to outreach events and programs, popular media events or even social media networks.

  2. Earthquake damage to underground facilities

    International Nuclear Information System (INIS)

    Pratt, H.R.; Stephenson, D.E.; Zandt, G.; Bouchon, M.; Hustrulid, W.A.

    1980-01-01

    In order to assess the seismic risk for an underground facility, a data base was established and analyzed to evaluate the potential for seismic disturbance. Substantial damage to underground facilities is usually the result of displacements primarily along pre-existing faults and fractures, or at the surface entrance to these facilities. Evidence of this comes from both earthquakes and large explosions. Therefore, the displacement due to earthquakes as a function of depth is important in the evaluation of the hazard to underground facilities. To evaluate potential displacements due to seismic effects of block motions along pre-existing or induced fractures, the displacement fields surrounding two types of faults were investigated. Analytical models were used to determine relative displacements of shafts and near-surface displacement of large rock masses. Numerical methods were used to determine the displacement fields associated with pure strike-slip and vertical normal faults. Results are presented as displacements for various fault lengths as a function of depth and distance. This provides input to determine potential displacements in terms of depth and distance for underground facilities, important for assessing potential sites and design parameters

  3. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    Science.gov (United States)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a 'really big one' will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened, and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and from turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write and display spreadsheets, like those shown below. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year or changes over their career. Thus saying "the chance of
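The two modeling choices the abstract describes can be sketched in a few lines of code. The sketch below is a minimal illustration with made-up recurrence parameters (a ~500-yr full-record mean, a shorter in-cluster mean, ~320 yr elapsed since the 1700 event), not the values from the study, and a Gaussian renewal model stands in for whatever time-dependent model the authors' spreadsheets use:

```python
import math

def poisson_prob(mean_recurrence, window):
    """Time-independent (Poisson) probability of at least one event
    in the next `window` years."""
    return 1.0 - math.exp(-window / mean_recurrence)

def norm_cdf(x, mu, sigma):
    """Gaussian CDF via the error function (no external libraries)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def renewal_prob(mean, sigma, elapsed, window):
    """Time-dependent (Gaussian renewal) probability of an event in the
    next `window` years, conditioned on `elapsed` years without one."""
    survive = 1.0 - norm_cdf(elapsed, mean, sigma)
    fail_by = norm_cdf(elapsed + window, mean, sigma) - norm_cdf(elapsed, mean, sigma)
    return fail_by / survive if survive > 0 else 1.0

# Hypothetical inputs: full-record vs. in-cluster mean recurrence,
# and ~320 years elapsed since the last great earthquake.
print(poisson_prob(500, 50))             # Poisson, full record
print(poisson_prob(330, 50))             # Poisson, recent cluster
print(renewal_prob(500, 250, 320, 50))   # time-dependent model
```

Changing the assumed mean recurrence or switching between the two models moves the 50-year probability substantially, which is the point the spreadsheet exercise makes.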

  4. Rupture, waves and earthquakes.

    Science.gov (United States)

    Uenishi, Koji

    2017-01-01

    Normally, an earthquake is considered a phenomenon of wave energy radiation caused by rupture (fracture) of the solid Earth. However, the physics of the dynamic processes around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not been fully understood yet. Instead, much former investigation in seismology evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components above 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics, namely, the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) the wave interaction that connects rupture (1) and failures (2), would therefore be indispensable.

  5. Earthquakes and Earthquake Engineering. LC Science Tracer Bullet.

    Science.gov (United States)

    Buydos, John F., Comp.

    An earthquake is a shaking of the ground resulting from a disturbance in the earth's interior. Seismology is the study of (1) earthquakes; (2) the origin, propagation, and energy of seismic phenomena; (3) the prediction of these phenomena; and (4) the structure of the earth. Earthquake engineering or engineering seismology includes the…

  6. Correlating precursory declines in groundwater radon with earthquake magnitude.

    Science.gov (United States)

    Kuo, T

    2014-01-01

    Both studies at the Antung hot spring in eastern Taiwan and at the Paihe spring in southern Taiwan confirm that groundwater radon can be a consistent tracer for strain changes in the crust preceding an earthquake when observed in a low-porosity fractured aquifer surrounded by a ductile formation. Recurrent anomalous declines in groundwater radon were observed at the Antung D1 monitoring well in eastern Taiwan prior to the five earthquakes of magnitude (Mw) 6.8, 6.1, 5.9, 5.4, and 5.0 that occurred on December 10, 2003; April 1, 2006; April 15, 2006; February 17, 2008; and July 12, 2011, respectively. For earthquakes occurring on the longitudinal valley fault in eastern Taiwan, the observed radon minima decrease as the earthquake magnitude increases. This correlation has proven useful for early warning of local large earthquakes. In southern Taiwan, anomalous radon declines prior to the 2010 Mw 6.3 Jiasian, 2012 Mw 5.9 Wutai, and 2012 ML 5.4 Kaohsiung earthquakes were also recorded at the Paihe spring. For earthquakes occurring on different faults in southern Taiwan, a correlation between the observed radon minima and the earthquake magnitude is not yet possible. © 2013, National Ground Water Association.
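The reported relationship, radon minima decreasing as magnitude increases, is the kind of trend a simple least-squares fit can quantify. The (magnitude, radon-minimum) pairs below are hypothetical placeholders using the five Antung event magnitudes, not the actual D1 well measurements:

```python
# Ordinary least-squares fit of radon minima against earthquake magnitude.
def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

mags   = [5.0, 5.4, 5.9, 6.1, 6.8]          # Mw of the five events
minima = [120.0, 105.0, 85.0, 78.0, 50.0]   # hypothetical radon minima (Bq/L)

slope, intercept = linear_fit(mags, minima)
# A negative slope reproduces the reported trend: larger magnitude,
# deeper radon decline.
print(slope, intercept)
```

Once a fit like this is calibrated on a single fault system, an observed radon minimum can be inverted to a rough magnitude estimate, which is what makes the correlation useful for early warning.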

  7. Chilean megathrust earthquake recurrence linked to frictional contrast at depth

    Science.gov (United States)

    Moreno, M.; Li, S.; Melnick, D.; Bedford, J. R.; Baez, J. C.; Motagh, M.; Metzger, S.; Vajedian, S.; Sippl, C.; Gutknecht, B. D.; Contreras-Reyes, E.; Deng, Z.; Tassara, A.; Oncken, O.

    2018-04-01

    Fundamental processes of the seismic cycle in subduction zones, including those controlling the recurrence and size of great earthquakes, are still poorly understood. Here, by studying the 2016 earthquake in southern Chile—the first large event within the rupture zone of the 1960 earthquake (moment magnitude (Mw) = 9.5)—we show that the frictional zonation of the plate interface fault at depth mechanically controls the timing of more frequent, moderate-size deep events (Mw < 8) and less frequent, great shallow earthquakes (Mw > 8.5). We model the evolution of stress build-up for a seismogenic zone with heterogeneous friction to examine the link between the 2016 and 1960 earthquakes. Our results suggest that the deeper segments of the seismogenic megathrust are weaker and interseismically loaded by a more strongly coupled, shallower asperity. Deeper segments fail earlier (~60 yr recurrence), producing moderate-size events that precede the failure of the shallower region, which fails in a great earthquake (recurrence >110 yr). We interpret the contrasting frictional strength and lag time between deeper and shallower earthquakes to be controlled by variations in pore fluid pressure. Our integrated analysis strengthens understanding of the mechanics and timing of great megathrust earthquakes, and therefore could aid in the seismic hazard assessment of other subduction zones.

  8. Earthquakes; May-June 1982

    Science.gov (United States)

    Person, W.J.

    1982-01-01

    There were four major earthquakes (7.0-7.9) during this reporting period: two struck in Mexico, one in El Salvador, and one in the Kuril Islands. Mexico, El Salvador, and China experienced fatalities from earthquakes.

  9. Extended investigation of the twelve-flavor β-function

    Science.gov (United States)

    Fodor, Zoltán; Holland, Kieran; Kuti, Julius; Nógrádi, Dániel; Wong, Chik Him

    2018-04-01

    We report new results from high precision analysis of an important BSM gauge theory with twelve massless fermion flavors in the fundamental representation of the SU(3) color gauge group. The range of the renormalized gauge coupling is extended from our earlier work [1] to probe the existence of an infrared fixed point (IRFP) in the β-function reported at two different locations, originally in [2] and at a new location in [3]. We find no evidence for the IRFP of the β-function in the extended range of the renormalized gauge coupling, in disagreement with [2,3]. New arguments to guard the existence of the IRFP remain unconvincing [4], including recent claims of an IRFP with ten massless fermion flavors [5,6] which we also rule out. Predictions of the recently completed 5-loop QCD β-function for general flavor number are discussed in this context.

  10. Twelve reasons to refuse the nuclear in the MDP

    International Nuclear Information System (INIS)

    Bonduelle, A.

    2000-01-01

    The author presents twelve reasons why nuclear energy has no place in the MDP (Mechanism of Clean Development): a major loophole for the developed countries; the doubtful "additionality" of nuclear power; treaty ratification is more difficult with nuclear power; domestic energy conservation in Europe is more efficient than nuclear development; nuclear white elephants in the face of Southern debt; doubtful technology transfers; developing countries and sustainable development policies are evicted from the MDP; some options are more effective in the South; the size of reactors and transport networks is unsuited; the absence of democratic control; nuclear proliferation; and nuclear safety and wastes. (A.L.B.)

  11. Summary of Great East Japan Earthquake response at Onagawa Nuclear Power Station and further safety improvement measures

    International Nuclear Information System (INIS)

    Sato, Toru

    2013-01-01

    A large earthquake occurred on March 11, 2011, and a tsunami was generated following it. Eastern Japan suffered serious damage from the earthquake and tsunami; this event is called the Great East Japan Earthquake. Onagawa Nuclear Power Station (NPS) was located closest to the epicenter of the Great East Japan Earthquake. We experienced intense shaking from the earthquake and some flooding from the tsunami; however, we succeeded in safely bringing the reactors to cold shutdown. In this paper, we summarize the Great East Japan Earthquake response at Onagawa NPS and the safety improvement measures, which are based on both the experience of Onagawa NPS and lessons from the Fukushima Daiichi NPS accident. (author)

  12. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    Science.gov (United States)

    Pasten, D.

    2017-12-01

    Earthquake complex networks have been shown to reveal specific features in seismic data sets. In space, these networks show a scale-free behavior of the connectivity distribution for directed networks, and a small-world behavior for undirected networks. In this work, we present an earthquake complex network analysis for the large Mw 8.2 earthquake that occurred in the north of Chile (near Iquique) in April 2014. An earthquake complex network is built by dividing the three-dimensional space into cubic cells; if a cell contains a hypocenter, we call that cell a node. The connections between nodes are generated in time: we follow the time sequence of seismic events and make a connection between the corresponding nodes. This yields two different networks: a directed and an undirected network. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: we define the connectivity ki of the i-th node as the number of connections going out of node i plus its self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network. For undirected networks, we consider only whether two nodes are connected or not. We have built a directed complex network and an undirected complex network, before and after the large earthquake in Iquique, using magnitudes greater than Mw = 1.0 and Mw = 3.0. We find that this method can recognize the influence of small seismic events on the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake in this complex system. This method also shows a difference in the values of the critical exponent γ (for the probability
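The cell-and-connection construction described in the abstract is easy to sketch. The snippet below is an illustrative implementation, not the authors' code; the 10 km cell size and the toy hypocenter list are assumptions:

```python
from collections import defaultdict

CELL = 10.0  # cubic-cell edge length in km (a tunable parameter per the abstract)

def cell_of(hypocenter):
    """Map an (x, y, z) hypocenter in km to its cubic-cell index (the node)."""
    x, y, z = hypocenter
    return (int(x // CELL), int(y // CELL), int(z // CELL))

def build_networks(events):
    """events: time-ordered (x, y, z) hypocenters.
    Returns the directed edge list (self-connections kept, as in the
    abstract) and the undirected edge set (directions and self-loops removed)."""
    directed = []
    undirected = set()
    prev = None
    for ev in events:
        node = cell_of(ev)
        if prev is not None:
            directed.append((prev, node))          # consecutive-in-time link
            if prev != node:
                undirected.add(frozenset((prev, node)))
        prev = node
    return directed, undirected

def out_degree(directed):
    """Connectivity k_i: outgoing connections per node, self-loops included."""
    k = defaultdict(int)
    for src, _ in directed:
        k[src] += 1
    return k

# Toy catalog: the first two events fall in the same cell (a self-connection).
events = [(1.0, 2.0, 3.0), (2.0, 3.0, 4.0), (55.0, 2.0, 3.0)]
directed, undirected = build_networks(events)
print(out_degree(directed))
```

Varying `CELL` changes which events share a node, which is why the abstract identifies cell size as an important factor in the analysis.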

  13. Sensing the earthquake

    Science.gov (United States)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the earthquakes. We try to implement these results into a choreographic model with the aim to convert earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the earthquake phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  14. Turkish Children's Ideas about Earthquakes

    Science.gov (United States)

    Simsek, Canan Lacin

    2007-01-01

    Earthquake, a natural disaster, is among the fundamental problems of many countries. If people know how to protect themselves from earthquake and arrange their life styles in compliance with this, damage they will suffer will reduce to that extent. In particular, a good training regarding earthquake to be received in primary schools is considered…

  15. Earthquakes, May-June 1991

    Science.gov (United States)

    Person, W.J.

    1992-01-01

    One major earthquake occurred during this reporting period. This was a magnitude 7.1 in Indonesia (Minahassa Peninsula) on June 20. Earthquake-related deaths were reported in the Western Caucasus (Georgia, USSR) on May 3 and June 15. One earthquake-related death was also reported in El Salvador on June 21.

  16. Organizational changes at Earthquakes & Volcanoes

    Science.gov (United States)

    Gordon, David W.

    1992-01-01

    Primary responsibility for the preparation of Earthquakes & Volcanoes within the Geological Survey has shifted from the Office of Scientific Publications to the Office of Earthquakes, Volcanoes, and Engineering (OEVE). As a consequence of this reorganization, Henry Spall has stepped down as Science Editor for Earthquakes & Volcanoes (E&V).

  17. Twelve type 2 diabetes susceptibility loci identified through large-scale association analysis

    DEFF Research Database (Denmark)

    Voight, Benjamin F; Scott, Laura J; Steinthorsdottir, Valgerdur

    2010-01-01

    By combining genome-wide association data from 8,130 individuals with type 2 diabetes (T2D) and 38,987 controls of European descent and following up previously unidentified meta-analysis signals in a further 34,412 cases and 59,925 controls, we identified 12 new T2D association signals...

  18. Twelve type 2 diabetes susceptibility loci identified through large-scale association analysis

    NARCIS (Netherlands)

    B.F. Voight (Benjamin); L.J. Scott (Laura); V. Steinthorsdottir (Valgerdur); A.D. Morris (Andrew); C. Dina (Christian); R.P. Welch (Ryan); E. Zeggini (Eleftheria); C. Huth (Cornelia); Y.S. Aulchenko (Yurii); G. Thorleifsson (Gudmar); L.J. McCulloch (Laura); T. Ferreira (Teresa); H. Grallert (Harald); N. Amin (Najaf); G. Wu (Guanming); C.J. Willer (Cristen); S. Raychaudhuri (Soumya); S.A. McCarroll (Steven); C. Langenberg (Claudia); O.M. Hofmann (Oliver); J. Dupuis (Josée); L. Qi (Lu); A.V. Segrè (Ayellet); M. van Hoek (Mandy); P. Navarro (Pau); K.G. Ardlie (Kristin); B. Balkau (Beverley); R. Benediktsson (Rafn); A.J. Bennett (Amanda); R. Blagieva (Roza); E.A. Boerwinkle (Eric); L.L. Bonnycastle (Lori); K.B. Boström (Kristina Bengtsson); B. Bravenboer (Bert); S. Bumpstead (Suzannah); N.P. Burtt (Noël); G. Charpentier (Guillaume); P.S. Chines (Peter); M. Cornelis (Marilyn); D.J. Couper (David); G. Crawford (Gabe); A.S.F. Doney (Alex); K.S. Elliott (Katherine); M.R. Erdos (Michael); C.S. Fox (Caroline); C.S. Franklin (Christopher); M. Ganser (Martha); C. Gieger (Christian); N. Grarup (Niels); T. Green (Todd); S. Griffin (Simon); C.J. Groves (Christopher); C. Guiducci (Candace); S. Hadjadj (Samy); N. Hassanali (Neelam); C. Herder (Christian); B. Isomaa (Bo); A.U. Jackson (Anne); P.R.V. Johnson (Paul); T. Jørgensen (Torben); W.H.L. Kao (Wen); N. Klopp (Norman); A. Kong (Augustine); P. Kraft (Peter); J. Kuusisto (Johanna); T. Lauritzen (Torsten); M. Li (Man); A. Lieverse (Aloysius); C.M. Lindgren (Cecilia); V. Lyssenko (Valeriya); M. Marre (Michel); T. Meitinger (Thomas); K. Midthjell (Kristian); M.A. Morken (Mario); N. Narisu (Narisu); P. Nilsson (Peter); K.R. Owen (Katharine); F. Payne (Felicity); J.R.B. Perry (John); A.K. Petersen; C. Platou (Carl); C. Proença (Christine); I. Prokopenko (Inga); W. Rathmann (Wolfgang); N.W. Rayner (Nigel William); N.R. Robertson (Neil); G. Rocheleau (Ghislain); M. Roden (Michael); M.J. Sampson (Michael); R. Saxena (Richa); B.M. 
Shields (Beverley); P. Shrader (Peter); G. Sigurdsson (Gunnar); T. Sparsø (Thomas); K. Strassburger (Klaus); H.M. Stringham (Heather); Q. Sun (Qi); A.J. Swift (Amy); B. Thorand (Barbara); J. Tichet (Jean); T. Tuomi (Tiinamaija); R.M. van Dam (Rob); T.W. van Haeften (Timon); T.W. van Herpt (Thijs); J.V. van Vliet-Ostaptchouk (Jana); G.B. Walters (Bragi); M.N. Weedon (Michael); C. Wijmenga (Cisca); J.C.M. Witteman (Jacqueline); R.N. Bergman (Richard); S. Cauchi (Stephane); F.S. Collins (Francis); A.L. Gloyn (Anna); U. Gyllensten (Ulf); T. Hansen (Torben); W.A. Hide (Winston); G.A. Hitman (Graham); A. Hofman (Albert); D. Hunter (David); K. Hveem (Kristian); M. Laakso (Markku); K.L. Mohlke (Karen); C.N.A. Palmer (Colin); P.P. Pramstaller (Peter Paul); I. Rudan (Igor); E.J.G. Sijbrands (Eric); L.D. Stein (Lincoln); J. Tuomilehto (Jaakko); A.G. Uitterlinden (André); M. Walker (Mark); N.J. Wareham (Nick); G.R. Abecasis (Gonçalo); B.O. Boehm (Bernhard); H. Campbell (Harry); M.J. Daly (Mark); A.T. Hattersley (Andrew); F.B. Hu (Frank); J.B. Meigs (James); J.S. Pankow (James); O. Pedersen (Oluf); H.E. Wichmann (Erich); I.E. Barroso (Inês); J.C. Florez (Jose); T.M. Frayling (Timothy); L. Groop (Leif); R. Sladek (Rob); U. Thorsteinsdottir (Unnur); J.F. Wilson (James); T. Illig (Thomas); P. Froguel (Philippe); P. Tikka-Kleemola (Päivi); J-A. Zwart (John-Anker); D. Altshuler (David); M. Boehnke (Michael); M.I. McCarthy (Mark); R.M. Watanabe (Richard)

    2010-01-01

    By combining genome-wide association data from 8,130 individuals with type 2 diabetes (T2D) and 38,987 controls of European descent and following up previously unidentified meta-analysis signals in a further 34,412 cases and 59,925 controls, we identified 12 new T2D association signals

  19. Twelve type 2 diabetes susceptibility loci identified through large-scale association analysis

    OpenAIRE

    Voight, Benjamin; Scott, Laura; Steinthorsdottir, Valgerdur; Morris, Andrew; Dina, Christian; Welch, Ryan; Zeggini, Eleftheria; Huth, Cornelia; Aulchenko, Yurii; Thorleifsson, Gudmar; McCulloch, Laura; Ferreira, Teresa; Grallert, Harald; Amin, Najaf; Wu, Guanming

    2010-01-01

    By combining genome-wide association data from 8,130 individuals with type 2 diabetes (T2D) and 38,987 controls of European descent and following up previously unidentified meta-analysis signals in a further 34,412 cases and 59,925 controls, we identified 12 new T2D association signals with combined P < 5 × 10^-8. These include a second independent signal at the KCNQ1 locus; the first report, to our knowledge, of an X-chromosomal association (near DUSP9); and a further instance of ov...

  20. The 1976 Tangshan earthquake

    Science.gov (United States)

    Fang, Wang

    1979-01-01

    The Tangshan earthquake of 1976 was one of the largest earthquakes in recent years. It occurred on July 28 at 3:42 a.m., Beijing (Peking) local time, and had magnitude 7.8, a focal depth of 15 kilometers, and an epicentral intensity of XI on the New Chinese Seismic Intensity Scale; it caused serious damage and loss of life in this densely populated industrial city. Now, with the help of people from all over China, the city of Tangshan is being rebuilt.

  1. [Earthquakes in El Salvador].

    Science.gov (United States)

    de Ville de Goyet, C

    2001-02-01

    The Pan American Health Organization (PAHO) has 25 years of experience dealing with major natural disasters. This piece provides a preliminary review of the events taking place in the weeks following the major earthquakes in El Salvador on 13 January and 13 February 2001. It also describes the lessons that have been learned over the last 25 years and the impact that the El Salvador earthquakes and other disasters have had on the health of the affected populations. Topics covered include mass-casualties management, communicable diseases, water supply, managing donations and international assistance, damages to the health-facilities infrastructure, mental health, and PAHO's role in disasters.

  2. Earthquake correlations and networks: A comparative study

    Science.gov (United States)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-04-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.
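The abstract's correlation metric is described as a variation on the Baiesi-Paczuski space-time-magnitude metric. A sketch of the commonly cited form of that metric, not necessarily the exact variation used in the study, with illustrative b-value and fractal-dimension parameters:

```python
import math

# Commonly cited Baiesi-Paczuski form: for event j following event i,
#   n_ij = t_ij * r_ij**DF * 10**(-B * m_i)
# Small n_ij means event j is strongly correlated with (a likely
# aftershock of) event i. B and DF here are illustrative choices.
B = 1.0    # Gutenberg-Richter b-value (assumed)
DF = 1.6   # fractal dimension of epicenters (assumed)

def metric(t_ij_days, r_ij_km, mag_i):
    return t_ij_days * (r_ij_km ** DF) * 10.0 ** (-B * mag_i)

def most_correlated_parent(catalog, j):
    """catalog: time-ordered list of (t_days, x_km, y_km, mag).
    Returns the index of the earlier event most correlated with event j,
    i.e. the minimizer of n_ij over i < j."""
    tj, xj, yj, _ = catalog[j]
    best, best_n = None, math.inf
    for i in range(j):
        ti, xi, yi, mi = catalog[i]
        r = math.hypot(xj - xi, yj - yi)
        n = metric(tj - ti, max(r, 1e-3), mi)  # clamp r to avoid r = 0
        if n < best_n:
            best, best_n = i, n
    return best

# Toy catalog: an Mw 6.5 mainshock followed by two nearby small events.
catalog = [(0.0, 0.0, 0.0, 6.5), (1.0, 5.0, 0.0, 3.0), (2.0, 4.0, 1.0, 2.5)]
print(most_correlated_parent(catalog, 2))  # links back to the mainshock: 0
```

Linking each event to its most correlated predecessor, or to all predecessors above a correlation threshold, yields exactly the kind of directed recurrence network the abstract analyzes.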

  3. Earthquake correlations and networks: A comparative study

    International Nuclear Information System (INIS)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-01-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.

  4. Building the Southern California Earthquake Center

    Science.gov (United States)

    Jordan, T. H.; Henyey, T.; McRaney, J. K.

    2004-12-01

    Kei Aki was the founding director of the Southern California Earthquake Center (SCEC), a multi-institutional collaboration formed in 1991 as a Science and Technology Center (STC) under the National Science Foundation (NSF) and the U. S. Geological Survey (USGS). Aki and his colleagues articulated a system-level vision for the Center: investigations by disciplinary working groups would be woven together into a "Master Model" for Southern California. In this presentation, we will outline how the Master-Model concept has evolved and how SCEC's structure has adapted to meet scientific challenges of system-level earthquake science. In its first decade, SCEC conducted two regional imaging experiments (LARSE I & II); published the "Phase-N" reports on (1) the Landers earthquake, (2) a new earthquake rupture forecast for Southern California, and (3) new models for seismic attenuation and site effects; it developed two prototype "Community Models" (the Crustal Motion Map and Community Velocity Model) and, perhaps most important, sustained a long-term, multi-institutional, interdisciplinary collaboration. The latter fostered pioneering numerical simulations of earthquake ruptures, fault interactions, and wave propagation. These accomplishments provided the impetus for a successful proposal in 2000 to reestablish SCEC as a "stand alone" center under NSF/USGS auspices. SCEC remains consistent with the founders' vision: it continues to advance seismic hazard analysis through a system-level synthesis that is based on community models and an ever expanding array of information technology. SCEC now represents a fully articulated "collaboratory" for earthquake science, and many of its features are extensible to other active-fault systems and other system-level collaborations. We will discuss the implications of the SCEC experience for EarthScope, the USGS's program in seismic hazard analysis, NSF's nascent Cyberinfrastructure Initiative, and other large collaboratory programs.

  5. What Googling Trends Tell Us About Public Interest in Earthquakes

    Science.gov (United States)

    Tan, Y. J.; Maharjan, R.

    2017-12-01

    Previous studies have shown that immediately after large earthquakes, there is a period of increased public interest. This represents a window of opportunity for science communication and disaster relief fundraising efforts to reach more people. However, how public interest varies for different earthquakes has not been quantified systematically on a global scale. We analyze how global search interest for the term "earthquake" on Google varies following earthquakes of magnitude ≥ 5.5 from 2004 to 2016. We find that there is a spike in search interest after large earthquakes followed by an exponential temporal decay. Preliminary results suggest that the period of increased search interest scales with death toll and correlates with the period of increased media coverage. This suggests that the relationship between the period of increased public interest in earthquakes and death toll might be an effect of differences in media coverage. However, public interest never remains elevated for more than three weeks. Therefore, to take advantage of this short period of increased public interest, science communication and disaster relief fundraising efforts have to act promptly following devastating earthquakes.
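The spike-then-exponential-decay pattern described above can be quantified by fitting s(t) = s0 · exp(-t/τ) to a post-earthquake interest series via least squares on log(s). The daily values below are synthetic, not actual Google Trends data:

```python
import math

def fit_decay(days, interest):
    """Fit s(t) = s0 * exp(-t / tau) by ordinary least squares on log(s).
    Returns (s0, tau); tau is the e-folding time of public interest."""
    logs = [math.log(s) for s in interest]
    n = len(days)
    mt = sum(days) / n
    ml = sum(logs) / n
    stt = sum((t - mt) ** 2 for t in days)
    stl = sum((t - mt) * (l - ml) for t, l in zip(days, logs))
    slope = stl / stt              # slope of log(s) vs t equals -1 / tau
    tau = -1.0 / slope
    s0 = math.exp(ml - slope * mt)
    return s0, tau

# Synthetic post-event series with a known 3-day decay time.
days = list(range(8))
interest = [100.0 * math.exp(-d / 3.0) for d in days]
s0, tau = fit_decay(days, interest)
print(s0, tau)
```

Comparing fitted τ values across events is one way to test the abstract's claim that the duration of elevated interest scales with death toll and media coverage.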

  6. GEM - The Global Earthquake Model

    Science.gov (United States)

    Smolka, A.

    2009-04-01

    Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public at large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a

  7. The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake?

    Science.gov (United States)

    Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena

    2017-04-01

    Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low-frequency earthquakes, slow slip events, and other modes of fault slip. These phenomena represent modes of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes, their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. Whereas the regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, slow earthquakes release energy in ways that do little damage. This study focuses on the characteristics of the Mw5.6 earthquake that occurred in the Sofia seismic zone on 22 May 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria and faces considerable earthquake risk. The Sofia seismic zone is located in south-western Bulgaria, an area with pronounced tectonic activity and proven crustal movement. In the 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) experienced two strong earthquakes with epicentral intensity of 10 MSK. During the 20th century the strongest event in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The 2012 quake occurred in an area characterized by a long quiescence (95 years) for moderate events. Moreover, relatively few small earthquakes have been registered in the recent past. The Mw5.6 earthquake was widely felt on the territory of Bulgaria and in neighbouring countries. No casualties or severe injuries were reported. Mostly moderate damage was observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative of a

  8. Twelve fundamental life histories evolving through allocation-dependent fecundity and survival.

    Science.gov (United States)

    Johansson, Jacob; Brännström, Åke; Metz, Johan A J; Dieckmann, Ulf

    2018-03-01

    An organism's life history is closely interlinked with its allocation of energy between growth and reproduction at different life stages. Theoretical models have established that diminishing returns from reproductive investment promote strategies with simultaneous investment into growth and reproduction (indeterminate growth) over strategies with distinct phases of growth and reproduction (determinate growth). We extend this traditional, binary classification by showing that allocation-dependent fecundity and mortality rates allow for a large diversity of optimal allocation schedules. By analyzing a model of organisms that allocate energy between growth and reproduction, we find twelve types of optimal allocation schedules, differing qualitatively in how reproductive allocation increases with body mass. These twelve optimal allocation schedules include types with different combinations of continuous and discontinuous increase in reproductive allocation, in which phases of continuous increase can be decelerating or accelerating. We furthermore investigate how this variation influences growth curves and the expected maximum life span and body size. Our study thus reveals new links between eco-physiological constraints and life-history evolution and underscores how allocation-dependent fitness components may underlie biological diversity.

  9. Global risk of big earthquakes has not recently increased.

    Science.gov (United States)

    Shearer, Peter M; Stark, Philip B

    2012-01-17

    The recent elevated rate of large earthquakes has fueled concern that the underlying global rate of earthquake activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. We examine the timing of large (magnitude M≥7) earthquakes from 1900 to the present, after removing local clustering related to aftershocks. The global rate of M≥8 earthquakes has been at a record high roughly since 2004, but rates have been almost as high before, and the rate of smaller earthquakes is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences--if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global rate of large events. Together these facts suggest that the global risk of large earthquakes is no higher today than it has been in the past.
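The statistical tests used in the paper are more elaborate, but one simple way to check a declustered catalog against a homogeneous Poisson process is an index-of-dispersion test on annual event counts. A minimal sketch, assuming NumPy and SciPy are available; the `poisson_dispersion_test` helper and the synthetic rate below are invented for illustration:

```python
import numpy as np
from scipy import stats

def poisson_dispersion_test(annual_counts):
    """Index-of-dispersion test: under a homogeneous Poisson process,
    yearly event counts have variance equal to their mean, so
    (n - 1) * sample variance / sample mean is ~ chi-square(n - 1).
    Returns (dispersion statistic, p-value for overdispersion)."""
    counts = np.asarray(annual_counts, dtype=float)
    n = counts.size
    d = (n - 1) * counts.var(ddof=1) / counts.mean()
    return d, stats.chi2.sf(d, df=n - 1)

# Synthetic "catalog": 100 years of declustered counts at a constant rate.
rng = np.random.default_rng(0)
d, p = poisson_dispersion_test(rng.poisson(lam=15.0, size=100))
```

A small p-value would indicate counts too variable (clustered) to be Poisson; a catalog consistent with constant rate yields an unremarkable p.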

  10. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    Science.gov (United States)

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
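The Gutenberg-Richter law mentioned above, log10 N = a - b*M, is commonly fit with Aki's maximum-likelihood b-value estimator. This is a minimal sketch of that standard estimator, not the fusion-fission model of the paper; the catalog below is synthetic:

```python
import math
import random

def aki_b_value(magnitudes, m_c, dm=0.0):
    """Aki (1965) maximum-likelihood Gutenberg-Richter b-value from
    magnitudes at or above the completeness magnitude m_c; dm is the
    magnitude binning width (0 for continuous magnitudes)."""
    m = [x for x in magnitudes if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

# Synthetic catalog: above m_c, GR predicts exponentially distributed
# magnitude excesses with rate b * ln(10); here the true b is 1.0.
random.seed(42)
mags = [4.0 + random.expovariate(math.log(10)) for _ in range(50000)]
b_est = aki_b_value(mags, m_c=4.0)  # close to 1.0
```

The estimator recovers b ≈ 1 for the synthetic data, illustrating the equilibrium magnitude-frequency behavior that any catalog-based forecasting model must reproduce.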

  11. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    Science.gov (United States)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on a rapid estimate of the P-wave magnitude, which generally carries large uncertainties and a known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. Subsequent updates of the magnitude even decreased to M6.3-6.6. The magnitude estimate finally stabilized at M8.1 after about two minutes, which consequently led to underestimated tsunami heights. By using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test for the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would theoretically have been possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique can help reduce false and missed warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  12. Earthquake Safety Tips in the Classroom

    Science.gov (United States)

    Melo, M. O.; Maciel, B. A. P. C.; Neto, R. P.; Hartmann, R. P.; Marques, G.; Gonçalves, M.; Rocha, F. L.; Silveira, G. M.

    2014-12-01

    The catastrophes induced by earthquakes are among the most devastating, causing an elevated number of human losses and economic damages. But we have to keep in mind that earthquakes don't kill people, buildings do. Earthquakes can't be predicted, and the only way of dealing with their effects is to teach society how to be prepared for them and how to deal with their consequences. In spite of being exposed to moderate and large earthquakes, most of the Portuguese are little aware of seismic risk, mainly due to the long recurrence intervals between strong events. The acquisition of safe and correct attitudes before, during and after an earthquake is relevant for human security. Children play a decisive role in the establishment of a real and long-lasting "culture of prevention", both through action and new attitudes. On the other hand, when children assume correct behaviors, their relatives often change their incorrect behaviors to mimic the correct behaviors of their kids. In the framework of a Parents-in-Science initiative, we started with bi-monthly sessions for children aged 5-6 years old and 9-10 years old. These sessions, in which parents, teachers and high-school students participate, became part of the school's permanent activities. We start with a short introduction to the Earth and to earthquakes through storytelling and simple science activities that trigger children's curiosity. For safety purposes, we focus on how crucial it is for children to know basic information about themselves and to define, with their families, an emergency communications plan in case family members are separated. Using a shaking table, we teach them how to protect themselves during an earthquake. We then finish with the preparation of an individual emergency kit. This presentation will highlight the importance of encouraging preventive actions in order to reduce the impact of earthquakes on society. This project is developed by science high-school students and teachers, in

  13. Retrospective Evaluation of the Long-Term CSEP-Italy Earthquake Forecasts

    Science.gov (United States)

    Werner, M. J.; Zechar, J. D.; Marzocchi, W.; Wiemer, S.

    2010-12-01

    On 1 August 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP) launched a prospective and comparative earthquake predictability experiment in Italy. The goal of the CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented eighteen five-year and ten-year earthquake forecasts to the European CSEP Testing Center at ETH Zurich. We considered the twelve time-independent earthquake forecasts among this set and evaluated them with respect to past seismicity data from two Italian earthquake catalogs. Here, we present the results of tests that measure the consistency of the forecasts with the past observations. Besides being an evaluation of the submitted time-independent forecasts, this exercise provided insight into a number of important issues in predictability experiments with regard to the specification of the forecasts, the performance of the tests, and the trade-off between the robustness of results and experiment duration.
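Consistency tests of this kind often compare observed event counts against a forecast under a Poisson assumption. The sketch below is a simplified number-style (N-) test in that spirit, not the exact CSEP implementation; the `n_test` helper and its inputs are invented for illustration (SciPy assumed available):

```python
from scipy import stats

def n_test(expected_count, observed_count):
    """Number-test sketch: given a forecast's total expected event rate,
    treat the observed count as Poisson and return the probabilities of
    seeing at most (delta1) and at least (delta2) that many events.
    A very small value of either flags an inconsistent forecast."""
    delta1 = stats.poisson.cdf(observed_count, expected_count)
    delta2 = stats.poisson.sf(observed_count - 1, expected_count)
    return delta1, delta2

# A forecast expecting 100 events badly overpredicts if only 50 occur.
d1, d2 = n_test(100.0, 50)
```

Spatial (S-) and likelihood (L-) tests extend the same idea to the forecast's full rate map rather than its total count.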

  14. Amplitude of foreshocks as a possible seismic precursor to earthquakes

    Science.gov (United States)

    Lindh, A.G.

    1978-01-01

    In recent years, we have made significant progress in being able to recognize the long-range pattern of events that precedes large earthquakes. For example, in a recent issue of the Earthquake Information Bulletin, we saw how the pioneering work of S.A. Fedotov of the U.S.S.R. in the Kamchatka-Kurile Islands region has been applied worldwide to forecast where large, shallow earthquakes might occur in the next decades. Indeed, such a "seismic gap" off the coast of Alaska was filled by the 1972 Sitka earthquake. Promising results are slowly accumulating from other techniques that suggest that intermediate-term precursors might also be seen: among these are tilt and geomagnetic anomalies and anomalous land uplift. But the crucial point remains that short-term precursors (days to hours) will be needed in many cases if there is to be a significant saving of lives.

  15. The EM Earthquake Precursor

    Science.gov (United States)

    Jones, K. B., II; Saxton, P. T.

    2013-12-01

    Many attempts have been made to determine a sound forecasting method for earthquakes and, in turn, to warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After the 1989 Loma Prieta Earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate with an identifiable precursor. Secondly, there is no networked array for finding epicentral locations, nor have there been any attempts to establish one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have gained. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February 2013 detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two

  16. Simulated earthquake ground motions

    International Nuclear Information System (INIS)

    Vanmarcke, E.H.; Gasparini, D.A.

    1977-01-01

    The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures to generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work, are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms, together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response, have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids, as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to vary in time or to remain constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with, a set of specified smooth response spectra.
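The superposition model described above can be sketched in a few lines. This is an illustrative toy, not SIMQKE; the envelope, frequencies, and amplitudes are invented, and in practice the amplitudes would be derived from a target spectral density and iteratively adjusted for spectrum compatibility:

```python
import numpy as np

def synthesize_motion(freqs, amplitudes, duration, dt=0.01, seed=0):
    """Spectral-representation sketch: superpose sinusoids with given
    amplitudes and uniformly random phase angles, then shape the result
    with a simple intensity envelope to mimic build-up and decay."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    motion = sum(a * np.sin(2.0 * np.pi * f * t + p)
                 for f, a, p in zip(freqs, amplitudes, phases))
    envelope = np.sin(np.pi * t / duration) ** 2  # crude intensity shape
    return t, motion * envelope

# Hypothetical three-component synthetic accelerogram, 20 s long.
t, acc = synthesize_motion([1.0, 2.5, 5.0], [0.3, 0.2, 0.1], duration=20.0)
```

A spectrum-compatible generator would additionally compute the response spectrum of `acc` and rescale the amplitudes until it matches the smooth design spectrum.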

  17. Hotspots, Lifelines, and the SAFRR HayWired Earthquake Sequence

    Science.gov (United States)

    Ratliff, J. L.; Porter, K.

    2014-12-01

    Though California has experienced many large earthquakes (San Francisco, 1906; Loma Prieta, 1989; Northridge, 1994), the San Francisco Bay Area has not had a damaging earthquake for 25 years. Earthquake risk and surging reliance on smartphones and the Internet to handle everyday tasks raise the question: is an increasingly technology-reliant Bay Area prepared for potential infrastructure impacts caused by a major earthquake? How will a major earthquake on the Hayward Fault affect lifelines (roads, power, water, communication, etc.)? The U.S. Geological Survey Science Application for Risk Reduction (SAFRR) program's HayWired disaster scenario, a hypothetical two-year earthquake sequence triggered by an M7.05 mainshock on the Hayward Fault, addresses these and other questions. We explore four geographic aspects of lifeline damage from earthquakes: (1) geographic lifeline concentrations, (2) areas where lifelines pass through high shaking or potential ground-failure zones, (3) areas with diminished lifeline service demand due to severe building damage, and (4) areas with increased lifeline service demand due to displaced residents and businesses. Potential mainshock lifeline vulnerability and spatial demand changes will be discerned by superimposing earthquake shaking, liquefaction probability, and landslide probability damage thresholds with lifeline concentrations and with large-capacity shelters. Intersecting high hazard levels and lifeline clusters represent potential lifeline susceptibility hotspots. We will also analyze possible temporal vulnerability and demand changes using an aftershock shaking threshold. The results of this analysis will inform regional lifeline resilience initiatives and response and recovery planning, as well as reveal potential redundancies and weaknesses for Bay Area lifelines. Identified spatial and temporal hotspots can provide stakeholders with a reference for possible systemic vulnerability resulting from an earthquake sequence.

  18. Geomorphic legacy of medieval Himalayan earthquakes in the Pokhara Valley

    Science.gov (United States)

    Schwanghart, Wolfgang; Bernhardt, Anne; Stolle, Amelie; Hoelzmann, Philipp; Adhikari, Basanta R.; Andermann, Christoff; Tofelde, Stefanie; Merchel, Silke; Rugel, Georg; Fort, Monique; Korup, Oliver

    2016-04-01

    The Himalayas and their foreland belong to the world's most earthquake-prone regions. With millions of people at risk from severe ground shaking and associated damage, reliable data on the spatial and temporal occurrence of past major earthquakes are urgently needed to inform seismic risk analysis. Beyond the instrumental record, such information has been largely based on historical accounts and trench studies. Written records provide evidence for damage and fatalities, yet are difficult to interpret when derived from the far-field. Trench studies, in turn, offer information on rupture histories, lengths and displacements along faults, but involve high chronological uncertainties and fail to record earthquakes that do not rupture the surface. Thus, additional and independent information is required for developing reliable earthquake histories. Here, we present exceptionally well-dated evidence of catastrophic valley infill in the Pokhara Valley, Nepal. Bayesian calibration of radiocarbon dates from peat beds, plant macrofossils, and humic silts in fine-grained tributary sediments yields a robust age distribution that matches the timing of nearby M>8 earthquakes in ~1100, 1255, and 1344 AD. The upstream dip of tributary valley fills and X-ray fluorescence spectrometry of their provenance rule out local sediment sources. Instead, geomorphic and sedimentary evidence is consistent with catastrophic fluvial aggradation and debris flows that plugged several tributaries with tens of meters of calcareous sediment from the Annapurna Massif >60 km away. The landscape-changing consequences of past large Himalayan earthquakes have so far been elusive. Catastrophic aggradation in the wake of two historically documented medieval earthquakes and one inferred from trench studies underscores that Himalayan valley fills should be considered as potential archives of past earthquakes. Such valley fills are pervasive in the Lesser Himalaya, though high erosion rates reduce

  19. What is the earthquake fracture energy?

    Science.gov (United States)

    Di Toro, G.; Nielsen, S. B.; Passelegue, F. X.; Spagnuolo, E.; Bistacchi, A.; Fondriest, M.; Murphy, S.; Aretusini, S.; Demurtas, M.

    2016-12-01

    The energy budget of an earthquake is one of the main open questions in earthquake physics. During seismic rupture propagation, the elastic strain energy stored in the rock volume that bounds the fault is converted into (1) gravitational work (relative movement of the wall rocks bounding the fault), (2) in- and off-fault damage of the fault zone rocks (due to rupture propagation and frictional sliding), (3) frictional heating and, of course, (4) seismic radiated energy. The difficulty in determining the budget arises from the measurement of some parameters (e.g., the temperature increase in the slipping zone, which constrains the frictional heat), from the poorly constrained size of the energy sinks (e.g., how large is the rock volume involved in off-fault damage?) and from the continuous exchange of energy between different sinks (for instance, fragmentation and grain size reduction may result from both the passage of the rupture front and frictional heating). Field geology studies, microstructural investigations, experiments and modelling may yield some hints. Here we discuss (1) the discrepancies arising from the comparison of the fracture energy measured in experiments reproducing seismic slip with the one estimated from seismic inversion for natural earthquakes and (2) the off-fault damage induced by the diffusion of frictional heat during simulated seismic slip in the laboratory. Our analysis suggests, for instance, that the so-called earthquake fracture energy (1) is mainly frictional heat for small slips and (2), with increasing slip, is controlled by the geometrical complexity and other plastic processes occurring in the damage zone. As a consequence, because faults are rapidly and efficiently lubricated upon fast slip initiation, the dominant dissipation mechanism in large earthquakes may not be friction but the off-fault damage due to fault segmentation and stress concentrations in a growing region around the fracture tip.

  20. Commercializing Government-sponsored Innovations: Twelve Successful Buildings Case Studies

    Science.gov (United States)

    Brown, M. A.; Berry, L. G.; Goel, R. K.

    1989-01-01

    This report examines the commercialization and use of R and D results funded by DOE's Office of Buildings and Community Systems (OBCS), an office that is dedicated to improving the energy efficiency of the nation's buildings. Three goals guided the research described in this report: to improve understanding of the factors that hinder or facilitate the transfer of OBCS R and D results, to determine which technology transfer strategies are most effective and under what circumstances each is appropriate, and to document the market penetration and energy savings achieved by successfully-commercialized innovations that have received OBCS support. Twelve successfully-commercialized innovations are discussed here. The methodology employed involved a review of the literature, interviews with innovation program managers and industry personnel, and data collection from secondary sources. Six generic technology transfer strategies are also described. Of these, contracting R and D to industrial partners is found to be the most commonly used strategy in our case studies. The market penetration achieved to date by the innovations studied ranges from less than 1% to 100%. For the three innovations with the highest predicted levels of energy savings (i.e., the flame retention head oil burner, low-E windows, and solid-state ballasts), combined cumulative savings by the year 2000 are likely to approach 2 quads. To date the energy savings for these three innovations have been about 0.2 quads. Our case studies illustrate the important role federal agencies can play in commercializing new technologies.

  1. THE ELM SURVEY. II. TWELVE BINARY WHITE DWARF MERGER SYSTEMS

    International Nuclear Information System (INIS)

    Kilic, Mukremin; Brown, Warren R.; Kenyon, S. J.; Prieto, Carlos Allende; Agüeros, M. A.; Heinke, Craig

    2011-01-01

    We describe new radial velocity and X-ray observations of extremely low-mass white dwarfs (ELM WDs, ∼0.2 M☉) in the Sloan Digital Sky Survey Data Release 4 and the MMT Hypervelocity Star survey. We identify four new short period binaries, including two merger systems. These observations bring the total number of short period binary systems identified in our survey to 20. No main-sequence or neutron star companions are visible in the available optical photometry, radio, and X-ray data. Thus, the companions are most likely WDs. Twelve of these systems will merge within a Hubble time due to gravitational wave radiation. We have now tripled the number of known merging WD systems. We discuss the characteristics of this merger sample and potential links to underluminous supernovae, extreme helium stars, AM CVn systems, and other merger products. We provide new observational tests of the WD mass-period distribution and cooling models for ELM WDs. We also find evidence for a new formation channel for single low-mass WDs through binary mergers of two lower mass objects.

  2. Post-earthquake building safety inspection: Lessons from the Canterbury, New Zealand, earthquakes

    Science.gov (United States)

    Marshall, J.; Jaiswal, Kishor; Gould, N.; Turner, F.; Lizundia, B.; Barnes, J.

    2013-01-01

    The authors discuss some of the unique aspects and lessons of the New Zealand post-earthquake building safety inspection program that was implemented following the Canterbury earthquake sequence of 2010–2011. The post-event safety assessment program was one of the largest and longest programs undertaken in recent times anywhere in the world. The effort engaged hundreds of engineering professionals throughout the country, and also sought expertise from outside, to perform post-earthquake structural safety inspections of more than 100,000 buildings in the city of Christchurch and the surrounding suburbs. While the building safety inspection procedure implemented was analogous to the ATC 20 program in the United States, many modifications were proposed and implemented in order to assess the large number of buildings that were subjected to strong and variable shaking during a period of two years. This note discusses some of the key aspects of the post-earthquake building safety inspection program and summarizes important lessons that can improve future earthquake response.

  3. The earthquake lights (EQL) of the 6 April 2009 Aquila earthquake in Central Italy

    Directory of Open Access Journals (Sweden)

    C. Fidani

    2010-05-01

    A seven-month collection of testimonials about the 6 April 2009 earthquake in Aquila, Abruzzo region, Italy, was compiled into a catalogue of non-seismic phenomena. Luminous phenomena were often reported starting about nine months before the strong shock and continued until about five months after the shock. A summary and list of the characteristics of these sightings were made according to 20th century classifications, and a comparison was made with the Galli outcomes. These sightings were distributed over a large area around the city of Aquila, with a major extension to the north, up to 50 km. Various earthquake lights were correlated with several landscape characteristics and with the source and dynamics of the earthquake. Some preliminary considerations on the location of the sightings suggest a correlation between electrical discharges and asperities, while flames were mostly seen along the Aterno Valley.

  4. Uncovering the 2010 Haiti earthquake death toll

    Science.gov (United States)

    Daniell, J. E.; Khazai, B.; Wenzel, F.

    2013-05-01

    Casualties are estimated for the 12 January 2010 earthquake in Haiti using various reports calibrated by observed building damage states from satellite imagery and reconnaissance reports on the ground. By investigating various damage reports, casualty estimates and burial figures for a one-year period from 12 January 2010 until 12 January 2011, there is also strong evidence that the official government figures of 316 000 total dead and missing, reported to have been caused by the earthquake, are significantly overestimated. The authors have examined damage and casualty reports to arrive at their estimate that the median death toll is less than half of this value (±137 000). The authors show, through a study of historical earthquake death tolls, that overestimates of earthquake death tolls occur in many cases and are not unique to Haiti. As the death toll is one of the key elements for determining the amount of aid and reconstruction funds that will be mobilized, scientific means to estimate death tolls should be applied. Studies of international aid in recent natural disasters reveal that large distributions of aid which do not match the respective needs may cause an oversupply of help, aggravate corruption and social disruption rather than reduce them, and lead to distrust within the donor community.

  5. The 2016 Kumamoto Earthquakes: Cascading Geological Hazards and Compounding Risks

    Directory of Open Access Journals (Sweden)

    Katsuichiro Goda

    2016-08-01

    A sequence of two strike-slip earthquakes occurred on 14 and 16 April 2016 in the intraplate region of Kyushu Island, Japan, away from subduction zones, and caused significant damage and disruption to the Kumamoto region. The analyses of the regional seismic catalog and available strong-motion recordings reveal striking characteristics of the events, such as migrating seismicity, earthquake surface rupture, and major foreshock-mainshock earthquake sequences. To gain valuable lessons from the events, a UK Earthquake Engineering Field Investigation Team (EEFIT) was dispatched to Kumamoto, and earthquake damage surveys were conducted to relate observed earthquake characteristics to building and infrastructure damage caused by the earthquakes. The lessons learnt from the reconnaissance mission have important implications for current seismic design practice regarding the required seismic resistance of structures under multiple shocks and the seismic design of infrastructure subject to large ground deformation. The observations also highlight the consequences of cascading geological hazards on community resilience. To share the gathered damage data widely, geo-tagged photos are organized using Google Earth and the kmz file is made publicly available.

  6. An interdisciplinary approach to study Pre-Earthquake processes

    Science.gov (United States)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.

    2017-12-01

    We summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data, we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large-magnitude earthquakes in Asia and Europe have shown the importance of these studies in the search for earthquake precursors, whether for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and from analyses of seismic records (foreshocks/aftershocks) and of geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could advance our understanding of the physics of earthquakes and the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will subsequently be published in a new AGU/Wiley volume. This book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical and historical) involved in this important field of research, and to bring this knowledge and awareness to a broader geosciences community.

  7. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    Science.gov (United States)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A², between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
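    The three regimes described above can be sketched as a piecewise power law: M0 ∝ A^1.5 for small self-similar events, M0 ∝ A² in the transition regime, and M0 ∝ A for very large W-model events. The crossover areas and prefactor below are purely illustrative assumptions, not the calibrated values of the Japanese ground-motion prediction relation:

```python
import math

# Illustrative constants (hypothetical, chosen only to make the curve
# continuous; NOT values from the study above).
A1 = 500.0    # km^2, assumed end of the self-similar regime
A2 = 5000.0   # km^2, assumed start of the W-model regime
C = 1.0e16    # N*m, assumed prefactor

def moment_from_area(A):
    """Seismic moment M0 (N*m) for rupture area A (km^2), three regimes."""
    if A <= A1:                        # small events: self-similar, M0 ~ A^1.5
        return C * A ** 1.5
    M0_A1 = C * A1 ** 1.5              # enforce continuity at A1
    if A <= A2:                        # transition regime: M0 ~ A^2
        return M0_A1 * (A / A1) ** 2.0
    M0_A2 = M0_A1 * (A2 / A1) ** 2.0   # enforce continuity at A2
    return M0_A2 * (A / A2)            # very large events: W-model, M0 ~ A

def moment_magnitude(M0):
    """Hanks-Kanamori moment magnitude from M0 in N*m."""
    return (math.log10(M0) - 9.1) / 1.5
```

Doubling the rupture area then multiplies M0 by 2^1.5, 4, or 2 depending on which regime the event falls in, which is the qualitative behavior the abstract describes.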

  8. Earthquake Swarm in Armutlu Peninsula, Eastern Marmara Region, Turkey

    Science.gov (United States)

    Yavuz, Evrim; Çaka, Deniz; Tunç, Berna; Serkan Irmak, T.; Woith, Heiko; Cesca, Simone; Lühr, Birger-Gottfried; Barış, Şerif

    2015-04-01

    The most active fault system in Turkey is the North Anatolian Fault Zone, which caused two large earthquakes in 1999. These two earthquakes affected the eastern Marmara region destructively. An unbroken part of the North Anatolian Fault Zone crosses the north of the Armutlu Peninsula in an east-west direction. This branch is also located quite close to Istanbul, a megacity in terms of its population and its economic and social importance. A new cluster of microseismic activity occurred in the direct vicinity southeast of the Yalova Termal area. Activity started on August 2, 2014 with a series of micro events; then on August 3, 2014 a local magnitude 4.1 event occurred, and more than 1000 events followed until August 31, 2014. Thus we tentatively call this a swarm-like activity. Therefore, investigation of the micro-earthquake activity of the Armutlu Peninsula has become important for understanding the relationship between the occurrence of micro-earthquakes and the tectonic structure of the region. For these reasons, the Armutlu Network (ARNET), installed at the end of 2005, currently equipped with 27 active seismic stations, and operated by Kocaeli University Earth and Space Sciences Research Center (ESSRC) and Helmholtz-Zentrum Potsdam Deutsches GeoForschungsZentrum (GFZ), is a very dense network able to record even micro-earthquakes in this region. In the 30-day period of August 02 to 31, 2014, Kandilli Observatory and Earthquake Research Institute (KOERI) announced 120 local earthquakes with magnitudes between 0.7 and 4.1, but ARNET provided more than 1000 earthquakes for analysis in the same time period. In this study, earthquakes of the swarm area and neighbouring regions determined by ARNET were investigated. The focal mechanism of the August 03, 2014 22:22:42 (GMT) earthquake with local magnitude (Ml) 4.0 was obtained by moment tensor solution. According to the solution, it indicates normal faulting with a dextral component. The obtained focal mechanism solution is

  9. Historical earthquake research in Austria

    Science.gov (United States)

    Hammerl, Christa

    2017-12-01

    Austria has moderate seismicity, and on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. A severe earthquake with light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I0 > 8° EMS) occurs significantly less frequently; the average recurrence period is about 75 years. For this reason historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly along with the most important studies, and, as an example of a recently completed case study, one of the strongest past earthquakes in Austria, the earthquake of 17 July 1670, is presented. The research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest, but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.

  10. Commercializing government-sponsored innovations: Twelve successful buildings case studies

    Energy Technology Data Exchange (ETDEWEB)

    Brown, M.A.; Berry, L.G.; Goel, R.K.

    1989-01-01

    This report examines the commercialization and use of R&D results funded by DOE's Office of Buildings and Community Systems (OBCS), an office dedicated to improving the energy efficiency of the nation's buildings. Three goals guided the research described in this report: to improve understanding of the factors that hinder or facilitate the transfer of OBCS R&D results, to determine which technology transfer strategies are most effective and under what circumstances each is appropriate, and to document the market penetration and energy savings achieved by successfully commercialized innovations that have received OBCS support. Twelve successfully commercialized innovations are discussed here. The methodology employed involved a review of the literature, interviews with innovation program managers and industry personnel, and data collection from secondary sources. Six generic technology transfer strategies are also described. Of these, contracting R&D to industrial partners is found to be the most commonly used strategy in our case studies. The market penetration achieved to date by the innovations studied ranges from less than 1% to 100%. For the three innovations with the highest predicted levels of energy savings (i.e., the flame retention head oil burner, low-E windows, and solid-state ballasts), combined cumulative savings by the year 2000 are likely to approach 2 quads. To date the energy savings for these three innovations have been about 0.2 quads. Our case studies illustrate the important role federal agencies can play in commercializing new technologies. 27 refs., 21 figs., 4 tabs.

  11. Twelve massless flavors and three colors below the conformal window

    International Nuclear Information System (INIS)

    Fodor, Zoltan; Holland, Kieran; Kuti, Julius; Nogradi, Daniel; Schroeder, Chris

    2011-01-01

    We report new results for a frequently discussed gauge theory with twelve fermion flavors in the fundamental representation of the SU(3) color gauge group. The model, controversial with respect to its conformality, is important in non-perturbative studies searching for a viable composite Higgs mechanism beyond the Standard Model (BSM). In comparison with earlier work, our new simulations apply larger volumes and probe deeper in fermion and pion masses toward the chiral limit. Investigating the controversy, we subject the model to opposite hypotheses with respect to the conformal window. In the first hypothesis, below the conformal window, we test chiral symmetry breaking (χSB) with its Goldstone spectrum, Fπ, the χSB condensate, and several composite hadron states as analytic functions of the fermion mass when varied in a limited range with our best effort to control finite volume effects. In the second test, for the alternate hypothesis inside the conformal window, we probe conformal behavior driven by a single anomalous mass dimension under the assumption of unbroken chiral symmetry at vanishing fermion mass. Our results at fixed gauge coupling, based on the assumptions of the two hypotheses we define, show a low level of confidence in the conformal scenario with leading-order scaling analysis. Relaxing the important assumption of leading mass-deformed conformality with its conformal finite size scaling would require added theoretical understanding of the scaling violation terms in the conformal analysis and a comprehensive test of its effects on the confidence level of the fits. Results for the running coupling, based on the force between static sources, and preliminary indications for the finite temperature transition are also presented. Staggered lattice fermions with stout-suppressed taste breaking are used throughout the simulations.

  12. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
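    The binned-rate format lends itself to a likelihood test in which each bin's observed event count is compared to its forecast rate under an independent-Poisson assumption, the core idea behind the RELM consistency tests. A minimal sketch, with hypothetical forecast rates and observed counts:

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed earthquake counts given forecast
    rates, assuming an independent Poisson distribution in each bin:
    log L = sum_i ( -lambda_i + n_i * log(lambda_i) - log(n_i!) ).
    Rates must be strictly positive."""
    total = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        total += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return total

# Hypothetical forecast: expected event counts in four space-magnitude bins,
# and the counts actually observed in a test catalog.
rates = [0.5, 1.2, 0.1, 2.0]
observed = [1, 1, 0, 3]
print(poisson_log_likelihood(rates, observed))
```

In the RELM framework this joint log-likelihood is compared against its distribution under simulated catalogs drawn from the forecast itself, so a model is rejected only when the observed value is improbably low.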

  13. Engineering geological aspect of Gorkha Earthquake 2015, Nepal

    Science.gov (United States)

    Adhikari, Basanta Raj; Andermann, Christoff; Cook, Kristen

    2016-04-01

    Strong shaking by earthquakes causes massive landsliding with severe effects on infrastructure and human lives. The distribution of landslides and other hazards depends on the combination of earthquake and local characteristics, which influence the dynamic response of hillslopes. The Himalayas are one of the most active mountain belts, with several kilometers of relief, and are very prone to catastrophic mass failure. Strong and shallow earthquakes are very common and cause widespread collapse of hillslopes, increasing the background landslide rate by several orders of magnitude. The Himalaya has experienced many small and large earthquakes in the past, e.g. the Bihar-Nepal earthquake of 1934 (Ms 8.2), the large Kangra earthquake of 1905 (Ms 7.8), and the Gorkha earthquake of 2015 (Mw 7.8). The Mw 7.8 Gorkha earthquake occurred on and around the Main Himalayan Thrust at a hypocentral depth of 15 km (GEER 2015), followed by a Mw 7.3 aftershock near Kodari, causing 8700+ deaths and leaving hundreds of thousands homeless. Most of the 3000 aftershocks located by the National Seismological Center (NSC) within the first 45 days following the Gorkha earthquake are concentrated in a narrow 40 km-wide band at midcrustal to shallow depth along the strike of the southern slope of the high Himalaya (Adhikari et al. 2015), and the ground shaking was substantially lower in the short-period range than would be expected for an earthquake of this magnitude (Moss et al. 2015). The effects of this earthquake are distinctive in the affected areas, which show topographic effects, liquefaction and land subsidence. More than 5000 landslides were triggered by this earthquake (Earthquakes without Frontiers, 2015). Most of the landslides are shallow, occurred in weathered bedrock, and appear to have mobilized primarily as raveling failures, rock slides and rock falls. The majority of landslides are limited to a zone which runs east-west, approximately parallel to the Lesser and Higher Himalaya. There are numerous cracks in

  14. Listening to the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake

    Science.gov (United States)

    Peng, Zhigang; Aiken, Chastity; Kilb, Debi; Shelly, David R.; Enescu, Bogdan

    2012-01-01

    The magnitude 9.0 Tohoku-Oki, Japan, earthquake on 11 March 2011 is the largest earthquake to date in Japan’s modern history and is ranked as the fourth largest earthquake in the world since 1900. This earthquake occurred within the northeast Japan subduction zone (Figure 1), where the Pacific plate is subducting beneath the Okhotsk plate at a rate of ∼8–9 cm/yr (DeMets et al. 2010). This type of extremely large earthquake within a subduction zone is generally termed a “megathrust” earthquake. Strong shaking from this magnitude 9 earthquake engulfed the entire Japanese Islands, reaching a maximum acceleration ∼3 times that of gravity (3 g). Two days prior to the main event, a foreshock sequence occurred, including one earthquake of magnitude 7.2. Following the main event, numerous aftershocks occurred around the main slip region; the largest of these was magnitude 7.9. The entire foreshock-mainshock-aftershock sequence was well recorded by thousands of sensitive seismometers and geodetic instruments across Japan, resulting in the best-recorded megathrust earthquake in history. This devastating earthquake caused significant damage and a high death toll, primarily through the associated large tsunami. The tsunami reached heights of more than 30 m, and inundation propagated inland more than 5 km from the Pacific coast; it also caused a nuclear crisis that is still affecting people’s lives in certain regions of Japan.

  15. Testing for the ‘predictability’ of dynamically triggered earthquakes in Geysers Geothermal Field

    Science.gov (United States)

    Aiken, Chastity; Meng, Xiaofeng; Hardebeck, Jeanne L.

    2018-01-01

    The Geysers geothermal field is well known for being susceptible to dynamic triggering of earthquakes by large distant earthquakes, owing to the introduction of fluids for energy production. Yet, it is unknown whether dynamic triggering of earthquakes is ‘predictable’ or whether dynamic triggering could pose a potential hazard for energy production. In this paper, our goal is to investigate the characteristics of triggering and the physical conditions that promote triggering, to determine whether or not triggering is in any way foreseeable. We find that, at present, triggering in The Geysers is not easily ‘predictable’ in terms of when and where based on observable physical conditions. However, triggered earthquake magnitude positively correlates with peak imparted dynamic stress, and larger dynamic stresses tend to trigger sequences similar to mainshock–aftershock sequences. Thus, we may be able to ‘predict’ what size earthquakes to expect at The Geysers following a large distant earthquake.

  16. Testing for the 'predictability' of dynamically triggered earthquakes in The Geysers geothermal field

    Science.gov (United States)

    Aiken, Chastity; Meng, Xiaofeng; Hardebeck, Jeanne

    2018-03-01

    The Geysers geothermal field is well known for being susceptible to dynamic triggering of earthquakes by large distant earthquakes, owing to the introduction of fluids for energy production. Yet, it is unknown whether dynamic triggering of earthquakes is 'predictable' or whether dynamic triggering could pose a potential hazard for energy production. In this paper, our goal is to investigate the characteristics of triggering and the physical conditions that promote triggering, to determine whether or not triggering is in any way foreseeable. We find that, at present, triggering in The Geysers is not easily 'predictable' in terms of when and where based on observable physical conditions. However, triggered earthquake magnitude positively correlates with peak imparted dynamic stress, and larger dynamic stresses tend to trigger sequences similar to mainshock-aftershock sequences. Thus, we may be able to 'predict' what size earthquakes to expect at The Geysers following a large distant earthquake.
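    The correlation noted above rests on estimating the peak dynamic stress imparted by a passing seismic wave. A common plane-wave back-of-the-envelope estimate is sigma ≈ G·v/c (shear modulus times peak ground velocity divided by phase velocity); the default G and c below are generic crustal values assumed for illustration, not site-specific parameters from the study:

```python
def peak_dynamic_stress(pgv_m_s, shear_modulus_pa=30e9, phase_velocity_m_s=3500.0):
    """Rough plane-wave estimate of peak dynamic stress (Pa) imparted by a
    passing seismic wave: sigma ~ G * v / c, where v is peak ground velocity,
    G the shear modulus, and c the phase velocity. Defaults are generic
    crustal values (assumptions), not values for The Geysers."""
    return shear_modulus_pa * pgv_m_s / phase_velocity_m_s

# Example: a 1 cm/s surface-wave PGV from a distant mainshock
stress_kpa = peak_dynamic_stress(0.01) / 1e3
print(round(stress_kpa, 1))  # ≈ 85.7 kPa
```

The estimate scales linearly with PGV, which is why triggered-seismicity studies often use recorded PGV at the site as a direct proxy for the imparted dynamic stress.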

  17. Identified EM Earthquake Precursors

    Science.gov (United States)

    Jones, Kenneth, II; Saxton, Patrick

    2014-05-01

    Many attempts have been made to determine a sound forecasting method for earthquakes and, in turn, warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate for creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating, long-wave generation with sufficient amplitude, and for the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February 2013. The antennae have mobility and observations were noted for

  18. Pain after earthquake

    Directory of Open Access Journals (Sweden)

    Angeletti Chiara

    2012-06-01

    Full Text Available Introduction On 6 April 2009, at 03:32 local time, an Mw 6.3 earthquake hit the Abruzzi region of central Italy, causing widespread damage in the city of L'Aquila and its nearby villages. The earthquake caused 308 casualties and over 1,500 injuries, displaced more than 25,000 people and induced significant damage to more than 10,000 buildings in the L'Aquila region. Objectives This observational retrospective study evaluated the prevalence and drug treatment of pain in the five weeks following the L'Aquila earthquake (April 6, 2009). Methods 958 triage documents were analysed for patients' pain severity, pain type, and treatment efficacy. Results A third of patients reported pain, with a prevalence of 34.6%. More than half of the pain patients reported severe pain (58.8%). Analgesic agents were limited to available drugs: anti-inflammatory agents, paracetamol, and weak opioids. Reduction in verbal numerical pain scores within the first 24 hours after treatment was achieved with the medications at hand. Pain prevalence and characterization exhibited a biphasic pattern, with acute pain syndromes owing to trauma occurring in the first 15 days after the earthquake; traumatic pain then decreased and re-surged at around week five, owing to rebuilding efforts. In the second through fourth weeks, reports of pain occurred mainly owing to relapses of chronic conditions. Conclusions This study indicates that pain is prevalent during natural disasters, may exhibit a discernible pattern over the weeks following the event, and current drug treatments in this region may be adequate for emergency situations.

  19. Smartphone-Based Earthquake and Tsunami Early Warning in Chile

    Science.gov (United States)

    Brooks, B. A.; Baez, J. C.; Ericksen, T.; Barrientos, S. E.; Minson, S. E.; Duncan, C.; Guillemot, C.; Smith, D.; Boese, M.; Cochran, E. S.; Murray, J. R.; Langbein, J. O.; Glennie, C. L.; Dueitt, J.; Parra, H.

    2016-12-01

    Many locations around the world face high seismic hazard but do not have the resources required to establish traditional earthquake and tsunami early warning systems (E/TEW) that utilize scientific-grade seismological sensors. MEMS accelerometers and GPS chips embedded in, or added inexpensively to, smartphones are sensitive enough to provide robust E/TEW if they are deployed in sufficient numbers. We report on a pilot project in Chile, one of the most productive earthquake regions worldwide. There, magnitude 7.5+ earthquakes occurring roughly every 1.5 years and larger tsunamigenic events pose significant local and trans-Pacific hazard. The smartphone-based network described here is being deployed in parallel to the build-out of a scientific-grade network for E/TEW. Our sensor package comprises a smartphone with internal MEMS and an external GPS chipset that provides satellite-based augmented positioning and phase-smoothing. Each station is independent of local infrastructure: stations are solar-powered and rely on cellular SIM cards for communications. An Android app performs initial onboard processing and transmits both accelerometer and GPS data to a server employing the FinDer-BEFORES algorithm to detect earthquakes, producing an acceleration-based line-source model for smaller-magnitude earthquakes or a joint seismic-geodetic finite-fault distributed slip model for sufficiently large-magnitude earthquakes. Either source model provides accurate ground shaking forecasts, while distributed slip models for larger offshore earthquakes can be used to infer seafloor deformation for local tsunami warning. The network will comprise 50 stations by Sept. 2016 and 100 stations by Dec. 2016. Since Nov. 2015, batch processing has detected, located, and estimated the magnitude for Mw>5 earthquakes. Operational since June 2016, the system has successfully detected two earthquakes > M5 (M5.5, M5.1) that occurred within 100 km of our network while producing zero false alarms.
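    As a rough illustration of the kind of onboard event detection such an app might perform (the actual system uses the FinDer-BEFORES algorithm, whose internals are not shown here), a classic short-term/long-term average (STA/LTA) ratio trigger on an acceleration record can be sketched as:

```python
def sta_lta_trigger(samples, sta_len=50, lta_len=500, threshold=4.0):
    """Generic STA/LTA detector: flag sample indices where the short-term
    average amplitude exceeds `threshold` times the long-term average.
    Illustrative stand-in only; NOT the FinDer-BEFORES algorithm."""
    triggers = []
    for i in range(lta_len, len(samples)):
        sta = sum(abs(x) for x in samples[i - sta_len:i]) / sta_len
        lta = sum(abs(x) for x in samples[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers

# Synthetic record: low-amplitude noise followed by a strong arrival.
record = [0.001] * 1000 + [0.05] * 100
print(sta_lta_trigger(record)[:1])  # first trigger shortly after sample 1000
```

A production detector would use running sums and tuned window lengths per sample rate; the quadratic loop here simply keeps the logic explicit.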

  20. Extending the ISC-GEM Global Earthquake Instrumental Catalogue

    Science.gov (United States)

    Di Giacomo, Domenico; Engdhal, Bob; Storchak, Dmitry; Villaseñor, Antonio; Harris, James

    2015-04-01

    After a 27-month project funded by the GEM Foundation (www.globalquakemodel.org), in January 2013 we released the ISC-GEM Global Instrumental Earthquake Catalogue (1900-2009) (www.isc.ac.uk/iscgem/index.php) as a special product for use in seismic hazard studies. The new catalogue was necessary because improved seismic hazard studies require earthquake catalogues that are homogeneous (to the largest extent possible) over time in their fundamental parameters, such as location and magnitude. Due to time and resource limitations, the ISC-GEM catalogue (1900-2009) included earthquakes selected according to the following time-variable cut-off magnitudes: Ms=7.5 for earthquakes occurring before 1918; Ms=6.25 between 1918 and 1963; and Ms=5.5 from 1964 onwards. Because of the importance of having a reliable seismic input for seismic hazard studies, funding from GEM and two commercial companies in the US and UK allowed us to start working on the extension of the ISC-GEM catalogue, both for earthquakes that occurred beyond 2009 and for earthquakes listed in the International Seismological Summary (ISS) which fell below the cut-off magnitude of 6.25. This extension is part of a four-year program that aims at including in the ISC-GEM catalogue large global earthquakes that occurred before the beginning of the ISC Bulletin in 1964. In this contribution we present the updated ISC-GEM catalogue, which will include over 1000 more earthquakes that occurred in 2010-2011 and several hundred more between 1950 and 1959. The catalogue extension between 1935 and 1949 is currently underway. The extension of the ISC-GEM catalogue will also be helpful for regional cross-border seismic hazard studies, as the ISC-GEM catalogue should be used as a basis for cross-checking the consistency in location and magnitude of those earthquakes listed in both the ISC-GEM global catalogue and regional catalogues.
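    The time-variable cut-off magnitudes quoted above amount to a simple selection rule over (year, Ms) pairs; a minimal sketch, with a hypothetical event list:

```python
# Time-variable cut-off magnitudes stated for the 1900-2009 ISC-GEM
# catalogue; the event list below is hypothetical, for illustration only.
CUTOFFS = [
    (1900, 1917, 7.5),   # Ms >= 7.5 before 1918
    (1918, 1963, 6.25),  # Ms >= 6.25 between 1918 and 1963
    (1964, 2009, 5.5),   # Ms >= 5.5 from 1964 onwards
]

def passes_cutoff(year, ms):
    """True if an earthquake of surface-wave magnitude `ms` in `year`
    meets the catalogue's time-variable selection threshold."""
    for start, end, cutoff in CUTOFFS:
        if start <= year <= end:
            return ms >= cutoff
    return False  # outside the 1900-2009 window of this rule

events = [(1910, 7.0), (1910, 7.6), (1950, 6.3), (1970, 5.6)]
selected = [e for e in events if passes_cutoff(*e)]
print(selected)  # [(1910, 7.6), (1950, 6.3), (1970, 5.6)]
```

The same rule, with the thresholds lowered for the pre-1964 ISS entries, is what the extension work described above relaxes.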

  1. Statistical aspects and risks of human-caused earthquakes

    Science.gov (United States)

    Klose, C. D.

    2013-12-01

    The seismological community invests ample human capital and financial resources to research and predict risks associated with earthquakes. Industries such as the insurance and re-insurance sector are equally interested in using probabilistic risk models developed by the scientific community to transfer risks. These models are used to predict expected losses due to naturally occurring earthquakes. But what about the risks associated with human-caused earthquakes? Such risk models are largely absent from both industry and academic discourse. In countries around the world, informed citizens are becoming increasingly aware and concerned that this economic bias is not sustainable for long-term economic growth and environmental and human security. Ultimately, citizens look to their government officials to hold industry accountable. In the Netherlands, for example, the hydrocarbon industry is held accountable for causing earthquakes near Groningen. In Switzerland, geothermal power plants were shut down or suspended because they caused earthquakes in the cantons of Basel and St. Gallen. The public and the private non-extractive industry need access to information about earthquake risks in connection with sub/urban geoengineering activities, including natural gas production through fracking, geothermal energy production, carbon sequestration, mining and water irrigation. This presentation illuminates statistical aspects of human-caused earthquakes with respect to different geologic environments. Statistical findings are based on the first catalog of human-caused earthquakes (Klose 2013). Findings are discussed, including the odds of dying during a medium-size earthquake that is set off by geomechanical pollution. Any kind of geoengineering activity causes this type of pollution and increases the likelihood of triggering nearby faults to rupture.

  2. The Manchester earthquake swarm of October 2002

    Science.gov (United States)

    Baptie, B.; Ottemoeller, L.

    2003-04-01

    An earthquake sequence started in the Greater Manchester area of the United Kingdom on October 19, 2002. This has continued to the time of writing and has consisted of more than 100 discrete earthquakes. Due to the urban location, these were experienced by a large number of people. Three temporary seismograph stations were installed to supplement existing permanent stations and to better understand the relationship between the seismicity and local geology. The largest event, on October 21, had a magnitude ML 3.9. The activity appears to be an earthquake swarm, since there is no clear distinction between a main shock and aftershocks. However, most of the energy during the sequence was actually released in two earthquakes separated by a few seconds in time, on October 21 at 11:42. Other examples of swarm activity in the UK include Comrie (1788-1801, 1839-46), Glenalmond (1970-72), Doune (1997) and Blackford (1997-98, 2000-01) in central Scotland, Constantine (1981, 1986, 1992-94) in Cornwall, and Johnstonbridge (mid-1980s) and Dumfries (1991, 1999). The clustering of these events in time and space does suggest a causal relationship between the events of the sequence. Joint hypocenter determination was used to simultaneously locate the swarm earthquakes, determine station corrections and improve the relative locations. It seems likely that all events in the sequence originate from a relatively small source volume. This is supported by the similarities in source mechanism and waveform signals between the various events. Focal depths were found to be very shallow, of the order of 2-3 km. Source mechanisms determined for the largest of the events show strike-slip solutions along either northeast-southwest or northwest-southeast striking fault planes. The surface expression of faults in the epicentral area is generally northwest-southeast, suggesting that this is the more likely fault plane.

  3. Housing Damage Following Earthquake

    Science.gov (United States)

    1989-01-01

    An automobile lies crushed under the third story of this apartment building in the Marina District after the Oct. 17, 1989, Loma Prieta earthquake. The ground levels are no longer visible because of structural failure and sinking due to liquefaction. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions, such as earthquakes, or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: J.K. Nakata, U.S. Geological Survey.

  4. Seismic design technology for breeder reactor structures. Volume 1. Special topics in earthquake ground motion

    International Nuclear Information System (INIS)

    Reddy, D.P.

    1983-04-01

    This report is divided into twelve chapters: seismic hazard analysis procedures, statistical and probabilistic considerations, vertical ground motion characteristics, vertical ground response spectrum shapes, effects of inclined rock strata on site response, correlation of ground response spectra with intensity, intensity attenuation relationships, peak ground acceleration in the very near field, statistical analysis of response spectral amplitudes, contributions of body and surface waves, evaluation of ground motion characteristics, and design earthquake motions.

  5. Do Earthquakes Shake Stock Markets?

    Science.gov (United States)

    Ferreira, Susana; Karali, Berna

    2015-01-01

    This paper examines how major earthquakes affected the returns and volatility of aggregate stock market indices in thirty-five financial markets over the last twenty years. Results show that global financial markets are resilient to shocks caused by earthquakes even if these are domestic. Our analysis reveals that, in a few instances, some macroeconomic variables and earthquake characteristics (gross domestic product per capita, trade openness, bilateral trade flows, earthquake magnitude, a tsunami indicator, distance to the epicenter, and number of fatalities) mediate the impact of earthquakes on stock market returns, resulting in a zero net effect. However, the influence of these variables is market-specific, indicating no systematic pattern across global capital markets. Results also demonstrate that stock market volatility is unaffected by earthquakes, except for Japan.
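
    The event-study logic behind such results can be sketched with a minimal market model: estimate a stock's normal relationship to the market on a pre-event window, then measure abnormal returns around the earthquake date. This is an illustrative sketch on synthetic data, not the authors' actual specification; the function name, window lengths, and parameters are assumptions.

```python
import numpy as np

def abnormal_returns(stock, market, event_idx, est_win=100, evt_win=5):
    """Market-model event study: fit alpha/beta on a pre-event estimation
    window, then measure abnormal returns (AR) in the event window."""
    est = slice(event_idx - est_win - evt_win, event_idx - evt_win)
    beta, alpha = np.polyfit(market[est], stock[est], 1)
    evt = slice(event_idx - evt_win, event_idx + evt_win + 1)
    ar = stock[evt] - (alpha + beta * market[evt])
    return ar, float(ar.sum())  # ARs and their cumulative sum (CAR)

# Synthetic daily returns: the stock tracks the market with beta ~1.2.
rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 300)
stock = 0.0002 + 1.2 * market + rng.normal(0.0, 0.005, 300)
ar, car = abnormal_returns(stock, market, event_idx=250)
```

    A market that is resilient to the shock would show a cumulative abnormal return statistically indistinguishable from zero, which is essentially the null result reported above.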

  6. Earthquake engineering for nuclear facilities

    CERN Document Server

    Kuno, Michiya

    2017-01-01

    This book is a comprehensive compilation of earthquake- and tsunami-related technologies and knowledge for the design and construction of nuclear facilities. As such, it covers a wide range of fields including civil engineering, architecture, geotechnical engineering, mechanical engineering, and nuclear engineering, for the development of new technologies providing greater resistance against earthquakes and tsunamis. It is crucial both for students of nuclear energy courses and for young engineers in nuclear power generation industries to understand the basics and principles of earthquake- and tsunami-resistant design of nuclear facilities. In Part I, "Seismic Design of Nuclear Power Plants", the design of nuclear power plants to withstand earthquakes and tsunamis is explained, focusing on buildings, equipment, and civil engineering structures. In Part II, "Basics of Earthquake Engineering", fundamental knowledge of earthquakes and tsunamis as well as the dynamic response of structures and foundation ground...

  7. Seismogeodesy for rapid earthquake and tsunami characterization

    Science.gov (United States)

    Bock, Y.

    2016-12-01

    dozens of seismogeodetic stations available through the Pacific Northwest Seismic Network (University of Washington), the Plate Boundary Observatory (UNAVCO) and the Pacific Northwest Geodetic Array (Central Washington University) as the basis for local tsunami warnings for a large subduction zone earthquake in Cascadia.

  8. Necessity of management for minor earthquake to improve public acceptance of nuclear energy in South Korea

    Directory of Open Access Journals (Sweden)

    Hyun-Tae Choi

    2018-04-01

    Full Text Available As public acceptance of nuclear energy in Korea has worsened due to the Fukushima accident and the earthquakes that occurred in the Gyeongju area near the Wolsong nuclear power plant (NPP), estimating the effects of earthquakes has become more essential for the nuclear industry. Currently, most countermeasures against earthquakes are limited to large-scale disasters; minor-scale earthquakes used to be ignored. Even though people do not feel the shaking due to minor earthquakes, and minor earthquakes incur little damage to NPPs, they can change environmental conditions, for instance the underground water level and the conductivity of the groundwater. This study conducted a questionnaire survey of residents living in the vicinity of an NPP to determine their perception and acceptance of plant safety against minor earthquakes. The results show that the residents regard earthquakes at levels that can be felt by people, but that incur little damage to NPPs, as minor earthquakes (magnitude 2.0–3.9), and set this level as a standard for countermeasures. Even if a minor earthquake has little impact on the safety of an NPP, there is still a possibility that public opinion will worsen. This study provides analysis results about problems with the earthquake measures of Korean NPPs and specific factors that can bring about a deterioration of public acceptance. Based on these data, this article suggests that active management of minor earthquakes is necessary for the sustainability of nuclear energy. Keywords: Earthquake Measures, Management, Minor Earthquake, Nuclear Energy, Public Acceptance

  9. Scaling and spatial complementarity of tectonic earthquake swarms

    KAUST Repository

    Passarelli, Luigi

    2017-11-10

    Tectonic earthquake swarms (TES) often coincide with aseismic slip and sometimes precede damaging earthquakes. In spite of recent progress in understanding the significance and properties of TES at plate boundaries, their mechanics and scaling are still largely uncertain. Here we evaluate several TES that occurred during the past 20 years on a transform plate boundary in North Iceland. We show that the swarms complement each other spatially with later swarms discouraged from fault segments activated by earlier swarms, which suggests efficient strain release and aseismic slip. The fault area illuminated by earthquakes during swarms may be more representative of the total moment release than the cumulative moment of the swarm earthquakes. We use these findings and other published results from a variety of tectonic settings to discuss general scaling properties for TES. The results indicate that the importance of TES in releasing tectonic strain at plate boundaries may have been underestimated.
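
    The comparison between the cumulative moment of swarm earthquakes and the moment implied by the illuminated fault area rests on the standard moment-magnitude relation M0 = 10^(1.5 Mw + 9.1) (in N·m). A minimal sketch with a hypothetical swarm catalog (the magnitudes below are invented for illustration):

```python
import math

def moment_Nm(mw):
    """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def mw_from_moment(m0):
    """Inverse relation: equivalent moment magnitude of a given moment."""
    return (math.log10(m0) - 9.1) / 1.5

# Hypothetical swarm: many moderate events sum to a modest equivalent Mw,
# typically far below the moment implied by the fault area they illuminate.
swarm = [4.1, 4.3, 3.8, 4.5, 4.0]
m0_total = sum(moment_Nm(m) for m in swarm)
mw_equiv = mw_from_moment(m0_total)  # ~4.7 for this catalog
```

    Because the sum is dominated by the largest event, a swarm's cumulative seismic moment can badly underestimate the total (largely aseismic) strain release, which is the point the abstract makes.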

  10. Earthquake resistant design of structures

    International Nuclear Information System (INIS)

    Choi, Chang Geun; Kim, Gyu Seok; Lee, Dong Geun

    1990-02-01

    This book covers the occurrence of earthquakes and the analysis of earthquake damage; the equivalent static analysis method and its application; dynamic analysis methods such as time-history analysis by mode superposition and by direct integration; and design spectrum analysis for earthquake-resistant design in Korea, including the analysis model and vibration modes, calculation of base shear, calculation of story seismic loads, and combination of analysis results.

  11. Fault Branching and Long-Term Earthquake Rupture Scenario for Strike-Slip Earthquake

    Science.gov (United States)

    Klinger, Y.; CHOI, J. H.; Vallage, A.

    2017-12-01

    Careful examination of surface rupture for large continental strike-slip earthquakes reveals that, for the majority of earthquakes, at least one major branch is involved in the rupture pattern. Often, branching might be either related to the location of the epicenter or located toward the end of the rupture, and possibly related to the stopping of the rupture. In this work, we examine large continental earthquakes that show significant branches at different scales and for which ground surface rupture has been mapped in great detail. In each case, rupture conditions are described, including dynamic parameters, past earthquake history, and regional stress orientation, to see if the dynamic stress field would a priori favor branching. In one case we show that rupture propagation and branching are directly impacted by preexisting geological structures. These structures serve as pathways for the rupture attempting to propagate out of its shear plane. At larger scale, we show that in some cases, rupturing a branch might be systematic, hampering possibilities for the development of a larger seismic rupture. Long-term geomorphology hints at the existence of a strong asperity in the zone where the rupture branched off the main fault. There, no evidence of throughgoing rupture could be seen along the main fault, while the branch is well connected to the main fault. This set of observations suggests that for specific configurations, some rupture scenarios involving systematic branching are more likely than others.

  12. The severity of an earthquake

    Science.gov (United States)

    ,

    1997-01-01

    The severity of an earthquake can be expressed in terms of both intensity and magnitude. However, the two terms are quite different, and they are often confused. Intensity is based on the observed effects of ground shaking on people, buildings, and natural features. It varies from place to place within the disturbed region depending on the location of the observer with respect to the earthquake epicenter. Magnitude is related to the amount of seismic energy released at the hypocenter of the earthquake. It is based on the amplitude of the earthquake waves recorded on instruments.
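
    Because magnitude scales are logarithmic, a small difference in magnitude is a large difference in energy. One common rule of thumb is the Gutenberg-Richter energy-magnitude relation log10(E) = 1.5 Ms + 4.8 (E in joules), under which each unit of magnitude corresponds to roughly a 31.6-fold increase in radiated energy:

```python
def radiated_energy_joules(ms):
    """Gutenberg-Richter energy-magnitude relation: log10(E) = 1.5*Ms + 4.8."""
    return 10 ** (1.5 * ms + 4.8)

# One unit of magnitude is a factor of 10**1.5 ~ 31.6 in radiated energy:
ratio = radiated_energy_joules(7.0) / radiated_energy_joules(6.0)
```

    Intensity, by contrast, has no such formula: it is assigned from observed effects (for example on the modified Mercalli scale) and varies with distance from the epicenter.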

  13. Data base and seismicity studies for Fagaras, Romania crustal earthquakes

    International Nuclear Information System (INIS)

    Moldovan, I.-A.; Enescu, B. D.; Pantea, A.; Constantin, A.; Bazacliu, O.; Malita, Z.; Moldoveanu, T.

    2002-01-01

    Besides the major impact of the Vrancea seismic region, one of the most important intermediate-depth earthquake sources of Europe, the Romanian crustal earthquake sources, from the Fagaras, Banat, Crisana, Bucovina or Dobrogea regions, have to be taken into consideration for seismicity studies or seismic hazard assessment. To determine the characteristics of the seismicity of the Fagaras seismogenic region, a revised and updated catalogue of Romanian earthquakes, recently compiled by Oncescu et al. (1999), is used. The catalogue contains 471 tectonic earthquakes and 338 induced earthquakes and is homogeneous starting with 1471 for I>VIII and starting with 1801 for I>VII. The catalogue is complete for magnitudes larger than 3 starting with 1982. In the studied zone only normal earthquakes occur, related to intracrustal fractures situated at 5 to 30 km depth. Most of them are of low energy, but once in a century a large destructive event occurs, with epicentral intensity larger than VIII. The maximum expected magnitude is M GR = 6.5, and the epicenter distribution outlines significant clustering in the zones and on the lines mentioned in the tectonic studies. Taking into account the date of the last major earthquake (1916) and return periods of severe damaging shocks of over 85 years, a large shock is to be expected in the area very soon. That is why a seismicity and hazard study for this zone is necessary. The paper studies the variation of the b parameter (mean value 0.69), the activity value and the return periods, and plots seismicity maps and various histograms. Explosions due to the Campulung quarry are excluded from the catalogue. Because the catalogue contains the aftershocks of the 1916 earthquake, these shocks were also excluded for the seismicity studies. (authors)
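
    The b parameter above is the slope of the Gutenberg-Richter frequency-magnitude relation, commonly estimated with Aki's (1965) maximum-likelihood formula. The sketch below uses a synthetic catalog (the Fagaras data are not reproduced here) generated with b = 0.69, the mean value quoted in the abstract:

```python
import math
import random

def b_value_mle(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965; Utsu's dm/2 correction for
    binned magnitudes). Uses only events with M >= completeness magnitude Mc."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2))

# Synthetic Gutenberg-Richter catalog with b = 0.69, complete above Mc = 3.0.
random.seed(1)
b_true, mc = 0.69, 3.0
beta = b_true * math.log(10)
catalog = [mc + random.expovariate(beta) for _ in range(5000)]
b_est = b_value_mle(catalog, mc, dm=0.0)  # dm=0 for continuous magnitudes
```

    With the b and a values in hand, the return period of a magnitude-M event follows from the annual rate 10^(a - bM), which is how return periods like the 85-year figure above are typically derived.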

  14. Seismotectonics of the Nicobar Swarm and the geodynamic implications for the 2004 Great Sumatran Earthquake

    Science.gov (United States)

    Lister, Gordon

    2017-04-01

    The Great Sumatran Earthquake took place on 26th December 2004. One month into the aftershock sequence, a dense swarm of earthquakes took place beneath the Andaman Sea, northeast of the Nicobar Islands. The swarm continued for ~11 days, rapidly decreasing in intensity towards the end of that period. Unlike most earthquake swarms, the Nicobar cluster was characterised by a large number of shocks with moment magnitude exceeding five. This meant that centroid moment tensor data could be determined, and these data in turn allow geometric analysis of inferred fault plane motions. The classification obtained using the program eQuakes shows aftershocks falling into distinct spatial groups. Thrusts dominate in the south (in the Sumatran domain), and normal faults dominate in the north (in the Andaman domain). Strike-slip faults are more evenly spread; they occur on the Sumatran wrench system, for example, but also on the Indian plate itself. Orientation groups readily emerge from such an analysis. Temporal variation in behaviour is immediately evident, changing after ~12 months. Orientation groups in the first twelve months are consistent with margin-perpendicular extension beneath the Andaman Sea (i.e. mode II megathrust behaviour), whereas afterward the pattern of deformation appears to have reverted to that expected in consequence of relative plate motion. In the first twelve months, strike-slip motion appears to have taken place on faults that are sub-parallel to spreading segments in the Andaman Sea. By early 2006, however, normal fault clusters formed that showed ~N-S extension across these spreading segments had resumed, while the overall density of aftershocks in the Andaman segment had considerably diminished. Throughout this entire period the Sumatran segment exhibited aftershock sequences consistent with ongoing mode I megathrust behaviour. The Nicobar Swarm marks the transition from one sort of slab dynamics to the other. The earthquake swarm may have

  15. Has El Salvador Fault Zone produced M ≥ 7.0 earthquakes? The 1719 El Salvador earthquake

    Science.gov (United States)

    Canora, C.; Martínez-Díaz, J.; Álvarez-Gómez, J.; Villamor, P.; Ínsua-Arévalo, J.; Alonso-Henar, J.; Capote, R.

    2013-05-01

    Historically, large earthquakes, Mw ≥ 7.0, in the El Salvador area have been attributed to activity in the Cocos-Caribbean subduction zone. This is correct for most of the earthquakes of magnitude greater than 6.5. However, recent paleoseismic evidence points to the existence of large earthquakes associated with rupture of the El Salvador Fault Zone, an E-W oriented strike-slip fault system that extends for 150 km through central El Salvador. To calibrate our results from paleoseismic studies, we have analyzed the historical seismicity of the area. In particular, we suggest that the 1719 earthquake can be associated with paleoseismic activity evidenced in the El Salvador Fault Zone. A reinterpreted isoseismal map for this event suggests that the damage reported could have been a consequence of the rupture of the El Salvador Fault Zone, rather than rupture of the subduction zone. The isoseismal pattern is not different from those of other upper-crustal earthquakes in similar tectonovolcanic environments. We thus challenge the traditional assumption that only the subduction zone is capable of generating earthquakes of magnitude greater than 7.0 in this region. This result has broad implications for future risk management in the region. The potential occurrence of strong ground motion, significantly higher and closer to the Salvadorian population than assumed to date, must be considered in seismic hazard assessment studies in this area.

  16. Design and Control of a Twelve-Bar Tensegrity Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — A tensegrity (tensional integrity) robot is a lightweight, compliant system consisting of rods suspended in a network of cables. It can absorb large loads on impact,...

  17. Earthquake Loss Scenarios: Warnings about the Extent of Disasters

    Science.gov (United States)

    Wyss, M.; Tolis, S.; Rosset, P.

    2016-12-01

    It is imperative that losses expected due to future earthquakes be estimated. Officials and the public need to be aware of what disaster is likely in store for them in order to reduce fatalities and efficiently help the injured. Scenarios for earthquake parameters can be constructed to reasonable accuracy in highly active earthquake belts, based on knowledge of seismotectonics and history. Because of the inherent uncertainties of loss estimates, however, it would be desirable that more than one group calculate an estimate for the same area. By discussing these estimates, one may find a consensus on the range of the potential disasters and persuade officials and residents of the reality of the earthquake threat. To model a scenario and estimate earthquake losses requires data sets of sufficient accuracy on the number of people present, the built environment, and, if possible, the transmission of seismic waves. As examples we use loss estimates for possible repeats of historic earthquakes in Greece that occurred between -464 and 700. We model future large Greek earthquakes as having M6.8 and rupture lengths of 60 km. In four locations where historic earthquakes with serious losses have occurred, we estimate that 1,000 to 1,500 people might perish, with roughly four times as many injured. Defining the area of influence of these earthquakes as that with shaking intensities greater than or equal to V, we estimate that 1.0 to 2.2 million people in about 2,000 settlements may be affected. We calibrate the QLARM tool for calculating intensities and losses in Greece using the M6 1999 Athens earthquake and matching the isoseismal information for six earthquakes which occurred in Greece during the last 140 years. Comparing fatality numbers that would occur theoretically today with the numbers reported, and correcting for the increase in population, we estimate that the improvement of the building stock has reduced the mortality and injury rate in Greek

  18. The Kresna earthquake of 1904 in Bulgaria

    Energy Technology Data Exchange (ETDEWEB)

    Ambraseys, N. [Imperial College of Science, London (United Kingdom). Technology and Medicine, Dept. of Civil Engineering

    2001-02-01

    The Kresna earthquake in 1904 in Bulgaria is one of the largest shallow 20th century events on land in the Balkans. This event, which was preceded by a large foreshock, has hitherto been assigned a range of magnitudes up to M{sub s} = 7.8, but the reappraisal of instrumental data yields a much smaller value of M{sub s} = 7.2, and a re-assessment of the intensity distribution suggests 7.1. Thus both instrumental and macroseismic data appear consistent with a magnitude which is also compatible with the fault segmentation and local morphology of the region, which cannot accommodate shallow events much larger than about 7.0. The relatively large size of the main shock suggests surface faulting, but the available field evidence is insufficient to establish the dimensions, attitude and amount of dislocation, except perhaps in the vicinity of Krupnik. This downsizing of the Kresna earthquake has important consequences for tectonics and earthquake hazard estimates in the Balkans.

  19. The Kresna earthquake of 1904 in Bulgaria

    Directory of Open Access Journals (Sweden)

    N. N. Ambraseys

    2001-06-01

    Full Text Available The Kresna earthquake in 1904 in Bulgaria is one of the largest shallow 20th century events on land in the Balkans. This event, which was preceded by a large foreshock, has hitherto been assigned a range of magnitudes up to M S = 7.8, but the reappraisal of instrumental data yields a much smaller value of M S = 7.2, and a re-assessment of the intensity distribution suggests 7.1. Thus both instrumental and macroseismic data appear consistent with a magnitude which is also compatible with the fault segmentation and local morphology of the region, which cannot accommodate shallow events much larger than about 7.0. The relatively large size of the main shock suggests surface faulting, but the available field evidence is insufficient to establish the dimensions, attitude and amount of dislocation, except perhaps in the vicinity of Krupnik. This downsizing of the Kresna earthquake has important consequences for tectonics and earthquake hazard estimates in the Balkans.

  20. Earthquake Recurrence and the Resolution Potential of Tectono‐Geomorphic Records

    KAUST Repository

    Zielke, Olaf

    2018-04-17

    A long-standing debate in active tectonics addresses how slip is accumulated through space and time along a given fault or fault section. This debate is in part still ongoing because of the lack of sufficiently long instrumental data that may constrain the recurrence characteristics of surface-rupturing earthquakes along individual faults. Geomorphic and stratigraphic records are used instead to constrain this behavior. Although geomorphic data frequently indicate slip accumulation via quasicharacteristic same-size offset increments, stratigraphic data indicate that earthquake timing follows a quasirandom distribution. Assuming that both observations are valid within their respective frameworks, I want to address here which recurrence model is able to reproduce this seemingly contradictory behavior. I further want to address how aleatory offset variability and epistemic measurement uncertainty affect our ability to resolve single-earthquake surface slip and along-fault slip-accumulation patterns. I use a statistical model that samples probability density functions (PDFs) for geomorphic marker formation (storm events), marker displacement (surface-rupturing earthquakes), and offset measurement, generating tectono-geomorphic catalogs to investigate which PDF combination consistently reproduces the above-mentioned field observations. Doing so, I find that neither a purely characteristic earthquake (CE) nor a Gutenberg-Richter (GR) earthquake recurrence model is able to consistently reproduce those field observations. A combination of both, however, with moderate-size earthquakes following the GR model and large earthquakes following the CE model, is able to reproduce quasirandom earthquake recurrence times while simultaneously generating quasicharacteristic geomorphic offset increments. Along-fault slip accumulation is dominated by, but not exclusively linked to, the occurrence of similar-size large earthquakes. Further, the resolution
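
    The PDF-sampling approach described can be illustrated with a toy catalog generator: quasirandom (here exponential) recurrence times combined with quasicharacteristic (here normal) per-event slip, plus Gaussian measurement noise on the cumulative offsets. All distributions and parameter values below are assumptions for illustration, not those used in the study:

```python
import random

def simulate_offsets(n_eq=20, mean_slip=2.0, slip_cv=0.15, meas_sigma=0.3, seed=0):
    """Sketch of a tectono-geomorphic catalog: quasirandom (exponential)
    recurrence times, quasicharacteristic (normal) per-event slip, and
    Gaussian measurement noise on the cumulative offset of each marker."""
    rng = random.Random(seed)
    t, cum = 0.0, 0.0
    catalog = []
    for _ in range(n_eq):
        t += rng.expovariate(1 / 250.0)                   # inter-event time, mean 250 yr
        cum += rng.gauss(mean_slip, slip_cv * mean_slip)  # characteristic slip
        measured = cum + rng.gauss(0.0, meas_sigma)       # offset measurement
        catalog.append((t, measured))
    return catalog
```

    Repeating such simulations for different recurrence-time and slip PDFs, and comparing the resulting offset increments and event timing against field data, is the essence of the experiment described above.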

  1. Quantitative Earthquake Prediction on Global and Regional Scales

    International Nuclear Information System (INIS)

    Kossobokov, Vladimir G.

    2006-01-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of its trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  2. Quantitative Earthquake Prediction on Global and Regional Scales

    Science.gov (United States)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of its trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  3. Home seismometer for earthquake early warning

    Science.gov (United States)

    Horiuchi, Shigeki; Horiuchi, Yuko; Yamamoto, Shunroku; Nakamura, Hiromitsu; Wu, Changjiang; Rydelek, Paul A.; Kachi, Masaaki

    2009-02-01

    The Japan Meteorological Agency (JMA) has started the practical service of Earthquake Early Warning (EEW) and a very dense deployment of receiving units is expected in the near future. The receiving/alarm unit of an EEW system is equipped with a CPU and memory and is on-line via the internet. By adding an inexpensive seismometer and A/D converter, this unit is transformed into a real-time seismic observatory, which we are calling a home seismometer. If the home seismometer is incorporated in the standard receiving unit of EEW, then the number of seismic observatories will be drastically increased. Since the background noise inside a house caused by human activity may be very large, we have developed specialized software for on-site warning using the home seismometer. We tested our software and found that our algorithm can correctly distinguish between noise and earthquakes for nearly all the events.
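
    The specialized discrimination algorithm itself is not described in the abstract, but a classic baseline for separating impulsive seismic arrivals from slowly varying background noise is the short-term/long-term average (STA/LTA) ratio. A minimal sketch on synthetic data follows; the window lengths and the synthetic "arrival" are illustrative assumptions, not the authors' method:

```python
import random

def sta_lta(trace, n_sta=50, n_lta=500):
    """Short-term-average / long-term-average ratio; high values flag
    impulsive arrivals against slowly varying background noise."""
    ratios = []
    for i in range(n_lta, len(trace)):
        sta = sum(abs(x) for x in trace[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in trace[i - n_lta:i]) / n_lta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Synthetic trace: unit-variance noise with an impulsive "arrival" at sample 800.
random.seed(42)
trace = [random.gauss(0.0, 1.0) for _ in range(1000)]
for i in range(800, 840):
    trace[i] += 20.0
ratios = sta_lta(trace)
peak_sample = max(range(len(ratios)), key=ratios.__getitem__) + 500
```

    Cultural noise inside a house (footsteps, doors) tends to produce short bursts that can also trip a simple trigger, which is why a home-seismometer algorithm needs additional criteria beyond a plain STA/LTA threshold.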

  4. Foreshocks, aftershocks, and earthquake probabilities: Accounting for the landers earthquake

    Science.gov (United States)

    Jones, Lucile M.

    1994-01-01

    The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes its less probable that an earthquake will be a foreshock, because nonforeshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.
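
    The effect described, aftershocks of a previous mainshock inflating the background rate and thereby depressing the foreshock probability until they decay, can be sketched with a modified Omori law for the aftershock rate. The rates and constants below are illustrative assumptions, not values from the paper:

```python
def p_foreshock(rate_fore, rate_bg, aftershock_k, t_days, p_omori=1.0, c=0.05):
    """Probability that an event near the fault is a foreshock when the
    background is inflated by aftershocks of an earlier mainshock decaying
    by the modified Omori law k / (t + c)**p."""
    rate_aft = aftershock_k / (t_days + c) ** p_omori
    return rate_fore / (rate_fore + rate_bg + rate_aft)

# Shortly after the earlier mainshock the probability is depressed;
# it recovers toward rate_fore / (rate_fore + rate_bg) as aftershocks decay.
early = p_foreshock(0.01, 0.05, aftershock_k=5.0, t_days=1.0)
late = p_foreshock(0.01, 0.05, aftershock_k=5.0, t_days=1000.0)
```

    Fault interactions that raise the long-term probability of a characteristic earthquake would enter through rate_fore, partially offsetting this depression, as the abstract notes.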

  5. Food, water, and fault lines: Remote sensing opportunities for earthquake-response management of agricultural water

    International Nuclear Information System (INIS)

    Rodriguez, Jenna; Ustin, Susan; Sandoval-Solis, Samuel; O'Geen, Anthony Toby

    2016-01-01

    Earthquakes often cause destructive and unpredictable changes that can affect local hydrology (e.g. groundwater elevation or reduction) and thus disrupt land uses and human activities. Prolific agricultural regions overlie seismically active areas, emphasizing the importance of improving our understanding and monitoring of hydrologic and agricultural systems following a seismic event. Thorough data collection is necessary for an adequate post-earthquake crop management response; however, the large spatial extent of an earthquake's impact makes the collection of robust data sets for identifying the locations and magnitude of these impacts challenging. Observing hydrologic responses to earthquakes is not a novel concept, yet there is a lack of methods and tools for assessing earthquake impacts upon regional hydrology and agricultural systems. The objective of this paper is to describe how remote sensing imagery, methods and tools allow detection of crop responses and damage incurred after earthquakes because of a change in the regional hydrology. Many remote sensing datasets are long-archived, with extensive coverage and well-documented methods to assess plant-water relations. We thus connect remote sensing of plant-water relations to its utility in agriculture using a post-earthquake agrohydrologic remote sensing (PEARS) framework, specifically in agro-hydrologic relationships associated with recent earthquake events, which will lead to improved water management. - Highlights: • Remote sensing to improve agricultural disaster management • Introduce post-earthquake agrohydrologic remote sensing (PEARS) framework • Apply PEARS framework to 2010 Maule Earthquake in Central Chile

  6. Food, water, and fault lines: Remote sensing opportunities for earthquake-response management of agricultural water

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, Jenna, E-mail: jmmartin@ucdavis.edu; Ustin, Susan; Sandoval-Solis, Samuel; O' Geen, Anthony Toby

    2016-09-15

    Earthquakes often cause destructive and unpredictable changes that can affect local hydrology (e.g. groundwater elevation or reduction) and thus disrupt land uses and human activities. Prolific agricultural regions overlie seismically active areas, emphasizing the importance of improving our understanding and monitoring of hydrologic and agricultural systems following a seismic event. Thorough data collection is necessary for an adequate post-earthquake crop management response; however, the large spatial extent of an earthquake's impact makes the collection of robust data sets for identifying the locations and magnitude of these impacts challenging. Observing hydrologic responses to earthquakes is not a novel concept, yet there is a lack of methods and tools for assessing earthquake impacts upon regional hydrology and agricultural systems. The objective of this paper is to describe how remote sensing imagery, methods and tools allow detection of crop responses and damage incurred after earthquakes because of a change in the regional hydrology. Many remote sensing datasets are long-archived, with extensive coverage and well-documented methods to assess plant-water relations. We thus connect remote sensing of plant-water relations to its utility in agriculture using a post-earthquake agrohydrologic remote sensing (PEARS) framework, specifically in agro-hydrologic relationships associated with recent earthquake events, which will lead to improved water management. - Highlights: • Remote sensing to improve agricultural disaster management • Introduce post-earthquake agrohydrologic remote sensing (PEARS) framework • Apply PEARS framework to 2010 Maule Earthquake in Central Chile.

  7. The Northern Rupture of the 1762 Arakan Megathrust Earthquake and other Potential Earthquake Sources in Bangladesh.

    Science.gov (United States)

    Akhter, S. H.; Seeber, L.; Steckler, M. S.

    2015-12-01

    Bangladesh is one of the most densely populated countries in the world. It occupies a major part of the Bengal Basin, which contains the Ganges-Brahmaputra Delta (GBD), the largest and one of the most active of the world's deltas, and is located along the Alpine-Himalayan seismic belt. As such it is vulnerable to many natural hazards, especially earthquakes. The country sits at the junction of three tectonic plates - Indian, Eurasian, and the Burma 'sliver' of the Sunda plate. These form two boundaries where plates converge - the India-Eurasia plate boundary to the north forming the Himalayan Arc and the India-Burma plate boundary to the east forming the Indo-Burma Arc. The India-Burma plate boundary is exceptionally wide because collision with the GBD feeds an exceptional amount of sediment into the subduction zone. Thus the Himalayan continental collision orogen, along with its syntaxes to the N and NE of Bangladesh, and the Burma Arc subduction boundary surround Bangladesh on two sides with active faults of regional scale, raising the potential for high-magnitude earthquakes. In recent years Bangladesh has experienced minor to moderate earthquakes. Historical records show that major and great earthquakes have ravaged the country and the neighboring region several times over the last 450 years. Field observations of Tertiary structures along the Chittagong-Teknaf coast reveal that the rupture of the 1762 Arakan megathrust earthquake extended as far north as the Sitakund anticline, north of the city of Chittagong. This earthquake brought changes to the landscape, uplifting the Teknaf peninsula and St. Martin's Island by about 2-2.5 m, and activated two mud volcanoes along the axis of the Sitakund anticline, where large tabular blocks of exotic crystalline limestone were tectonically transported from a deep-seated formation along with the eruptive mud. Vast areas of the coast, including inland areas east of the lower Meghna River, were inundated.
More than 500 people died near

  8. Report by the 'Mega-earthquakes and mega-tsunamis' subgroup

    International Nuclear Information System (INIS)

    Friedel, Jacques; Courtillot, Vincent; Dercourt, Jean; Jaupart, Claude; Le Pichon, Xavier; Poirier, Jean-Paul; Salencon, Jean; Tapponnier, Paul; Dautray, Robert; Carpentier, Alain; Taquet, Philippe; Blanchet, Rene; Le Mouel, Jean-Louis; BARD, Pierre-Yves; Bernard, Pascal; Montagner, Jean-Paul; Armijo, Rolando; Shapiro, Nikolai; Tait, Steve; Cara, Michel; Madariaga, Raul; Pecker, Alain; Schindele, Francois; Douglas, John

    2011-06-01

    This report comprises a presentation of scientific data on subduction earthquakes, on tsunamis and on the Tohoku earthquake. It proposes a detailed description of the French situation (in the West Indies, in metropolitan France, and in terms of soil response), and a discussion of social and economic issues (governance, seismic regulation and nuclear safety, para-seismic protection of constructions). The report is completed by other large documents: a presentation of data on the Japanese earthquake, a discussion of prediction and governance errors in the management of earthquake mitigation in Japan, and discussions of tsunami prevention, of the need for research on accelerometers, and of the seismic risk in France.

  9. GPS detection of ionospheric perturbation before the 13 February 2001, El Salvador earthquake

    Science.gov (United States)

    Plotkin, V. V.

    A large earthquake of M6.6 occurred on 13 February 2001 at 14:22:05 UT in El Salvador. We detected an ionospheric perturbation before this earthquake using GPS data received from the CORS network. Systematic decreases of ionospheric total electron content (TEC) during the two days before the earthquake onset were observed at a set of stations near the earthquake location, and probably over a region of about 1000 km from the epicenter. This result is consistent with those of investigators who studied these phenomena with several observational techniques. However, it is possible that such TEC changes are simultaneously accompanied by changes due to solar wind parameters and the Kp index.

  10. GPS detection of ionospheric perturbation before the 13 February 2001, El Salvador earthquake

    Directory of Open Access Journals (Sweden)

    V. V. Plotkin

    2003-01-01

    Full Text Available A large earthquake of M6.6 occurred on 13 February 2001 at 14:22:05 UT in El Salvador. We detected an ionospheric perturbation before this earthquake using GPS data received from the CORS network. Systematic decreases of ionospheric total electron content (TEC) during the two days before the earthquake onset were observed at a set of stations near the earthquake location, and probably over a region of about 1000 km from the epicenter. This result is consistent with those of investigators who studied these phenomena with several observational techniques. However, it is possible that such TEC changes are simultaneously accompanied by changes due to solar wind parameters and the Kp index.

  11. Generation of earthquake signals

    International Nuclear Information System (INIS)

    Kjell, G.

    1994-01-01

    Seismic verification can be performed either as a full-scale test on a shaker table or as numerical calculations. In both cases it is necessary to have an earthquake acceleration time history. This report describes the generation of such time histories by filtering white noise. Analogue and digital filtering methods are compared. Different methods of predicting the response spectrum of a white noise signal filtered by a band-pass filter are discussed. Prediction of both the average response level and the statistical variation around this level is considered. Examples with both the IEEE 301 standard response spectrum and a ground spectrum suggested for Swedish nuclear power stations are included in the report.
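    The core idea of the report — band-pass filtering white noise to obtain an earthquake-like acceleration time history — can be sketched as follows (filter order, pass band, duration, and envelope are illustrative choices, not the report's values):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def synth_accel(duration=20.0, fs=100.0, band=(1.0, 10.0), seed=0):
    """Synthetic acceleration history: band-pass filtered Gaussian white noise."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    noise = rng.standard_normal(n)
    # A 4th-order Butterworth band-pass shapes the flat white-noise spectrum
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    accel = sosfiltfilt(sos, noise)
    # Simple envelope (2 s build-up, 5 s decay) makes the motion transient
    t = np.arange(n) / fs
    env = np.clip(np.minimum(t / 2.0, (duration - t) / 5.0), 0.0, 1.0)
    return t, accel * env
```

    In practice the filtered signal would then be iteratively rescaled until its computed response spectrum envelopes the target spectrum (e.g. the IEEE standard spectrum mentioned above).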

  12. Earthquakes Threaten Many American Schools

    Science.gov (United States)

    Bailey, Nancy E.

    2010-01-01

    Millions of U.S. children attend schools that are not safe from earthquakes, even though they are in earthquake-prone zones. Several cities and states have worked to identify and repair unsafe buildings, but many others have done little or nothing to fix the problem. The reasons for ignoring the problem include political and financial ones, but…

  13. Make an Earthquake: Ground Shaking!

    Science.gov (United States)

    Savasci, Funda

    2011-01-01

    The main purposes of this activity are to help students explore possible factors affecting the extent of the damage of earthquakes and learn the ways to reduce earthquake damages. In these inquiry-based activities, students have opportunities to develop science process skills and to build an understanding of the relationship among science,…

  14. Ionospheric anomalies detected by ionosonde and possibly related to crustal earthquakes in Greece

    Science.gov (United States)

    Perrone, Loredana; De Santis, Angelo; Abbattista, Cristoforo; Alfonsi, Lucilla; Amoruso, Leonardo; Carbone, Marianna; Cesaroni, Claudio; Cianchini, Gianfranco; De Franceschi, Giorgiana; De Santis, Anna; Di Giovambattista, Rita; Marchetti, Dedalo; Pavòn-Carrasco, Francisco J.; Piscini, Alessandro; Spogli, Luca; Santoro, Francesca

    2018-03-01

    Ionosonde data and crustal earthquakes with magnitude M ≥ 6.0 observed in Greece during the 2003-2015 period were examined to check whether the relationships obtained earlier between precursory ionospheric anomalies and earthquakes in Japan and central Italy are also valid for Greek earthquakes. The ionospheric anomalies are identified from the observed variations of the sporadic E-layer parameters (h'Es, foEs) and foF2 at the ionospheric station of Athens. The corresponding empirical relationships between the seismo-ionospheric disturbances and the earthquake magnitude and the epicentral distance are obtained and found to be similar to those previously published for other case studies. The large lead times found for the occurrence of the ionospheric anomalies may confirm a rather long earthquake preparation period. The possibility of using the relationships obtained for earthquake prediction is finally discussed.
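    Studies of this kind typically flag an ionospheric anomaly when an observed parameter (e.g. foF2 or foEs) departs from a running reference by more than a set bound. A hedged sketch of that criterion, using a running median and inter-quartile range (the window length and threshold are illustrative, not the values used in this paper):

```python
import numpy as np

def flag_anomalies(series, window=27, k=1.5):
    """Flag points deviating from a running median by more than k times
    the running inter-quartile range (illustrative anomaly criterion)."""
    x = np.asarray(series, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        seg = x[lo:hi]
        med = np.median(seg)
        iqr = np.percentile(seg, 75) - np.percentile(seg, 25)
        flags[i] = abs(x[i] - med) > k * iqr if iqr > 0 else False
    return flags
```

    Any such flags would then be screened against geomagnetic indices to exclude anomalies of solar or magnetospheric origin before being compared with earthquake occurrence.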

  15. A way to synchronize models with seismic faults for earthquake forecasting

    DEFF Research Database (Denmark)

    González, Á.; Gómez, J.B.; Vázquez-Prada, M.

    2006-01-01

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual....... Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the synchronization of the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault...... models between themselves and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture area of the synthetic earthquakes of this model on other models, the latter become partially synchronized...

  16. Ionospheric anomalies detected by ionosonde and possibly related to crustal earthquakes in Greece

    Directory of Open Access Journals (Sweden)

    L. Perrone

    2018-03-01

    Full Text Available Ionosonde data and crustal earthquakes with magnitude M ≥ 6.0 observed in Greece during the 2003–2015 period were examined to check whether the relationships obtained earlier between precursory ionospheric anomalies and earthquakes in Japan and central Italy are also valid for Greek earthquakes. The ionospheric anomalies are identified from the observed variations of the sporadic E-layer parameters (h′Es, foEs) and foF2 at the ionospheric station of Athens. The corresponding empirical relationships between the seismo-ionospheric disturbances and the earthquake magnitude and the epicentral distance are obtained and found to be similar to those previously published for other case studies. The large lead times found for the occurrence of the ionospheric anomalies may confirm a rather long earthquake preparation period. The possibility of using the relationships obtained for earthquake prediction is finally discussed.

  17. Areas prone to slow slip events impede earthquake rupture propagation and promote afterslip

    Science.gov (United States)

    Rolandone, Frederique; Nocquet, Jean-Mathieu; Mothes, Patricia A.; Jarrin, Paul; Vallée, Martin; Cubas, Nadaya; Hernandez, Stephen; Plain, Morgan; Vaca, Sandro; Font, Yvonne

    2018-01-01

    At subduction zones, transient aseismic slip occurs either as afterslip following a large earthquake or as episodic slow slip events during the interseismic period. Afterslip and slow slip events are usually considered as distinct processes occurring on separate fault areas governed by different frictional properties. Continuous GPS (Global Positioning System) measurements following the 2016 Mw (moment magnitude) 7.8 Ecuador earthquake reveal that large and rapid afterslip developed at discrete areas of the megathrust that had previously hosted slow slip events. Regardless of whether they were locked or not before the earthquake, these areas appear to persistently release stress by aseismic slip throughout the earthquake cycle and outline the seismic rupture, an observation potentially leading to a better anticipation of future large earthquakes. PMID:29404404

  18. Introduction to the focus section on the 2015 Gorkha, Nepal, earthquake

    Science.gov (United States)

    Hough, Susan E.

    2015-01-01

    It has long been recognized that Nepal faces high earthquake hazard, with the most recent large (Mw>7.5) events in 1833 and 1934. When the 25 April 2015 Mw 7.8 Gorkha earthquake struck, it appeared initially to be a realization of worst fears. In spite of its large magnitude and proximity to the densely populated Kathmandu valley, however, the level of damage was lower than anticipated, with most vernacular structures within the valley experiencing little or no structural damage. Outside the valley, catastrophic damage did occur in some villages, associated with the high vulnerability of stone masonry construction and, in many cases, landsliding. The unexpected observations from this expected earthquake provide an urgent impetus to understand the event itself and to better characterize hazard from future large Himalayan earthquakes. Toward this end, articles in this special focus section present and describe available data sets and initial results that better illuminate and interpret the earthquake and its effects.

  19. Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas fault.

    Science.gov (United States)

    Shelly, David R

    2010-06-11

    Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between approximately 3 and approximately 6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes.

  20. Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas Fault

    Science.gov (United States)

    Shelly, David R.

    2010-01-01

    Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between ~3 and ~6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes.

  1. The HayWired Earthquake Scenario

    Science.gov (United States)

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    interconnectedness of infrastructure, society, and our economy. How would this earthquake scenario, striking close to Silicon Valley, impact our interconnected world in ways and at a scale we have not experienced in any previous domestic earthquake?The area of present-day Contra Costa, Alameda, and Santa Clara Counties contended with a magnitude-6.8 earthquake in 1868 on the Hayward Fault. Although sparsely populated then, about 30 people were killed and extensive property damage resulted. The question of what an earthquake like that would do today has been examined before and is now revisited in the HayWired scenario. Scientists have documented a series of prehistoric earthquakes on the Hayward Fault and are confident that the threat of a future earthquake, like that modeled in the HayWired scenario, is real and could happen at any time. The team assembled to build this scenario has brought innovative new approaches to examining the natural hazards, impacts, and consequences of such an event. Such an earthquake would also be accompanied by widespread liquefaction and landslides, which are treated in greater detail than ever before. The team also considers how the now-prototype ShakeAlert earthquake early warning system could provide useful public alerts and automatic actions.Scientific Investigations Report 2017–5013 and accompanying data releases are the products of an effort led by the USGS, but this body of work was created through the combined efforts of a large team including partners who have come together to form the HayWired Coalition (see chapter A). Use of the HayWired scenario has already begun. More than a full year of intensive partner engagement, beginning in April 2017, is being directed toward producing the most in-depth look ever at the impacts and consequences of a large earthquake on the Hayward Fault. 
With the HayWired scenario, our hope is to encourage and support the active ongoing engagement of the entire community of the San Francisco Bay region by

  2. Aftereffects of Subduction-Zone Earthquakes: Potential Tsunami Hazards along the Japan Sea Coast.

    Science.gov (United States)

    Minoura, Koji; Sugawara, Daisuke; Yamanoi, Tohru; Yamada, Tsutomu

    2015-10-01

    The 2011 Tohoku-Oki Earthquake is a typical subduction-zone earthquake and the 4th largest earthquake since the beginning of instrumental observation of earthquakes in the 19th century. In fact, the 2011 Tohoku-Oki Earthquake displaced the northeast Japan island arc horizontally and vertically. The displacement largely changed the tectonic situation of the arc from compressive to tensile. The 9th century in Japan was a period of natural hazards caused by frequent large-scale earthquakes. The aseismic tsunamis that inflicted damage on the Japan Sea coast in the 11th century were related to the occurrence of massive earthquakes that represented the final stage of a period of high seismic activity. Anti-compressive tectonics triggered by the subduction-zone earthquakes induced gravitational instability, which resulted in the generation of tsunamis caused by slope failure at the arc-back-arc boundary. The crustal displacement after the 2011 earthquake implies an increased risk of unexpected local tsunami flooding in the Japan Sea coastal areas.

  3. Short presentation on some researches activities about near field earthquakes

    International Nuclear Information System (INIS)

    Donald, John

    2002-01-01

    The major hazard posed by earthquakes is often thought to be due to moderate to large magnitude events. However, there have been many cases where earthquakes of moderate and even small magnitude have caused very significant destruction when they have coincided with population centres. Even though the area of intense ground shaking caused by such events is generally small, the epicentral motions can be severe enough to cause damage even in well-engineered structures. Two issues are addressed here, the first being the identification of the minimum earthquake magnitude likely to cause damage to engineered structures and the limits of the near field for small-to-moderate magnitude earthquakes. The second issue addressed is whether features of near-field ground motions such as directivity, which can significantly enhance the destructive potential, occur in small-to-moderate magnitude events. The accelerograms from the 1986 San Salvador (El Salvador) earthquake indicate that it may be non-conservative to assume that near-field directivity effects only need to be considered for earthquakes of moment magnitude M 6.5 and greater. (author)

  4. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    NARCIS (Netherlands)

    Cheong, S.A.; Tan, T.L.; Chen, C.-C.; Chang, W.-L.; Liu, Z.; Chew, L.Y.; Sloot, P.M.A.; Johnson, N.F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting

  5. Reliability Based assessment of buildings under earthquakes due to gas extraction

    NARCIS (Netherlands)

    Steenbergen, R.D.J.M.; Vrouwenvelder, A.C.W.M.

    2014-01-01

    In the northern part of the Netherlands, shallow earthquakes have been induced over the last decades by large-scale gas extraction from the Groningen gas field. Earthquakes occur due to the compaction of the reservoir rock, which leads to subsidence at the surface and strain build-up in the reservoir rock

  6. Posttraumatic Stress Disorder Symptom Structure in Chinese Adolescents Exposed to a Deadly Earthquake

    Science.gov (United States)

    Wang, Li; Long, Di; Li, Zhongquan; Armour, Cherie

    2011-01-01

    This present study examined the structure of posttraumatic stress disorder (PTSD) symptoms in a large sample of Chinese adolescents exposed to a deadly earthquake. A total of 2,800 middle school students aged 12 to 18 years participated in the study 6 months after the "Wenchuan Earthquake". Results of confirmatory factor analysis…

  7. Memory effect in M ≥ 7 earthquakes of Taiwan

    Science.gov (United States)

    Wang, Jeen-Hwa

    2014-07-01

    The M ≥ 7 earthquakes that occurred in the Taiwan region during 1906-2006 are taken to study the possibility of a memory effect existing in the sequence of those large earthquakes. Those events are all mainshocks. The fluctuation analysis technique is applied to analyze two sequences in terms of earthquake magnitude and inter-event time represented in the natural time domain. For both magnitude and inter-event time, the calculations are made for three data sets, i.e., the original-order data, the reverse-order data, and the mean values. Calculated results show that the exponents of the scaling law of fluctuation versus window length are less than 0.5 for the sequences of both magnitude and inter-event time data. In addition, the phase portraits of two sequent magnitudes and two sequent inter-event times are also applied to explore whether large (or small) earthquakes are followed by large (or small) events. Results lead to a negative answer. Together with all types of information in this study, we conclude that the earthquake sequence under study is short-term correlated and thus a short-term memory effect would be operative.
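    The fluctuation analysis technique referred to above estimates how the fluctuation function of a series scales with window length; an exponent below 0.5 indicates anti-persistence (short-term anti-correlation). A minimal sketch of the idea, not the author's exact implementation:

```python
import numpy as np

def fluctuation_exponent(x, windows=(4, 8, 16, 32, 64)):
    """Scaling exponent H of F(n) ~ n^H for a series x.
    H < 0.5: anti-persistent; H = 0.5: uncorrelated; H > 0.5: persistent."""
    # Profile: cumulative sum of the demeaned series
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))
    # RMS fluctuation of profile increments over each window length n
    F = [np.sqrt(np.mean((y[n:] - y[:-n]) ** 2)) for n in windows]
    # Slope of log F(n) versus log n estimates the exponent
    return np.polyfit(np.log(windows), np.log(F), 1)[0]
```

    Applied to an uncorrelated series this yields H near 0.5, so exponents below 0.5, as reported for the Taiwan magnitude and inter-event-time sequences, point to short-term anti-correlation.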

  8. Goce derived geoid changes before the Pisagua 2014 earthquake

    Directory of Open Access Journals (Sweden)

    Orlando Álvarez

    2018-01-01

    Full Text Available The analysis of space-time surface deformation during earthquakes reveals the variable state of stress that occurs at deep crustal levels, and this information can be used to better understand the seismic cycle. Understanding the possible mechanisms that produce earthquake precursors is a key issue for earthquake prediction. In recent years, modern geodesy has been able to map the degree of seismic coupling during the interseismic period, as well as the coseismic and postseismic slip for great earthquakes along subduction zones. Earthquakes usually occur due to mass transfer and consequent gravity variations, and these changes have been monitored for intraplate earthquakes by means of terrestrial gravity measurements. When stresses and the corresponding rupture areas are large, affecting hundreds of thousands of square kilometres (as occurs in some segments along plate interface zones), satellite gravimetry data become relevant. This is due to the higher spatial resolution of this type of data when compared to terrestrial data, and also due to their homogeneous precision and availability across the whole Earth. Satellite gravity missions such as GOCE can map the Earth's gravity field with unprecedented precision and resolution. We mapped geoid changes from two GOCE satellite models obtained by the direct approach, which combines data from other gravity missions such as GRACE and LAGEOS, taking advantage of the best characteristics of each. The results show that the geoid height diminished from a year to five months before the main seismic event in the region where maximum slip occurred after the Pisagua Mw = 8.2 great megathrust earthquake. This diminution is interpreted as accelerated inland-directed interseismic mass transfer before the earthquake, coinciding with the intermediate degree of seismic coupling reported in the region. We highlight the advantage of satellite data for modelling surficial deformation related to pre-seismic displacements. This deformation, combined to

  9. 'Unknown' earthquakes: a growing contribution to the Catalogue of Strong Italian Earthquakes

    Directory of Open Access Journals (Sweden)

    E. Guidoboni

    2000-06-01

    Full Text Available The particular structure of the research into historical seismology found in this catalogue has allowed a lot of information about unknown seismic events to be traced. This new contribution to seismological knowledge mainly consists in: (i) the retrieval and organisation within a coherent framework of documentary evidence of earthquakes that took place between the Middle Ages and the sixteenth century; (ii) the improved knowledge of seismic events, even destructive events, which in the past had been "obscured" by large earthquakes; (iii) the identification of earthquakes in "silent" seismic areas. The complex elements to be taken into account when dealing with unknown seismic events have been outlined; much "new" information often falls into one of the following categories: simple chronological errors relative to other well-known events; descriptions of other natural phenomena, though defined in texts as "earthquakes" (landslides, hurricanes, tornadoes, etc.); unknown tremors belonging to known seismic periods; tremors that may be connected with events which have been catalogued under incorrect dates and with very approximate estimates of location and intensity. This proves that this was not a real seismic "silence" but a research vacuum.

  10. Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide

    Science.gov (United States)

    Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.

    2017-12-01

    GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare peak ground acceleration (PGA) predicted from first-alert solutions with that recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.
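    Geodetic EEW algorithms of this kind exploit the fact that peak ground displacement (PGD) from GNSS does not saturate with magnitude, commonly via scaling laws of the form log10(PGD) = A + B·Mw + C·Mw·log10(R). The sketch below inverts such a law for Mw; the coefficient values are placeholders for illustration only, not those of any published regression or of G-larmS itself:

```python
import math

# Placeholder coefficients: real PGD scaling laws take this form,
# but the values are fitted per study and differ from these.
A, B, C = -4.4, 1.0, -0.14

def magnitude_from_pgd(pgd_cm, dist_km):
    """Invert log10(PGD) = A + B*Mw + C*Mw*log10(R) for Mw
    (PGD in cm, hypocentral distance R in km)."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(dist_km))
```

    In operation, PGD observed across a station network would be inverted jointly, with the distance-dependent term preventing the magnitude saturation that affects purely seismic estimates for the largest events.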

  11. The CATDAT damaging earthquakes database

    Directory of Open Access Journals (Sweden)

    J. E. Daniell

    2011-08-01

    Full Text Available The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes.

    Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon.

    Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured).

    Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto ($214 billion USD damage; 2011 HNDECI-adjusted dollars), compared to the 2011 Tohoku (>$300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes, shows the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons.

    This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  12. The CATDAT damaging earthquakes database

    Science.gov (United States)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto (214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  13. [Courses in basic research methodology a valuable asset for clinicians. Twelve years' experiences in southern Sweden].

    Science.gov (United States)

    Håkansson, Anders; Lindberg, Eva Pettersson; Henriksson, Karin

    2002-03-07

At the Department of Community Medicine at Lund University we have given courses in basic research methodology since 1989. The course has yielded 20 points of university credit, the equivalent of one full-time semester of studies, and it has been run part-time over one and a half years. Our aim has been to provide a large number of physicians with basic training in research methods, and to stimulate the recruitment of new scientific students from the whole Southern Health Care Region. During the first ten years, 138 general practitioners (20% of the GPs of the region) and 202 specialists completed our courses. Up to now, 19 GPs (14%) and 19 specialists (9%) have begun PhD studies. During the last two years, another 100 physicians from southern Sweden have attended our courses, as well as GPs from Zealand in Denmark. We have been developing our course in basic research methods over a twelve-year period, and it is now well established in our health care region. We feel that we have succeeded in reaching the two goals we set: to give a large number of physicians a fundamental knowledge of research methods, and to recruit and increase the number of PhD students. We believe that medical research and development must flourish outside the traditional university settings as well.

  14. Seismic swarm associated with the 2008 eruption of Kasatochi Volcano, Alaska: earthquake locations and source parameters

    Science.gov (United States)

    Ruppert, Natalia G.; Prejean, Stephanie G.; Hansen, Roger A.

    2011-01-01

An energetic seismic swarm accompanied the eruption of Kasatochi Volcano in the central Aleutian volcanic arc in August 2008. In retrospect, the first earthquakes in the swarm were detected about one month prior to the eruption onset. Activity in the swarm intensified quickly less than 48 h before the first large explosion and subsequently subsided with the decline of eruptive activity. The largest earthquake measured moment magnitude 5.8, and a dozen additional earthquakes were larger than magnitude 4. The swarm exhibited both tectonic and volcanic characteristics. Its shear-failure (tectonic) earthquake features were a b value of 0.9, impulsive P and S arrivals and higher-frequency content for most earthquakes, and earthquake faulting parameters consistent with regional tectonic stresses. Its volcanic or fluid-influenced seismicity features were volcanic tremor, large CLVD components in moment tensor solutions, and magnitudes increasing with time. Earthquake location tests suggest that the earthquakes occurred in a distributed volume elongated in the N-S direction, either directly under the volcano or within 5-10 km south of it. Following the MW 5.8 event, earthquakes occurred in a new crustal volume slightly east and north of the previous earthquakes. The central Aleutian Arc is a tectonically active region, with seismicity occurring in the crusts of the Pacific and North American plates in addition to interplate events. We postulate that the Kasatochi seismic swarm was a manifestation of the complex interaction of tectonic and magmatic processes in the Earth's crust: although magmatic intrusion triggered the earthquakes in the swarm, the earthquakes failed in the context of the regional stress field.
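The b value quoted for the swarm (0.9) is the slope of the Gutenberg-Richter frequency-magnitude relation. As an illustration only (not the authors' code), it can be estimated with Aki's maximum-likelihood formula; the catalog below is synthetic:

```python
import math

def b_value_aki(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b value for magnitudes >= completeness mc.

    b = log10(e) / (mean(M) - (mc - dm/2)), where dm is the magnitude
    binning width and the mc - dm/2 term corrects for binning.
    """
    above = [m for m in mags if m >= mc]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Small synthetic catalog for illustration only
mags = [1.0, 1.1, 1.2, 1.3, 1.5, 1.8, 2.0, 2.4, 3.0]
b = b_value_aki(mags, mc=1.0)
```

With this toy catalog the estimator returns b of about 0.58; a real application would first estimate the completeness magnitude mc from the catalog itself.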

  15. Source discrimination between Mining blasts and Earthquakes in Tianshan orogenic belt, NW China

    Science.gov (United States)

    Tang, L.; Zhang, M.; Wen, L.

    2017-12-01

In recent years, a large number of quarry blasts have been detonated in the Tianshan Mountains of China. It is necessary to discriminate these non-earthquake records from the earthquake catalogs in order to determine the real seismicity of the region. In this study, we investigated spectral ratios and amplitude ratios as discriminants for regional seismic-event identification, using explosions and earthquakes recorded by the Xinjiang Seismic Network (XJSN) of China. We used a training data set of 1071 earthquakes and 2881 non-earthquakes recorded by the XJSN between 2009 and 2016, with both types of events in a comparable local magnitude range (1.5 to 2.9). The non-earthquake and earthquake groups were well separated by Pg/Sg amplitude ratios, with the separation increasing with frequency when averaged over three stations. The 8- to 15-Hz Pg/Sg ratio proved to be the most precise and accurate discriminant, working for more than 90% of the events. In contrast, the P spectral ratio performed considerably worse, with a significant overlap (about 60%) between the earthquake and explosion populations. These results show that amplitude ratios between compressional and shear waves discriminate better than low-frequency to high-frequency spectral ratios for individual phases. Neither discriminant alone was able to completely separate the two populations of events; however, a joint discrimination scheme employing simple majority voting reduces misclassifications to 10%. In the study region, 44% of the examined seismic events were determined to be non-earthquakes and 55% to be earthquakes. The earthquakes occurring on land are related to small faults, while the blasts are concentrated in large quarries.
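The amplitude-ratio discriminant with majority voting across stations can be sketched as follows; the decision threshold and the per-station ratio values are hypothetical placeholders, since the abstract does not give the paper's actual numbers:

```python
def classify_pg_sg(ratio_db, threshold_db=5.0):
    """Classify one station's high-frequency Pg/Sg amplitude ratio (in dB).

    Explosions tend to carry relatively stronger P energy, so a large
    Pg/Sg ratio votes "blast". The 5 dB threshold is illustrative only.
    """
    return "blast" if ratio_db > threshold_db else "earthquake"

def majority_vote(ratios_db):
    """Combine per-station decisions by simple majority voting."""
    votes = [classify_pg_sg(r) for r in ratios_db]
    return max(set(votes), key=votes.count)

# Hypothetical 8-15 Hz Pg/Sg ratios of one event at three stations
event_ratios = [6.2, 7.1, 4.8]
label = majority_vote(event_ratios)
```

Here two of three stations exceed the threshold, so the joint decision is "blast" even though one station disagrees, which is exactly the robustness majority voting buys.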

  16. Creating a Global Building Inventory for Earthquake Loss Assessment and Risk Management

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.

    2008-01-01

Earthquakes have claimed approximately 8 million lives over the last 2,000 years (Dunbar, Lockridge and others, 1992), and fatality rates are likely to continue to rise with increased population and urbanization of global settlements, especially in developing countries. More than 75% of earthquake-related human casualties are caused by the collapse of buildings or structures (Coburn and Spence, 2002). It is disheartening to note that large fractions of the world's population still reside in informal, poorly constructed, and non-engineered dwellings that are highly susceptible to collapse during earthquakes. Moreover, with increasing urbanization, half of the world's population now lives in urban areas (United Nations, 2001), and half of these urban centers are located in earthquake-prone regions (Bilham, 2004). The poor performance of most building stocks during earthquakes remains a primary societal concern. However, despite this dark history and bleaker future trends, there are no comprehensive global building inventories of sufficient quality and coverage to adequately address and characterize future earthquake losses. Such an inventory is vital both for earthquake loss mitigation and for earthquake disaster response purposes. While the latter purpose is the motivation of this work, we hope that the global building inventory database described herein will find widespread use in other mitigation efforts as well. For a real-time earthquake impact alert system, such as the U.S. Geological Survey's (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) (Wald, Earle and others, 2006), we seek to rapidly evaluate potential casualties associated with earthquake ground shaking for any region of the world. The casualty estimation is based primarily on (1) rapid estimation of the ground shaking hazard, (2) aggregating the population exposure within different building types, and (3) estimating the casualties from the collapse of vulnerable buildings. Thus, the

  17. 1964 Great Alaska Earthquake: a photographic tour of Anchorage, Alaska

    Science.gov (United States)

    Thoms, Evan E.; Haeussler, Peter J.; Anderson, Rebecca D.; McGimsey, Robert G.

    2014-01-01

    On March 27, 1964, at 5:36 p.m., a magnitude 9.2 earthquake, the largest recorded earthquake in U.S. history, struck southcentral Alaska (fig. 1). The Great Alaska Earthquake (also known as the Good Friday Earthquake) occurred at a pivotal time in the history of earth science, and helped lead to the acceptance of plate tectonic theory (Cox, 1973; Brocher and others, 2014). All large subduction zone earthquakes are understood through insights learned from the 1964 event, and observations and interpretations of the earthquake have influenced the design of infrastructure and seismic monitoring systems now in place. The earthquake caused extensive damage across the State, and triggered local tsunamis that devastated the Alaskan towns of Whittier, Valdez, and Seward. In Anchorage, the main cause of damage was ground shaking, which lasted approximately 4.5 minutes. Many buildings could not withstand this motion and were damaged or collapsed even though their foundations remained intact. More significantly, ground shaking triggered a number of landslides along coastal and drainage valley bluffs underlain by the Bootlegger Cove Formation, a composite of facies containing variably mixed gravel, sand, silt, and clay which were deposited over much of upper Cook Inlet during the Late Pleistocene (Ulery and others, 1983). Cyclic (or strain) softening of the more sensitive clay facies caused overlying blocks of soil to slide sideways along surfaces dipping by only a few degrees. This guide is the document version of an interactive web map that was created as part of the commemoration events for the 50th anniversary of the 1964 Great Alaska Earthquake. It is accessible at the U.S. Geological Survey (USGS) Alaska Science Center website: http://alaska.usgs.gov/announcements/news/1964Earthquake/. 
The website features a map display with suggested tour stops in Anchorage, historical photographs taken shortly after the earthquake, repeat photography of selected sites, scanned documents

  18. Twitter Seismology: Earthquake Monitoring and Response in a Social World

    Science.gov (United States)

    Bowden, D. C.; Earle, P. S.; Guy, M.; Smoczyk, G.

    2011-12-01

The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. The potential uses of Twitter for earthquake response include broadcasting earthquake alerts, rapidly detecting widely felt events, qualitatively assessing earthquake damage effects, communicating with the public, and participating in post-event collaboration. Several seismic networks and agencies are currently distributing Twitter earthquake alerts, including the European-Mediterranean Seismological Centre (@LastQuake), Natural Resources Canada (@CANADAquakes), and the Indonesian meteorological agency (@infogempabmg); the USGS will soon distribute alerts via the @USGSted and @USGSbigquakes Twitter accounts. Beyond broadcasting alerts, the USGS is investigating how to use tweets that originate near the epicenter to detect and characterize shaking events. This is possible because people begin tweeting immediately after feeling an earthquake, and their short narratives and exclamations are available for analysis within tens of seconds of the origin time. Using five months of tweets that contain the word "earthquake" and its equivalent in other languages, we generate a tweet-frequency time series. The time series clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a simple Short-Term-Average / Long-Term-Average algorithm similar to that commonly used to detect seismic phases. As with most auto-detection algorithms, the parameters can be tuned to catch more or fewer events at the cost of more or fewer false triggers. When tuned to a moderate sensitivity, the detector found 48 globally distributed, confirmed seismic events with only 2 false triggers. A space-shuttle landing and "The Great California ShakeOut" caused the false triggers. This number of
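The Short-Term-Average / Long-Term-Average detector described above can be sketched on a tweet-frequency time series; the window lengths and trigger threshold below are illustrative, not the USGS's tuned values:

```python
def sta_lta_triggers(counts, sta_len=3, lta_len=30, threshold=5.0):
    """Flag sample indices where the short-term average tweet rate
    exceeds `threshold` times the long-term average (classic STA/LTA).

    `counts` is tweets per time bin; window lengths are in bins.
    """
    triggers = []
    for i in range(lta_len, len(counts)):
        sta = sum(counts[i - sta_len:i]) / sta_len
        lta = sum(counts[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers

# Flat background of 2 tweets/bin with a burst starting at bin 40
series = [2] * 40 + [40, 60, 50] + [2] * 10
events = sta_lta_triggers(series)
```

On this synthetic series the ratio crosses the threshold one bin after the burst begins and stays high for a few bins, so the trigger indices cluster right at the onset; raising `threshold` trades missed small bursts for fewer false triggers, which is the tuning trade-off the abstract describes.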

19. Taiwan Earthquake Damage Index

    Science.gov (United States)

    Ng, S.

    2012-12-01

Taking advantage of a previous study and twelve years of free-field strong-motion data in Taiwan, a preliminary five-level earthquake damage index is newly proposed: I-No (no damage), II-Very Light, III-Light, IV-Moderate, and V-Heavy. For index I, PGA and PGV are, respectively, 450 gal and >75 cm/s. Ten damaging seismic events in the past twelve years are redefined using this new earthquake damage index, with the devastating Chi-Chi earthquake and one non-damaging event as reference earthquakes. This newly proposed index depicts the seismic hazard of these earthquakes with higher accuracy than the existing intensity scale in the Taiwan region. For further analysis, Japanese earthquakes are also plotted as references.
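A sketch of how a two-parameter (PGA/PGV) five-level index might be applied: only the top-level values (450 gal, 75 cm/s) appear in the abstract, so every interior bin edge below is a hypothetical placeholder, not the paper's calibration:

```python
def damage_index(pga_gal, pgv_cm_s,
                 pga_bins=(25, 80, 250, 450),   # interior edges hypothetical
                 pgv_bins=(2, 8, 30, 75)):      # interior edges hypothetical
    """Map peak ground acceleration (gal) and velocity (cm/s) onto a
    five-level index, 1 = I (no damage) .. 5 = V (heavy).
    """
    def level(value, bins):
        for i, edge in enumerate(bins):
            if value < edge:
                return i + 1
        return len(bins) + 1
    # take the more severe of the two single-parameter levels
    return max(level(pga_gal, pga_bins), level(pgv_cm_s, pgv_bins))

idx = damage_index(500, 80)   # exceeds both top thresholds
```

Taking the maximum of the two single-parameter levels is one plausible combination rule; a calibrated index would fit both the bin edges and the combination rule to observed damage.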

  20. Assessment of earthquake-induced landslides hazard in El Salvador after the 2001 earthquakes using macroseismic analysis

    Science.gov (United States)

    Esposito, Eliana; Violante, Crescenzo; Giunta, Giuseppe; Ángel Hernández, Miguel

    2016-04-01

Two strong earthquakes and a number of smaller aftershocks struck El Salvador in 2001. The January 13, 2001 earthquake, Mw 7.7, occurred along the Cocos plate, 40 km off El Salvador's southern coast. It resulted in about 1300 deaths and widespread damage, mainly due to massive landsliding. Two of the largest earthquake-induced landslides, Las Barioleras and Las Colinas (about 2x10^5 m^3), produced major damage to buildings and infrastructure and 500 fatalities; a neighborhood in Santa Tecla, west of San Salvador, was destroyed. The February 13, 2001 earthquake, Mw 6.5, occurred 40 km east-southeast of San Salvador. This earthquake caused over 300 fatalities and triggered several landslides over an area of 2,500 km^2, mostly in poorly consolidated volcaniclastic deposits. The La Leona landslide (5-7x10^5 m^3) caused 12 fatalities and extensive damage to the Panamerican Highway. Two very large landslides of 1.5 km^3 and 12 km^3 produced hazardous barrier lakes at Rio El Desague and Rio Jiboa, respectively. More than 16,000 landslides occurred throughout the country after both quakes; most of them occurred in pyroclastic deposits, with volumes of less than 1x10^3 m^3. The present work aims to define the relationship between the earthquake intensities described above and the size and areal distribution of induced landslides, as well as to refine the earthquake intensity in sparsely populated zones by using landslide effects. Landslides triggered by the 2001 seismic sequences provide useful indications for a realistic seismic hazard assessment, and a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides.

  1. Questionnaire investigation for the earthquake in Honjo city and Yuzawa city, Akita Prefecture; Jishin ni kansuru ishiki chosa (Akitaken Honjoshi oyobi Yuzawashi ni okeru anketo kara)

    Energy Technology Data Exchange (ETDEWEB)

    Nogoshi, M [Akita University, Akita (Japan). College of Education; Kabutoya, S

    1996-05-01

Consciousness of earthquakes was investigated by questionnaire surveys conducted in Honjo City and Yuzawa City, Akita Prefecture, in October 1995, twelve years after the Nihonkai Chubu Earthquake (M=7.7) of May 1983. The survey covered 27 items including personality, memory, knowledge/interest, psychology/action, mental attitude/preparations, and wishes for researchers and administration. Also included were items on the Great Hanshin-Awaji Earthquake and on earthquake blank areas. The numbers of distributed questionnaires and their recovery rates are 1500 and 79.2% in Honjo City, and 1700 and 84.7% in Yuzawa City. The survey found that people have considerable knowledge of and high interest in earthquakes and remember the 1983 event well, and that many people know of tsunami, liquefaction phenomena, and the earthquake blank area. Further, they are afraid of earthquakes and think about the actions they would take in case of an earthquake; however, most people are little prepared for earthquakes. Important future subjects were suggested for the study of disaster prevention measures. 13 figs.

  2. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-22

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for

  3. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby; Mai, Paul Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran Kumar

    2015-01-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
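The MDS embedding step can be illustrated with classical (Torgerson) MDS applied to a toy set of slip models under a simple normalized difference metric; this is a sketch of the general technique, not the authors' implementation, and the metric is a stand-in for the paper's normalized squared and grey-scale metrics:

```python
import numpy as np

def classical_mds(D, ndim=2):
    """Classical (Torgerson) MDS: embed points from a pairwise distance
    matrix D by double-centering D**2 and eigendecomposing."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of the embedding
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    order = np.argsort(vals)[::-1][:ndim]        # keep top ndim
    L = np.sqrt(np.maximum(vals[order], 0.0))
    return vecs[:, order] * L

# Three toy "rupture models" as 2x2 slip grids: two similar, one very different
models = [np.array([[1.0, 0.0], [0.0, 0.0]]),
          np.array([[1.0, 0.1], [0.0, 0.0]]),
          np.array([[0.0, 0.0], [0.0, 1.0]])]

def norm_dist(a, b):
    """Normalized difference between two slip models (illustrative metric)."""
    return np.linalg.norm(a - b) / max(np.linalg.norm(a), np.linalg.norm(b))

n = len(models)
D = np.array([[norm_dist(models[i], models[j]) for j in range(n)]
              for i in range(n)])
coords = classical_mds(D, ndim=2)
```

The spread of `coords` is the "MDS point-cloud" the abstract refers to: the two similar models land close together and the dissimilar one far away, so clusters and outliers among inverted models become visible at a glance.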

  4. Earthquake Emergency Education in Dushanbe, Tajikistan

    Science.gov (United States)

    Mohadjer, Solmaz; Bendick, Rebecca; Halvorson, Sarah J.; Saydullaev, Umed; Hojiboev, Orifjon; Stickler, Christine; Adam, Zachary R.

    2010-01-01

    We developed a middle school earthquake science and hazards curriculum to promote earthquake awareness to students in the Central Asian country of Tajikistan. These materials include pre- and post-assessment activities, six science activities describing physical processes related to earthquakes, five activities on earthquake hazards and mitigation…

  5. Determination of Design Basis Earthquake ground motion

    International Nuclear Information System (INIS)

    Kato, Muneaki

    1997-01-01

This paper describes the principle of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, earthquake response spectra, and simulated seismic waves. In the appendix, furthermore, the seismic safety review for nuclear power plants designed before publication of the Examination Guide is summarized with the Check Basis Earthquake. (J.P.N.)

  6. Determination of Design Basis Earthquake ground motion

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Muneaki [Japan Atomic Power Co., Tokyo (Japan)

    1997-03-01

This paper describes the principle of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, earthquake response spectra, and simulated seismic waves. In the appendix, furthermore, the seismic safety review for nuclear power plants designed before publication of the Examination Guide is summarized with the Check Basis Earthquake. (J.P.N.)

  7. Real-Time Earthquake Monitoring with Spatio-Temporal Fields

    Science.gov (United States)

    Whittier, J. C.; Nittel, S.; Subasinghe, I.

    2017-10-01

With live streaming sensors and sensor networks, increasingly large numbers of individual sensors are deployed in physical space. Sensor data streams are a fundamentally novel mechanism for delivering observations to information systems. They enable us to represent spatio-temporal continuous phenomena such as radiation accidents, toxic plumes, or earthquakes almost as instantaneously as they happen in the real world. Sensor data streams discretely sample an earthquake, while the earthquake is continuous over space and time. Programmers attempting to integrate many streams to analyze earthquake activity and scope need to write tedious application code to integrate potentially very large sets of asynchronously sampled, concurrent streams. In previous work, we proposed the field stream data model (Liang et al., 2016) for data stream engines. Abstracting the stream of an individual sensor as a temporal field, the field represents the Earth's movement at the sensor position as continuous. This simplifies analysis across many sensors significantly. In this paper, we undertake a feasibility study of using the field stream model and the open-source data stream engine (DSE) Apache Spark (Apache Spark, 2017) to implement real-time earthquake event detection with a subset of the 250 GPS sensor data streams of the Southern California Integrated GPS Network (SCIGN). The field-based real-time stream queries compute maximum displacement values over the latest query window of each stream and relate spatially neighboring streams to identify earthquake events and their extent. Further, we correlated the detected events with a USGS earthquake event feed. The query results are visualized in real-time.
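The window-maximum displacement query and the multi-station event test described above might be sketched as follows; the window length, displacement threshold, and station IDs are illustrative assumptions, not SCIGN operating parameters:

```python
def window_max_displacement(stream, window):
    """Maximum absolute displacement over the latest `window` samples
    of one GPS stream (the per-sensor, field-style query)."""
    return max(abs(x) for x in stream[-window:])

def detect_event(streams, window=10, threshold=0.05, min_stations=2):
    """Declare an event when at least `min_stations` streams exceed
    `threshold` metres of displacement in the same window, a crude
    stand-in for relating spatially neighboring streams.
    """
    exceed = [sid for sid, s in streams.items()
              if window_max_displacement(s, window) > threshold]
    return exceed if len(exceed) >= min_stations else []

streams = {
    "P001": [0.001] * 20 + [0.08, 0.12, 0.09],   # co-seismic offset
    "P002": [0.002] * 20 + [0.06, 0.07, 0.05],   # co-seismic offset
    "P003": [0.001] * 23,                        # quiet station
}
stations = detect_event(streams)
```

Requiring several neighboring stations to trigger together is what separates a real, spatially coherent event from single-receiver noise; a DSE such as Spark evaluates the same per-window logic continuously over the live streams.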

  8. Earthquakes and depleted gas reservoirs: which comes first?

    Science.gov (United States)

    Mucciarelli, M.; Donda, F.; Valensise, G.

    2015-10-01

    While scientists are paying increasing attention to the seismicity potentially induced by hydrocarbon exploitation, so far, little is known about the reverse problem, i.e. the impact of active faulting and earthquakes on hydrocarbon reservoirs. The 20 and 29 May 2012 earthquakes in Emilia, northern Italy (Mw 6.1 and 6.0), raised concerns among the public for being possibly human-induced, but also shed light on the possible use of gas wells as a marker of the seismogenic potential of an active fold and thrust belt. We compared the location, depth and production history of 455 gas wells drilled along the Ferrara-Romagna arc, a large hydrocarbon reserve in the southeastern Po Plain (northern Italy), with the location of the inferred surface projection of the causative faults of the 2012 Emilia earthquakes and of two pre-instrumental damaging earthquakes. We found that these earthquake sources fall within a cluster of sterile wells, surrounded by productive wells at a few kilometres' distance. Since the geology of the productive and sterile areas is quite similar, we suggest that past earthquakes caused the loss of all natural gas from the potential reservoirs lying above their causative faults. To validate our hypothesis we performed two different statistical tests (binomial and Monte Carlo) on the relative distribution of productive and sterile wells, with respect to seismogenic faults. Our findings have important practical implications: (1) they may allow major seismogenic sources to be singled out within large active thrust systems; (2) they suggest that reservoirs hosted in smaller anticlines are more likely to be intact; and (3) they also suggest that in order to minimize the hazard of triggering significant earthquakes, all new gas storage facilities should use exploited reservoirs rather than sterile hydrocarbon traps or aquifers.
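The Monte Carlo test on the relative distribution of productive and sterile wells can be illustrated with a simple permutation scheme; the well counts below are synthetic, invented for illustration only:

```python
import random

def monte_carlo_p(near_fault_flags, sterile_flags, n_trials=20000, seed=1):
    """One-sided Monte Carlo test: is the observed count of sterile
    wells among near-fault wells higher than expected if sterile
    labels were scattered at random across all wells?
    """
    observed = sum(s for nf, s in zip(near_fault_flags, sterile_flags) if nf)
    n_near = sum(near_fault_flags)
    labels = list(sterile_flags)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        rng.shuffle(labels)                      # random reassignment of labels
        if sum(labels[:n_near]) >= observed:     # as extreme as observed?
            hits += 1
    return hits / n_trials

# Synthetic data: 8 of 10 near-fault wells sterile vs 10 of 40 elsewhere
near = [True] * 10 + [False] * 40
sterile = [1] * 8 + [0] * 2 + [1] * 10 + [0] * 30
p = monte_carlo_p(near, sterile)
```

A small p-value means the clustering of sterile wells above the seismogenic faults is unlikely under random placement, which is the statistical backbone of the abstract's argument that past earthquakes drained those reservoirs.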

  9. On operator diagnosis aid in severe earthquakes

    International Nuclear Information System (INIS)

    Lee, S.H.; Okrent, D.

    1988-01-01

During a severe earthquake, any component, system, or structure may fail; the plant may be driven into a very complex situation in which instrumentation and control systems may also fail and provide operators with unreliable information about the process parameters crucial to plant safety. What can operators do when faced with such complexity? Even though the likelihood of such a severe earthquake may be very low, its consequences may be serious if mitigative measures are not thought out and implemented in advance. The objective of the present study relates to measures to protect the plant from severe damage due to large earthquakes, namely, the improvement of operator capability to respond to seismic damage through the use of Emergency Procedure Guidelines (EPGs). The fact that the symptoms presented to operators may be unreliable during severe earthquakes endangers the validity of the actions in the EPGs. The purpose of this study is to design a tool with which the weaknesses of EPGs may be identified in advance and, if possible, lessons drawn from the results may be used to improve the EPGs so that they accommodate this complexity as far as possible. In other words, the present study intends to provide a tool that can simulate available signals, including false ones, so that EPGs may be examined and operator actions studied. It is hoped to develop knowledge needed to complement that currently available. The final product of this study shall be a program that provides users with the rationale for how it reaches its conclusions, so that users may improve their knowledge, and whose knowledge base may be updated via the user interface.

  10. An information infrastructure for earthquake science

    Science.gov (United States)

    Jordan, T. H.; Scec/Itr Collaboration

    2003-04-01

The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, IRIS, and the USGS, has received a large five-year grant from the NSF's ITR Program and its Geosciences Directorate to build a new information infrastructure for earthquake science. In many respects, the SCEC/ITR Project presents a microcosm of the IT efforts now being organized across the geoscience community, including the EarthScope initiative. The purpose of this presentation is to discuss the experience gained by the project thus far and to lay out the challenges that lie ahead; our hope is to encourage cross-discipline collaboration in future IT advancements. Project goals have been formulated in terms of four "computational pathways" related to seismic hazard analysis (SHA). For example, Pathway 1 involves the construction of an open-source, object-oriented, and web-enabled framework for SHA computations that can incorporate a variety of earthquake forecast models, intensity-measure relationships, and site-response models, while Pathway 2 aims to utilize the predictive power of wavefield simulation in modeling time-dependent ground motion for scenario earthquakes and constructing intensity-measure relationships. The overall goal is to create a SCEC "community modeling environment" or collaboratory that will comprise the curated (on-line, documented, maintained) resources needed by researchers to develop and use these four computational pathways. Current activities include (1) the development and verification of the computational modules, (2) the standardization of data structures and interfaces needed for syntactic interoperability, (3) the development of knowledge representation and management tools, (4) the construction of SCEC computational and data grid testbeds, and (5) the creation of user interfaces for knowledge-acquisition, code execution, and visualization. 
I will emphasize the increasing role of standardized

  11. Comparison of earthquake source parameters and interseismic plate coupling variations in global subduction zones (Invited)

    Science.gov (United States)

    Bilek, S. L.; Moyer, P. A.; Stankova-Pursley, J.

    2010-12-01

Geodetically determined interseismic coupling variations have been found in subduction zones worldwide. These coupling variations have been linked to heterogeneities in interplate fault frictional conditions. These connections to fault friction imply that observed coupling variations are also important in influencing details of earthquake rupture behavior. Because of the wealth of newly available geodetic models along many subduction zones, it is now possible to examine detailed variations in coupling and compare them to seismicity characteristics. Here we use a large catalog of earthquake source time functions and slip models for moderate to large magnitude earthquakes to explore these connections, comparing earthquake source parameters with available models of geodetic coupling along segments of the Japan, Kurile, Kamchatka, Peru, Chile, and Alaska subduction zones. In addition, we use published geodetic results along the Costa Rica margin to compare with source parameters of small magnitude earthquakes recorded with an onshore-offshore network of seismometers. For the moderate to large magnitude earthquakes, preliminary results suggest a complex relationship between earthquake parameters and estimates of strongly and weakly coupled segments of the plate interface. For example, along the Kamchatka subduction zone, these earthquakes occur primarily along the transition between strong and weak coupling, with significant heterogeneity in the pattern of moment-scaled duration with respect to the coupling estimates. The longest scaled-duration event in this catalog occurred in a region of strong coupling. Earthquakes along the transition between strongly and weakly coupled segments exhibited the most complexity in their source time functions. Use of small magnitude (0.5 earthquake spectra, with higher corner frequencies and higher mean apparent stress for earthquakes that occur along the Osa Peninsula relative to the Nicoya Peninsula, mimicking the along-strike variations in

  12. Physics of Earthquake Rupture Propagation

    Science.gov (United States)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  13. Earthquake prediction by Kina Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of humankind's earliest desires. Scientists have worked hard for a long time to predict earthquakes. The results of these efforts can generally be divided into two methods of prediction: 1) the statistical method, and 2) the empirical method. In the first method, earthquakes are predicted using statistics and probabilities, while the second method utilizes a variety of precursors for earthquake prediction. The latter method is time consuming and more costly. However, neither method has yielded fully satisfactory results so far. In this paper a new method, entitled the 'Kiana Method', is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then, the time and the magnitude of a future earthquake are calculated using electrical formulas, in particular those for electrical capacitors. In this method, daily measurements of electrical resistance in an area establish whether or not the area is prone to a future earthquake. If the result is positive, the occurrence time and the magnitude can then be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  14. Precisely locating the Klamath Falls, Oregon, earthquakes

    Science.gov (United States)

    Qamar, A.; Meagher, K.L.

    1993-01-01

    The Klamath Falls earthquakes on September 20, 1993, were the largest earthquakes centered in Oregon in more than 50 years. Only the magnitude 5.75 Milton-Freewater earthquake in 1936, which was centered near the Oregon-Washington border and felt in an area of about 190,000 sq km, compares in size with the recent Klamath Falls earthquakes. Although the 1993 earthquakes surprised many local residents, geologists have long recognized that strong earthquakes may occur along potentially active faults that pass through the Klamath Falls area. These faults are geologically related to similar faults in Oregon, Idaho, and Nevada that occasionally spawn strong earthquakes.

  15. The Alaska earthquake, March 27, 1964: lessons and conclusions

    Science.gov (United States)

    Eckel, Edwin B.

    1970-01-01

    subsidence was superimposed on regional tectonic subsidence to heighten the flooding damage. Ground and surface waters were measurably affected by the earthquake, not only in Alaska but throughout the world. Expectably, local geologic conditions largely controlled the extent of structural damage, whether caused directly by seismic vibrations or by secondary effects such as those just described. Intensity was greatest in areas underlain by thick saturated unconsolidated deposits, least on indurated bedrock or permanently frozen ground, and intermediate on coarse well-drained gravel, on morainal deposits, or on moderately indurated sedimentary rocks. Local and even regional geology also controlled the distribution and extent of the earthquake's effects on hydrologic systems. In the conterminous United States, for example, seiches in wells and bodies of surface water were controlled by geologic structures of regional dimension. Devastating as the earthquake was, it had many long-term beneficial effects. Many of these were socioeconomic or engineering in nature; others were of scientific value. Much new and corroborative basic geologic and hydrologic information was accumulated in the course of the earthquake studies, and many new or improved investigative techniques were developed. Chief among these, perhaps, were the recognition that lakes can be used as giant tiltmeters, the refinement of methods for measuring land-level changes by observing displacements of barnacles and other sessile organisms, and the relating of hydrology to seismology by worldwide study of hydroseisms in surface-water bodies and in wells. The geologic and hydrologic lessons learned from studies of the Alaska earthquake also lead directly to better definition of the research needed to further our understanding of earthquakes and of how to avoid or lessen the effects of future ones. 
Research is needed on the origins and mechanisms of earthquakes, on crustal structure, and on the generation of tsunamis and

  16. Gas and Dust Phenomena of Mega-earthquakes and the Cause

    Science.gov (United States)

    Yue, Z.

    2013-12-01

    A mega-earthquake suddenly releases a large to extremely large amount of kinetic energy within a few tens to two hundred seconds, over distances of tens to hundreds of kilometers in the Earth's crust and on the ground surface. It also generates seismic waves that can be recorded globally, and co-seismic ground damage such as co-seismic ruptures and landslides. However, such vast, dramatic and devastating kinetic actions in the Earth's crustal rocks and ground soils cannot be known or predicted a few weeks, days, hours, or minutes before they happen. Although seismologists can develop and use seismometers to report the locations and magnitudes of earthquakes within minutes of their occurrence, they cannot at present predict earthquakes. Therefore, damaging earthquakes have caused and will continue to cause huge disasters, fatalities and injuries. This problem may indicate that it is necessary to re-examine the cause of mega-earthquakes beyond the conventional explanation of elastic rebound on active faults. In the last ten years, many mega-earthquakes occurred in China and around the Pacific Ocean, causing many casualties and devastating damage to the environment. The author will give a brief review of the impacts of the mega-earthquakes of recent years. He will then present many gas- and dust-related phenomena associated with the sudden occurrences of these mega-earthquakes. They include the 2001 Kunlunshan Earthquake M8.1, the 2008 Wenchuan Earthquake M8.0 and the 2010 Yushu Earthquake M7.1 in China, and the 2010 Haiti Earthquake M7.0, the 2010 Mexicali Earthquake M7.2, the 2010 Chile Earthquake M8.8, the 2011 Christchurch Earthquake M6.3 and the 2011 Japan Earthquake M9.0 around the Pacific Ocean. He will discuss the cause of these gas- and dust-related phenomena. He will use these phenomena and their common cause to show that the earthquakes were caused by the rapid migration and expansion of highly compressed and

  17. Future Developments for the Earthquake Early Warning System following the 2011 off the Pacific Coast of Tohoku Earthquake

    Science.gov (United States)

    Yamada, M.; Mori, J. J.

    2011-12-01

    The 2011 off the Pacific Coast of Tohoku Earthquake (Mw 9.0) caused significant damage over a large area of northeastern Honshu. An earthquake early warning was issued to the public in the Tohoku region about 8 seconds after the first P-arrival, which was 31 seconds after the origin time. There was no 'blind zone', and warnings were received at all locations before S-wave arrivals, since the earthquake was fairly far offshore. Although the early warning message was properly reported in the Tohoku region, which was the most severely affected area, a message was not sent to the more distant Tokyo region because the intensity was underestimated. This underestimation occurred because the magnitude determined in the first few seconds was relatively small (Mj 8.1), and there was no consideration of a finite fault with a long rupture length. Another significant issue is that warnings were sometimes not properly provided for aftershocks. Immediately following the earthquake, the waveforms of some large aftershocks were contaminated by long-period surface waves from the mainshock, which made it difficult to pick P-wave arrivals. Also, correctly distinguishing and locating later aftershocks was sometimes difficult when multiple events occurred within a short period of time. This mainshock began with relatively small moment release for the first 10 s. Since the amplitude of the initial waveforms was small, most methods that use amplitudes and periods of the P-wave (e.g. Wu and Kanamori, 2005) cannot correctly determine the size of the earthquake in the first several seconds. The current JMA system uses the peak displacement amplitude for magnitude estimation, and the magnitude saturated at about M8 one minute after the first P-wave arrival. Magnitudes of smaller earthquakes can be correctly identified from the first few seconds of P- or S-wave arrivals, but this M9 event could not be characterized in such a short time. The only way to correctly characterize the size of the Tohoku
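
    Picking P-wave arrivals on traces contaminated by long-period surface waves from the mainshock is one of the failure modes described above. A common baseline detector for arrival picking (not the actual JMA algorithm, which is more elaborate) is the short-term/long-term average (STA/LTA) ratio; a minimal sketch on synthetic data, with all window lengths and thresholds chosen for illustration:

```python
import numpy as np

def sta_lta_pick(trace, fs, sta_win=0.5, lta_win=5.0, threshold=3.0):
    """Return the first sample index where the STA/LTA energy ratio
    exceeds threshold, or None if no trigger occurs."""
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sq = trace ** 2  # signal energy
    for i in range(lta_n, len(trace) - sta_n):
        lta = sq[i - lta_n:i].mean()   # long-term (background) level
        sta = sq[i:i + sta_n].mean()   # short-term (current) level
        if lta > 0 and sta / lta > threshold:
            return i
    return None

# Synthetic test: 10 s of noise followed by a much stronger arrival.
fs = 100.0
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 3000)
trace[1000:] += rng.normal(0.0, 5.0, 2000)
pick = sta_lta_pick(trace, fs)  # triggers near sample 1000
```

    A detector like this fails exactly as the abstract describes: if the long-term window is filled with large-amplitude surface waves from a prior event, the ratio for a small aftershock's P-wave never exceeds the threshold.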

  18. Twelve Ways to Build CMS Crossings from ROOT Files

    CERN Document Server

    Chamont, D

    2003-01-01

    The simulation of CMS raw data requires the random selection of one hundred and fifty pileup events from a very large set of files, to be superimposed in memory onto the signal event. The use of ROOT I/O for this purpose is quite unusual: the events are not read sequentially but pseudo-randomly, they are not processed one by one in memory but in bunches, and they do not contain orthodox ROOT objects but many foreign objects and templates. In this context, we have compared the performance of ROOT containers versus STL vectors, and the use of trees versus direct storage of containers. The best-performing strategy is by far the one using clones within trees, but it remains hard to tune and very dependent on the exact use case. The use of STL vectors could more easily deliver similar performance in a future ROOT release.
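
    The pseudo-random, by-index access pattern described above can be illustrated independently of ROOT. A minimal sketch of the event selection step (the event count, the sampling-with-replacement choice, and the function name are illustrative assumptions, not the CMS implementation):

```python
import random

def select_pileup(n_events_total: int, n_pileup: int = 150, seed=None) -> list:
    """Pseudo-randomly choose pileup event indices from a large event set.
    Events are then read by index (not sequentially) and overlaid in memory
    on the signal event. Sampling with replacement mirrors independent
    minimum-bias draws, so the same event may be reused."""
    rng = random.Random(seed)
    return [rng.randrange(n_events_total) for _ in range(n_pileup)]

# Example: pick 150 pileup events out of a hypothetical 1M-event store.
indices = select_pileup(1_000_000, seed=42)
```

    The performance problem the abstract studies follows directly from this pattern: scattered index reads defeat the sequential buffering that tree-based I/O is optimized for.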

  19. Investigation of Backprojection Uncertainties With M6 Earthquakes

    Science.gov (United States)

    Fan, Wenyuan; Shearer, Peter M.

    2017-10-01

    We investigate possible biasing effects of inaccurate timing corrections on teleseismic P wave backprojection imaging of large earthquake ruptures. These errors occur because empirically estimated time shifts based on aligning P wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-M7 earthquakes over a 10-year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross correlation of their initial P wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare backprojection images for each earthquake using its own timing corrections with those obtained using the time corrections from other earthquakes. This provides a measure of how well subevents can be resolved with backprojection of a large rupture as a function of distance from the hypocenter. Our results show that backprojection is generally very robust and that the median subevent location error is about 25 km across the entire study region (~700 km). The backprojection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3-D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine backprojection images using aftershock calibration, at least in this region.
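
    The core backprojection operation described above, shifting each station's trace by its predicted travel time plus an empirical hypocenter-based correction and stacking, can be sketched for a single source grid point (the full method sweeps a grid of trial points; all values here are synthetic and illustrative):

```python
import numpy as np

def backproject(traces, fs, travel_times, corrections):
    """Stack station traces at one trial source point after shifting each
    trace by its predicted travel time plus an empirical correction derived
    from aligning P-wave first arrivals at the hypocenter.
    traces: (n_sta, n_samp); travel_times, corrections: seconds per station."""
    n_sta, n_samp = traces.shape
    stack = np.zeros(n_samp)
    for k in range(n_sta):
        shift = int(round((travel_times[k] + corrections[k]) * fs))
        stack += np.roll(traces[k], -shift)  # move predicted arrival to t=0
    return stack / n_sta

# Synthetic check: unit pulses placed at each station's actual arrival time.
fs = 20.0
tt = np.array([10.0, 12.5, 15.0])      # predicted travel times (s)
corr = np.array([0.2, -0.1, 0.0])      # empirical timing corrections (s)
traces = np.zeros((3, 600))
for k in range(3):
    traces[k, int((tt[k] + corr[k]) * fs)] = 1.0
beam = backproject(traces, fs, tt, corr)  # coherent peak at index 0
```

    The paper's point maps onto this sketch directly: `corrections` are exact only for sources at the hypocenter, so for subevents elsewhere on the rupture the shifts are slightly wrong, the pulses misalign, and the beam amplitude and location degrade.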

  20. Antarctic icequakes triggered by the 2010 Maule earthquake in Chile

    Science.gov (United States)

    Peng, Zhigang; Walter, Jacob I.; Aster, Richard C.; Nyblade, Andrew; Wiens, Douglas A.; Anandakrishnan, Sridhar

    2014-09-01

    Seismic waves from distant, large earthquakes can almost instantaneously trigger shallow micro-earthquakes and deep tectonic tremor as they pass through Earth's crust. Such remotely triggered seismic activity mostly occurs in tectonically active regions. Triggered seismicity is generally considered to reflect shear failure on critically stressed fault planes and is thought to be driven by dynamic stress perturbations from both Love and Rayleigh types of surface seismic wave. Here we analyse seismic data from Antarctica in the six hours leading up to and following the 2010 Mw 8.8 Maule earthquake in Chile. We identify many high-frequency seismic signals during the passage of the Rayleigh waves generated by the Maule earthquake, and interpret them as small icequakes triggered by the Rayleigh waves. The source locations of these triggered icequakes are difficult to determine owing to sparse seismic network coverage, but the triggered events generate surface waves, so are probably formed by near-surface sources. Our observations are consistent with tensile fracturing of near-surface ice or other brittle fracture events caused by changes in volumetric strain as the high-amplitude Rayleigh waves passed through. We conclude that cryospheric systems can be sensitive to large distant earthquakes.

  1. Twelve years of cooperation in the field of radiation protection

    Energy Technology Data Exchange (ETDEWEB)

    Grapengiesser, Sten; Bennerstedt, Torkel

    2005-06-01

    SSI has pursued an international cooperation program since 1992 within the field of radiation protection and emergency preparedness for radiation accidents, with the three Baltic countries as the main beneficiaries. As the Baltic countries have been members of the EU since May 2004, this bilateral support will now be phased out and replaced with other forms of cooperation. Over the years, a large number of activities have been launched with a total budget of some 14 million ECU. The Baltic radiation protection authorities have played a major role in the cooperation, and Baltic ministries, universities, nuclear technology installations and other industries using radiation have also been engaged in the projects. SKI, SKB, Studsvik and the Swedish nuclear power plants should be mentioned as major cooperation partners on the Swedish side. In autumn 2004, as this large coordinated work program was coming to an end, SSI decided to hold a seminar to follow up on the experiences from the work and discuss future forms of cooperation. The seminar took place on 18 November 2004 and gathered some 80 participants, 29 of them from the Baltic countries. It was opened by Lars-Erik Holm, the SSI Director General, and the three Baltic countries then presented their views and impressions from the past years of cooperation. The seminar concluded with a panel discussion on 'How to proceed from today's situation'. As a result, SSI issued an invitation to a new coordination meeting in autumn 2005 to follow up and discuss coordination of radiation protection around the Baltic Sea together with the other Nordic radiation protection authorities.

  3. Twelve years of cooperation in the field of radiation protection

    International Nuclear Information System (INIS)

    Grapengiesser, Sten; Bennerstedt, Torkel

    2005-06-01

    SSI has pursued an international cooperation program since 1992 within the field of radiation protection and emergency preparedness for radiation accidents, with the three Baltic countries as the main beneficiaries. As the Baltic countries have been members of the EU since May 2004, this bilateral support will now be phased out and replaced with other forms of cooperation. Over the years, a large number of activities have been launched with a total budget of some 14 million ECU. The Baltic radiation protection authorities have played a major role in the cooperation, and Baltic ministries, universities, nuclear technology installations and other industries using radiation have also been engaged in the projects. SKI, SKB, Studsvik and the Swedish nuclear power plants should be mentioned as major cooperation partners on the Swedish side. In autumn 2004, as this large coordinated work program was coming to an end, SSI decided to hold a seminar to follow up on the experiences from the work and discuss future forms of cooperation. The seminar took place on 18 November 2004 and gathered some 80 participants, 29 of them from the Baltic countries. It was opened by Lars-Erik Holm, the SSI Director General, and the three Baltic countries then presented their views and impressions from the past years of cooperation. The seminar concluded with a panel discussion on 'How to proceed from today's situation'. As a result, SSI issued an invitation to a new coordination meeting in autumn 2005 to follow up and discuss coordination of radiation protection around the Baltic Sea together with the other Nordic radiation protection authorities.

  4. A new way of telling earthquake stories: MOBEE - the MOBile Earthquake Exhibition

    Science.gov (United States)

    Tataru, Dragos; Toma-Danila, Dragos; Nastase, Eduard

    2016-04-01

    In recent decades, the demand for and acknowledged importance of science outreach, in general and in geophysics in particular, has grown, as demonstrated by many international and national projects and other activities performed by research institutes. The National Institute for Earth Physics (NIEP) in Romania is the leading national institution for earthquake monitoring and research, with a declared focus on informing and educating a wide audience about geosciences and especially seismology. This is more than welcome, since Romania is a very active country from a seismological point of view, but not very proactive when it comes to diminishing the possible effects of a major earthquake. Over the last few decades, the country has experienced several major earthquakes which have claimed thousands of lives and caused millions in property damage (the 1940, 1977, 1986 and 1990 Vrancea earthquakes). In this context, through a partnership started in 2014 with the National Art University and the Siveco IT company, a group of researchers from NIEP initiated the MOBile Earthquake Exhibition (MOBEE) project. The main goal was to design a portable museum to take educational activities focused on seismology, seismic hazard and Earth science on the road. The exhibition is mainly aimed at school students of all ages, as it explains the main topics of geophysics through a unique combination of posters, digital animations and apps, large markets and exciting hands-on experiments, and 3D printed models. This project is singular in Romania and aims to transmit properly reviewed, current information regarding the definition of earthquakes, the ways natural hazards can affect people, buildings and the environment, and the measures to be taken to mitigate an aftermath. Many of the presented concepts can be used by teachers as a complementary way of demonstrating physics facts and concepts and explaining the processes that shape the Earth's dynamic features. It also involves

  5. The Pocatello Valley, Idaho, earthquake