WorldWideScience

Sample records for earthquake recurrence models

  1. New geological perspectives on earthquake recurrence models

    International Nuclear Information System (INIS)

    Schwartz, D.P.

    1997-01-01

In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured even once during the historical period, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of timing and size of past events, provide a window into the driving mechanism of the earthquake engine: the cycle of stress build-up and release.

  2. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  3. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    Science.gov (United States)

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, once more calling into question the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample: it released a significant portion of the stress along its fault segment and yielded a substantial change in b-values.

  4. Geological and historical evidence of irregular recurrent earthquakes in Japan.

    Science.gov (United States)

    Satake, Kenji

    2015-10-28

    Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres releasing strains accumulated from decades to centuries of plate motions. Assuming a simple 'characteristic earthquake' model that similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes including geological traces from giant (M∼9) earthquakes indicate a variety of size and recurrence interval of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that average recurrence interval of great earthquakes is approximately 100 years, but the tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence interval of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts such as use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone are being conducted by government committees. © 2015 The Author(s).

  5. Earthquake Recurrence and the Resolution Potential of Tectono‐Geomorphic Records

    KAUST Repository

    Zielke, Olaf

    2018-04-17

    A long‐standing debate in active tectonics addresses how slip is accumulated through space and time along a given fault or fault section. This debate is in part still ongoing because of the lack of sufficiently long instrumental data that may constrain the recurrence characteristics of surface‐rupturing earthquakes along individual faults. Geomorphic and stratigraphic records are used instead to constrain this behavior. Although geomorphic data frequently indicate slip accumulation via quasicharacteristic same‐size offset increments, stratigraphic data indicate that earthquake timing observes a quasirandom distribution. Assuming that both observations are valid within their respective frameworks, I want to address here which recurrence model is able to reproduce this seemingly contradictory behavior. I further want to address how aleatory offset variability and epistemic measurement uncertainty affect our ability to resolve single‐earthquake surface slip and along‐fault slip‐accumulation patterns. I use a statistical model that samples probability density functions (PDFs) for geomorphic marker formation (storm events), marker displacement (surface‐rupturing earthquakes), and offset measurement, generating tectono‐geomorphic catalogs to investigate which PDF combination consistently reproduces the above‐mentioned field observations. Doing so, I find that neither a purely characteristic earthquake (CE) nor a Gutenberg–Richter (GR) earthquake recurrence model is able to consistently reproduce those field observations. A combination of both however, with moderate‐size earthquakes following the GR model and large earthquakes following the CE model, is able to reproduce quasirandom earthquake recurrence times while simultaneously generating quasicharacteristic geomorphic offset increments. Along‐fault slip accumulation is dominated by, but not exclusively linked to, the occurrence of similar‐size large earthquakes. Further, the resolution

  6. Tsunamigenic earthquakes in the Gulf of Cadiz: fault model and recurrence

    Directory of Open Access Journals (Sweden)

    L. M. Matias

    2013-01-01

The Gulf of Cadiz, as part of the Azores-Gibraltar plate boundary, is recognized as a potential source of large earthquakes and tsunamis that may affect the bordering countries, as occurred on 1 November 1755. Preparing for the future, Portugal is establishing a national tsunami warning system in which the threat posed by any large-magnitude earthquake in the area is estimated from a comprehensive database of scenarios. In this paper we summarize the knowledge about active tectonics in the Gulf of Cadiz and integrate the available seismological information in order to propose a generation model of destructive tsunamis to be applied in tsunami warnings. The fault model derived is then used to estimate the recurrence of large earthquakes using the fault slip rates obtained by Cunha et al. (2012) from thin-sheet neotectonic modelling. Finally we evaluate the consistency of seismicity rates derived from historical and instrumental catalogues with the convergence rates between Eurasia and Nubia given by plate kinematic models.

  7. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    Science.gov (United States)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000-year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporates clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia.
In some portions of the simulated earthquake history, events would

  8. Information Theoric Framework for the Earthquake Recurrence Models : Methodica Firma Per Terra Non-Firma

    International Nuclear Information System (INIS)

    Esmer, Oezcan

    2006-01-01

This paper first evaluates the earthquake prediction method (1999) used by the US Geological Survey as the lead example and also reviews recent models. Secondly, it points out the ongoing debate on the predictability of earthquake recurrences and lists the main claims of both sides. The traditional methods and the 'frequentist' approach used in determining earthquake probabilities cannot end the complaints that earthquakes are unpredictable. It is argued that the prevailing 'crisis' in seismic research corresponds to the pre-Maxent age of the current situation. The period of Kuhnian 'crisis' should give rise to a new paradigm based on the information-theoretic framework, including the inverse problem, Maxent, and Bayesian methods. The paper aims to show that information-theoretic methods will provide the required 'Methodica Firma' for earthquake prediction models

  10. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    Directory of Open Access Journals (Sweden)

    Junjie Ren

    2013-01-01

The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably hosts events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yr is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region.

  11. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    Science.gov (United States)

    Ren, Junjie; Zhang, Shimin

    2013-01-01

The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably hosts events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yr is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region.
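The moment-balance arithmetic behind this estimate can be checked with a back-of-envelope sketch (not the authors' full GPS/InSAR-constrained model): convert the moment magnitude to seismic moment via the standard Hanks-Kanamori relation, then divide by the reported accumulation rate. The generic conversion lands in the same few-thousand-year range; the paper's 3900 ± 400 yr figure rests on the event's modeled coseismic moment rather than this textbook formula.

```python
def seismic_moment(mw):
    """Hanks-Kanamori relation: moment magnitude -> seismic moment (N m)."""
    return 10 ** (1.5 * mw + 9.1)

def recurrence_interval(mw, moment_rate):
    """Years needed to accumulate the moment of an Mw event
    at a constant moment accumulation rate (N m/yr)."""
    return seismic_moment(mw) / moment_rate

m0 = seismic_moment(7.9)               # ~8.9e20 N m
t = recurrence_interval(7.9, 2.7e17)   # ~3.3e3 yr at 2.7e17 N m/yr
```

The ±400 yr spread in the abstract follows the same logic applied to the (2.7 ± 0.3) × 10¹⁷ N m/yr bounds on the moment rate.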

  12. Chilean megathrust earthquake recurrence linked to frictional contrast at depth

    Science.gov (United States)

    Moreno, M.; Li, S.; Melnick, D.; Bedford, J. R.; Baez, J. C.; Motagh, M.; Metzger, S.; Vajedian, S.; Sippl, C.; Gutknecht, B. D.; Contreras-Reyes, E.; Deng, Z.; Tassara, A.; Oncken, O.

    2018-04-01

Fundamental processes of the seismic cycle in subduction zones, including those controlling the recurrence and size of great earthquakes, are still poorly understood. Here, by studying the 2016 earthquake in southern Chile—the first large event within the rupture zone of the 1960 earthquake (moment magnitude (Mw) = 9.5)—we show that the frictional zonation of the plate interface fault at depth mechanically controls the timing of more frequent, moderate-size deep events (Mw < 8) and less frequent great shallow earthquakes (Mw > 8.5). We model the evolution of stress build-up for a seismogenic zone with heterogeneous friction to examine the link between the 2016 and 1960 earthquakes. Our results suggest that the deeper segments of the seismogenic megathrust are weaker and interseismically loaded by a more strongly coupled, shallower asperity. Deeper segments fail earlier (~60 yr recurrence), producing moderate-size events that precede the failure of the shallower region, which fails in a great earthquake (recurrence >110 yr). We interpret the contrasting frictional strength and lag time between deeper and shallower earthquakes to be controlled by variations in pore fluid pressure. Our integrated analysis strengthens understanding of the mechanics and timing of great megathrust earthquakes, and therefore could aid in the seismic hazard assessment of other subduction zones.

  13. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    Science.gov (United States)

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
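A standard summary statistic for the periodic/random/clustered distinction drawn in this abstract is the coefficient of variation (aperiodicity) of the recurrence intervals: values well below 1 indicate quasi-periodic behavior, values near 1 are consistent with a Poisson process, and values above 1 suggest clustering. A minimal sketch, using hypothetical intervals rather than the Wrightwood record itself:

```python
import statistics

def coefficient_of_variation(intervals):
    """Aperiodicity of recurrence intervals: std/mean.
    <1 quasi-periodic, ~1 Poisson-like, >1 clustered."""
    return statistics.pstdev(intervals) / statistics.mean(intervals)

# Hypothetical recurrence intervals in years (illustrative only).
intervals = [80, 95, 110, 120, 90, 105]
cov = coefficient_of_variation(intervals)   # ~0.13: strongly quasi-periodic
```

Formal tests like those used in the study go further (e.g., comparing against the exponential distribution implied by a Poisson process), but the COV captures the basic contrast.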

  14. Silica precipitation potentially controls earthquake recurrence in seismogenic zones.

    Science.gov (United States)

    Saishu, Hanae; Okamoto, Atsushi; Otsubo, Makoto

    2017-10-17

Silica precipitation is assumed to play a significant role in post-earthquake recovery of the mechanical and hydrological properties of seismogenic zones. However, the relationship between the widespread quartz veins around seismogenic zones and earthquake recurrence is poorly understood. Here we propose a novel model of quartz vein formation associated with fluid advection from host rocks and silica precipitation in a crack, in order to quantify the timescale of crack sealing. When applied to sets of extensional quartz veins around the Nobeoka Thrust of SW Japan, an ancient seismogenic splay fault, our model indicates that a fluid pressure drop of 10-25 MPa facilitates the formation of typical extensional quartz veins over a period of 6.6 × 10⁰ to 5.6 × 10¹ years, and that 89-100% of porosity is recovered within ~3 × 10² years. The former and latter sealing timescales correspond to the extensional stress period (~3 × 10¹ years) and the recurrence interval of megaearthquakes in the Nankai Trough (~3 × 10² years), respectively. We therefore suggest that silica precipitation in the accretionary wedge controls the recurrence interval of large earthquakes in subduction zones.

  15. Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas fault.

    Science.gov (United States)

    Shelly, David R

    2010-06-11

    Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between approximately 3 and approximately 6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes.

  16. Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas Fault

    Science.gov (United States)

    Shelly, David R.

    2010-01-01

    Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between ~3 and ~6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes.

  17. A minimalist model of characteristic earthquakes

    DEFF Research Database (Denmark)

    Vázquez-Prada, M.; González, Á.; Gómez, J.B.

    2002-01-01

In a spirit akin to the sandpile model of self-organized criticality, we present a simple statistical model of the cellular-automaton type which simulates the role of an asperity in the dynamics of a one-dimensional fault. This model produces an earthquake spectrum similar to the characteristic-earthquake behaviour of some seismic faults. This model, which has no parameter, is amenable to an algebraic description as a Markov chain. This possibility illuminates some important results, obtained by Monte Carlo simulations, such as the earthquake size-frequency relation and the recurrence time of the characteristic earthquake.

  18. Recurrent slow slip event likely hastened by the 2011 Tohoku earthquake

    Science.gov (United States)

    Hirose, Hitoshi; Kimura, Hisanori; Enescu, Bogdan; Aoi, Shin

    2012-01-01

Slow slip events (SSEs) are a mode of fault deformation distinct from the fast faulting of regular earthquakes. Such transient episodes have been observed at plate boundaries in a number of subduction zones around the globe. The SSEs near the Boso Peninsula, central Japan, are among the most documented SSEs, with the longest repeating history, of almost 30 y, and have a recurrence interval of 5 to 7 y. A remarkable characteristic of the slow slip episodes is the accompanying earthquake swarm activity. Our stable, long-term seismic observations enable us to detect SSEs using the recorded earthquake catalog, by considering an earthquake swarm as a proxy for a slow slip episode. Six recurrent episodes are identified in this way since 1982. The average duration of the SSE interoccurrence interval is 68 mo; however, there are significant fluctuations from this mean. While a regular cycle can be explained using a simple physical model, the mechanisms that are responsible for the observed fluctuations are poorly known. Here we show that the latest SSE in the Boso Peninsula was likely hastened by the stress transfer from the March 11, 2011 great Tohoku earthquake. Moreover, a similar mechanism accounts for the delay of an SSE in 1990 by a nearby earthquake. The low stress buildups and drops during the SSE cycle can explain the strong sensitivity of these SSEs to stress transfer from external sources. PMID:22949688

  19. Seismic Regionalization of Michoacan, Mexico and Recurrence Periods for Earthquakes

    Science.gov (United States)

    Magaña García, N.; Figueroa-Soto, Á.; Garduño-Monroy, V. H.; Zúñiga, R.

    2017-12-01

Michoacán is one of the states with the highest occurrence of earthquakes in Mexico; it lies on a convergent margin where the Cocos plate subducts beneath the North American plate along the Pacific coast, and active faults such as the Morelia-Acambay Fault System (MAFS) also exist within the state. Combining seismic, paleoseismological and geological studies is important for sound planning and development of urban complexes and for mitigating disasters from destructive earthquakes. With statistical seismology it is possible to characterize the degree of seismic activity as well as to estimate recurrence periods for earthquakes. For this work, a seismicity catalog of Michoacán was compiled and homogenized in time and magnitude. This information was obtained from global and national agencies (SSN, CMT, etc.), from data published by Mendoza and Martínez-López (2016), and from the seismic catalog homogenized by F. R. Zúñiga (personal communication). From the analysis of the different focal mechanisms reported in the literature and from geological studies, the seismic regionalization of the state of Michoacán complements that presented by Vázquez-Rosas (2012), together with recurrence periods for earthquakes within four seismotectonic regions. In addition, stable estimates of the Gutenberg-Richter (1944) b-value were determined using the Maximum Curvature (MAXC) and EMR (Entire Magnitude Range, 2005) techniques, which allowed recurrence periods to be estimated with both techniques for earthquakes above M 7.5 in the subduction zone (zone A), above M 5 in zone B1, above M 7.0 in zone B2, and for the Morelia-Acambay Fault System zone (zone C).
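The b-value underlying such recurrence estimates is commonly computed with Aki's (1965) maximum-likelihood estimator, with Utsu's correction for binned magnitudes; the MAXC technique mentioned above essentially picks the completeness magnitude Mc at the peak of the frequency-magnitude distribution before applying such an estimator. A minimal sketch, with an invented magnitude list for illustration:

```python
import math

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= mc,
    with Utsu's correction (dm/2) for magnitude binning width dm."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2))

# Synthetic, unbinned magnitudes whose mean exceeds Mc by
# log10(e) ~ 0.4343, the mean excess expected when b = 1.
mags = [2.0 + 0.4343 * k / 10 for k in range(21)]  # mean = 2.4343
b = b_value_mle(mags, mc=2.0, dm=0.0)              # ~1.0
```

The EMR technique fits the full frequency-magnitude distribution, including the incompletely recorded range below Mc, but reduces to a similar maximum-likelihood b-value above the completeness threshold.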

  20. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    Science.gov (United States)

    Zoeller, G.

    2017-12-01

Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and the problem of missing or misinterpreted events leads to additional difficulties. Taking these shortcomings into account, long-term recurrence intervals are usually unstable as long as no additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.

  1. Preliminary Results on Earthquake Recurrence Intervals, Rupture Segmentation, and Potential Earthquake Moment Magnitudes along the Tahoe-Sierra Frontal Fault Zone, Lake Tahoe, California

    Science.gov (United States)

    Howle, J.; Bawden, G. W.; Schweickert, R. A.; Hunter, L. E.; Rose, R.

    2012-12-01

Utilizing high-resolution bare-earth LiDAR topography, field observations, and earlier results of Howle et al. (2012), we estimate latest Pleistocene/Holocene earthquake-recurrence intervals, propose scenarios for earthquake-rupture segmentation, and estimate potential earthquake moment magnitudes for the Tahoe-Sierra frontal fault zone (TSFFZ), west of Lake Tahoe, California. We have developed a new technique to estimate the vertical separation for the most recent and the previous ground-rupturing earthquakes at five sites along the Echo Peak and Mt. Tallac segments of the TSFFZ. These sites have fault scarps with two bevels separated by an inflection point (compound fault scarps), indicating that the cumulative vertical separation (VS) across the scarp resulted from two events. This technique, modified from the modeling methods of Howle et al. (2012), uses the far-field plunge of the best-fit footwall vector and the fault-scarp morphology from high-resolution LiDAR profiles to estimate the per-event VS. From these data, we conclude that the adjacent and overlapping Echo Peak and Mt. Tallac segments have ruptured coseismically twice during the Holocene. The right-stepping, en echelon range-front segments of the TSFFZ show progressively greater VS rates and shorter earthquake-recurrence intervals from southeast to northwest. Our preliminary estimates suggest latest Pleistocene/Holocene earthquake-recurrence intervals of 4.8 ± 0.9 × 10³ years for a coseismic rupture of the Echo Peak and Mt. Tallac segments, located at the southeastern end of the TSFFZ. For the Rubicon Peak segment, northwest of the Echo Peak and Mt. Tallac segments, our preliminary estimate of the maximum earthquake-recurrence interval is 2.8 ± 1.0 × 10³ years, based on data from two sites. The correspondence between high VS rates and short recurrence intervals suggests that earthquake sequences along the TSFFZ may initiate in the northwest part of the zone and then occur to the southeast with a lower

  2. Wrightwood and the earthquake cycle: What a long recurrence record tells us about how faults work

    Science.gov (United States)

    Weldon, R.; Scharer, K.; Fumal, T.; Biasi, G.

    2004-01-01

    The concept of the earthquake cycle is so well established that one often hears statements in the popular media like, "the Big One is overdue" and "the longer it waits, the bigger it will be." Surprisingly, data to critically test the variability in recurrence intervals, rupture displacements, and relationships between the two are almost nonexistent. To generate a long series of earthquake intervals and offsets, we have conducted paleoseismic investigations across the San Andreas fault near the town of Wrightwood, California, excavating 45 trenches over 18 years, and can now provide some answers to basic questions about recurrence behavior of large earthquakes. To date, we have characterized at least 30 prehistoric earthquakes in a 6000-yr-long record, complete for the past 1500 yr and for the interval 3000-1500 B.C. For the past 1500 yr, the mean recurrence interval is 105 yr (31-165 yr for individual intervals) and the mean slip is 3.2 m (0.7-7 m per event). The series is slightly more ordered than random and has a notable cluster of events, during which strain was released at 3 times the long-term average rate. Slip associated with an earthquake is not well predicted by the interval preceding it, and only the largest two earthquakes appear to affect the time interval to the next earthquake. Generally, short intervals tend to coincide with large displacements and long intervals with small displacements. The most significant correlation we find is that earthquakes are more frequent following periods of net strain accumulation spanning multiple seismic cycles. The extent of paleoearthquake ruptures may be inferred by correlating event ages between different sites along the San Andreas fault. Wrightwood and other nearby sites experience rupture that could be attributed to overlap of relatively independent segments that each behave in a more regular manner. However, the data are equally consistent with a model in which the irregular behavior seen at Wrightwood
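    The recurrence statistics summarized above (mean interval, range, and degree of ordering) can be reproduced from any dated event series. A minimal sketch, using hypothetical event ages rather than the published Wrightwood chronology:

```python
import numpy as np

# Hypothetical event ages in years A.D. -- illustrative only,
# NOT the actual Wrightwood event chronology.
ages = np.array([534, 634, 697, 722, 781, 850, 1016, 1116,
                 1263, 1360, 1487, 1536, 1685, 1812, 1857])

intervals = np.diff(ages)              # inter-event times, years
mean_ri = intervals.mean()             # mean recurrence interval
cv = intervals.std(ddof=1) / mean_ri   # coefficient of variation:
                                       # ~0 periodic, ~1 Poisson-like, >1 clustered

print(f"mean recurrence: {mean_ri:.0f} yr, CV: {cv:.2f}")
```

    A CV below 1 indicates a series more ordered than a Poisson process, which is the sense in which a record can be "slightly more ordered than random."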

  3. Earthquake correlations and networks: A comparative study

    International Nuclear Information System (INIS)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-01-01

    We quantify the correlation between earthquakes and use it to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.

  4. ON STRUCTURED AND DIFFUSE SEISMICITY, STIFFNESS OF EARTHQUAKE FOCI, AND NONLINEARITY OF MAGNITUDE RECURRENCE GRAPHS

    Directory of Open Access Journals (Sweden)

    Evgeny G. Bugaev

    2011-01-01

    Geological, geophysical and seismogeological studies are now conducted in more detail and thus provide for determining seismic sources with higher accuracy, from the first meters to the first dozens of meters [Waldhauser, Schaff, 2008]. It is now possible to consider the uncertainty ellipses of earthquake hypocenters, as recorded in the updated Earthquake Catalogue, as the surfaces of earthquake focus generators. In our article, it is accepted that the maximum horizontal size of an uncertainty ellipse corresponds to the area of a focus generator, and seismic events are thus classified into two groups: earthquakes with nonstiff and stiff foci. The criteria of such a classification are the two limits of elastic and brittle strain in the case of uniaxial (3·10⁻⁵) or omnidirectional (10⁻⁶) compression. The criteria are established from analyses of the parameters of seismic dislocations and earthquake foci with regard to studies of the surface and deformation parameters of fault zones. It is recommended that the uniaxial compression criterion be applied to zones of interaction between tectonic plates, and the omnidirectional compression criterion to less active (intraplate) areas. Sample cases demonstrate the use of data sets on nonstiff and stiff foci for the separate evaluation of magnitude recurrence curves, analyses of structured and diffuse seismicity, review of the physical nature of the nonlinearity of recurrence curves, and the conditions of preparation of strong earthquakes. Changes of the parameters of the recurrence curves with changes of the data-collection areas are considered. Also reviewed are the changes of the recurrence curve parameters prior to and after the main shock of the major Japan earthquake of 11 March 2011. It is emphasized that it is important to conduct even more detailed geological and geophysical studies and to improve the precision and sensitivity of local seismological monitoring networks.

  5. Prevent recurrence of nuclear disaster (3). Agenda on nuclear safety from earthquake engineering

    International Nuclear Information System (INIS)

    Kameda, Hiroyuki; Takada, Tsuyoshi; Ebisawa, Katsumi; Nakamura, Susumu

    2012-01-01

    Based on the activities of the committee on seismic safety of nuclear power plants (NPPs) of the Japan Association for Earthquake Engineering, which began its work after the Chuetsu-oki earthquake and subsequently experienced the Great East Japan Earthquake (in close collaboration with a committee of the Atomic Energy Society of Japan that started its activities simultaneously), and taking account of the further development of these concepts, an agenda for nuclear safety was proposed from the standpoint of earthquake engineering. To prevent the recurrence of nuclear disaster, the individual technical issues of earthquake engineering, together with the comprehensive issues of integration technology, multidisciplinary collaboration, and the establishment of technology governance built on them, are of prime importance. This article describes the important problems to be solved: (1) technical issues and the mission of seismic safety of NPPs; (2) decision making based on risk assessment, the basis of technology governance; (3) the framework of risk, design and regulation, i.e. the framework of the required technology governance; (4) technical issues of earthquake engineering for nuclear safety; (5) the role of earthquake engineering in nuclear risk communication; and (6) the importance of multidisciplinary collaboration. The responsibility of engineering lies in establishing technology governance, cultivating individual and integration technologies, and communicating with society. (T. Tanaka)

  6. Earthquake correlations and networks: A comparative study

    Science.gov (United States)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-04-01

    We quantify the correlation between earthquakes and use it to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.

  7. Major earthquakes occur regularly on an isolated plate boundary fault.

    Science.gov (United States)

    Berryman, Kelvin R; Cochran, Ursula A; Clark, Kate J; Biasi, Glenn P; Langridge, Robert M; Villamor, Pilar

    2012-06-29

    The scarcity of long geological records of major earthquakes, on different types of faults, makes testing hypotheses of regular versus random or clustered earthquake recurrence behavior difficult. We provide a fault-proximal major earthquake record spanning 8000 years on the strike-slip Alpine Fault in New Zealand. Cyclic stratigraphy at Hokuri Creek suggests that the fault ruptured to the surface 24 times, and event ages yield a 0.33 coefficient of variation in recurrence interval. We associate this near-regular earthquake recurrence with a geometrically simple strike-slip fault, with high slip rate, accommodating a high proportion of plate boundary motion that works in isolation from other faults. We propose that it is valid to apply time-dependent earthquake recurrence models for seismic hazard estimation to similar faults worldwide.
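    The closing claim, that time-dependent recurrence models are applicable to such quasi-periodic faults, can be illustrated with a Brownian passage-time (inverse Gaussian) renewal model. A sketch under assumed numbers: a mean interval of 330 years and the reported coefficient of variation of 0.33; the elapsed times and forecast horizon are hypothetical.

```python
from scipy.stats import invgauss

mean_ri = 330.0   # assumed mean recurrence interval, years
alpha = 0.33      # aperiodicity (coefficient of variation) from the record

# scipy's invgauss has CV = sqrt(mu); choose mu and scale so the
# distribution mean equals mean_ri and the CV equals alpha.
mu = alpha ** 2
scale = mean_ri / mu

def cond_prob(elapsed, horizon=50.0):
    """P(event within `horizon` yr | no event in the last `elapsed` yr)."""
    F = lambda x: invgauss.cdf(x, mu, scale=scale)
    return (F(elapsed + horizon) - F(elapsed)) / (1.0 - F(elapsed))

# Conditional hazard grows as the elapsed time approaches the mean interval:
print(cond_prob(100.0), cond_prob(300.0))
```

    This time-dependence is exactly what a Poisson (memoryless) model cannot express, which is why quasi-periodic records like this one matter for hazard estimation.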

  8. Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan

    Science.gov (United States)

    Nomura, S.; Ogata, Y.

    2015-12-01

    Renewal processes, point processes in which the intervals between consecutive events are independently and identically distributed, are frequently used to describe the repeating earthquake mechanism and to forecast the next earthquakes. However, one of the difficulties in applying recurrent earthquake models is the scarcity of historical data. Most studied fault segments have few, or only one, observed earthquakes, which often have poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, which tends to yield a high probability for the next event within a very short time span when the recurrence intervals have similar lengths. On the other hand, the recurrence intervals at a fault depend, on average, on the long-term slip rate driven by tectonic motion. In addition, recurrence times fluctuate because of nearby earthquakes or fault activities that encourage or discourage surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. This paper therefore introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it using spatial statistics. Spatial variations of the mean and variance parameters of recurrence times are estimated in a Bayesian framework, and the next earthquakes are forecasted by Bayesian predictive distributions. The proposed model is applied to the catalog of recurrent earthquakes in Japan, and its results are compared with the current forecast adopted by the Earthquake Research Committee of Japan.

  9. Geological evidence of recurrent great Kanto earthquakes at the Miura Peninsula, Japan

    Science.gov (United States)

    Shimazaki, K.; Kim, H. Y.; Chiba, T.; Satake, K.

    2011-12-01

    The Tokyo metropolitan area's well-documented earthquake history is dominated by the 1703 and 1923 great Kanto earthquakes produced by slip on the boundary between the subducting Philippine Sea plate and the overlying plate. Both earthquakes caused ˜1.5 m of uplift at the Miura Peninsula directly above the inferred fault rupture, and both were followed by tsunamis with heights of ˜5 m. We examined cores ˜2 m long from 8 tidal flat sites at the head of a small bay on the peninsula. The cores penetrated two to four layers of shelly gravel, as much as 0.5 m thick, with abundant shell fragments and mud clasts. The presence of gravel indicates strong tractive currents. Muddy bay deposits that bound the gravel layers show vertical changes in grain size and diatom assemblages consistent with abrupt shoaling at the times of the currents. The changes may further suggest gradual deepening of the bay during the intervals between the strong currents. We infer, based on 137Cs, 14C, and 210Pb dating, that the top two shelly gravel layers represent tsunamis associated with the 1703 and 1923 great Kanto earthquakes, and that the third layer was deposited by a tsunami during an earlier earthquake. The age range of this layer, AD 1060-1400, includes the time of an earthquake that occurred in 1293 according to a historical document. If so, the recurrence interval before the 1703 earthquake was almost twice as long as the interval between the 1703 and 1923 earthquakes.

  10. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WG02). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined in the WG02 model. The study considers the effect of relaxing certain assumptions in the WG02 model and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions.

  11. Salient Features of the 2015 Gorkha, Nepal Earthquake in Relation to Earthquake Cycle and Dynamic Rupture Models

    Science.gov (United States)

    Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.

    2015-12-01

    Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of the earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined to a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment with intermediate rheology in between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of concentrated stress loading rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed by deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such a close distance to the rupture, as attested by strong motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity

  12. The Hanford Site's Gable Mountain structure: A comparison of the recurrence of design earthquakes based on fault slip rates and a probabilistic exposure model

    International Nuclear Information System (INIS)

    Rohay, A.C.

    1991-01-01

    Gable Mountain is a segment of the Umtanum Ridge-Gable Mountain structural trend, an east-west trending series of anticlines, one of the major geologic structures on the Hanford Site. A probabilistic seismic exposure model indicates that Gable Mountain and two adjacent segments contribute significantly to the seismic hazard at the Hanford Site. Geologic measurements of the uplift of initially horizontal (11-12 Ma) basalt flows indicate that a broad, continuous, primary anticline grew at an average rate of 0.009-0.011 mm/a, and narrow, segmented, secondary anticlines grew at rates of 0.009 mm/a at Gable Butte and 0.018 mm/a at Gable Mountain. The buried Southeast Anticline appears to have a different geometry, consisting of a single, intermediate-width anticline with an estimated growth rate of 0.007 mm/a. The recurrence rate and maximum magnitude of earthquakes for the fault models were used to estimate the fault slip rate for each of the fault models and to determine the implied structural growth rate of the segments. The current model for Gable Mountain-Gable Butte predicts 0.004 mm/a of vertical uplift due to primary faulting and 0.008 mm/a due to secondary faulting. These rates are roughly half the structurally estimated rates for Gable Mountain, but the model does not account for the smaller secondary fold at Gable Butte. The model predicted an uplift rate for the Southeast Anticline of 0.006 mm/a, caused by the low "fault capability" weighting rather than a different fault geometry. The effects of previous modifications to the fault models are examined and potential future modifications are suggested. For example, the earthquake recurrence relationship used in the current exposure model has a b-value of 1.15, compared to a previous value of 0.85. This increases the implied deformation rates due to secondary fault models, and therefore supports the application of this regionally determined b-value to this fault/fold system.
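    The sensitivity to the b-value noted at the end can be sketched with the Gutenberg-Richter relation, log10 N(≥M) = a − bM. With the a-value held fixed, raising b from 0.85 to 1.15 sharply lowers the rate of the largest events, so a recurrence model must compensate elsewhere. The a-value and magnitude below are hypothetical placeholders, not the Hanford model's parameters:

```python
# Gutenberg-Richter recurrence: log10 N(>=M) = a - b*M
a = 3.0  # hypothetical activity level (log10 of annual rate of M >= 0 events)

def annual_rate(mag, a, b):
    """Annual rate of events with magnitude >= mag."""
    return 10.0 ** (a - b * mag)

for b in (0.85, 1.15):
    interval = 1.0 / annual_rate(6.0, a, b)
    print(f"b = {b}: M>=6 recurrence interval = {interval:,.0f} yr")
```

    With these illustrative numbers the M≥6 interval grows from roughly 126 yr at b = 0.85 to roughly 7,900 yr at b = 1.15, a factor of ~63, showing why the choice of b-value dominates the implied deformation rates.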

  13. Dating Informed Correlations and Large Earthquake Recurrence at the Hokuri Creek Paleoseismic Site, Alpine Fault, South Island, New Zealand

    Science.gov (United States)

    Biasi, G. P.; Clark, K.; Berryman, K. R.; Cochran, U. A.; Prior, C.

    2010-12-01

    -correlate sections at the site. Within a series of dates from a section, ordering with the intrinsic precision of the dates indicates an uncertainty at event horizons on the order of 50 years, while the transitions from peat to silt indicating an earthquake are separated by several times this amount. The effect is to create a stair-stepping date sequence that often allows us to link sections and improve dating resolution in both sections. The combined section provides clear evidence for at least 18 earthquake-induced cycles; a simple average gives an event recurrence of about 390 years. Internal evidence and close examination of the date sequences provide preliminary indications that as many as 22 earthquakes could be represented at Hokuri Creek, for a recurrence interval of ~320 years. Both sequences indicate a middle interval, from 3800 to 1000 B.C., in which recurrence intervals are resolvably longer than average. Variability in recurrence is relatively small; few intervals exceed 1.5x the average. This indicates that large earthquakes on the Alpine Fault of South Island, New Zealand are best fit by a time-predictable model.

  14. Reading a 400,000-year record of earthquake frequency for an intraplate fault.

    Science.gov (United States)

    Williams, Randolph T; Goodwin, Laurel B; Sharp, Warren D; Mozley, Peter S

    2017-05-09

    Our understanding of the frequency of large earthquakes at timescales longer than instrumental and historical records is based mostly on paleoseismic studies of fast-moving plate-boundary faults. Similar study of intraplate faults has been limited until now, because intraplate earthquake recurrence intervals are generally long (10s to 100s of thousands of years) relative to conventional paleoseismic records determined by trenching. Long-term variations in the earthquake recurrence intervals of intraplate faults therefore are poorly understood. Longer paleoseismic records for intraplate faults are required both to better quantify their earthquake recurrence intervals and to test competing models of earthquake frequency (e.g., time-dependent, time-independent, and clustered). We present the results of U-Th dating of calcite veins in the Loma Blanca normal fault zone, Rio Grande rift, New Mexico, United States, that constrain earthquake recurrence intervals over much of the past ∼550 ka, the longest direct record of seismic frequency documented for any fault to date. The 13 distinct seismic events delineated by this effort demonstrate that for >400 ka, the Loma Blanca fault produced periodic large earthquakes, consistent with a time-dependent model of earthquake recurrence. However, this time-dependent series was interrupted by a cluster of earthquakes at ∼430 ka. The carbon isotope composition of calcite formed during this seismic cluster records rapid degassing of CO2, suggesting an interval of anomalous fluid source. In concert with U-Th dates recording decreased recurrence intervals, we infer seismicity during this interval records fault-valve behavior. These data provide insight into the long-term seismic behavior of the Loma Blanca fault and, by inference, other intraplate faults.

  15. Reading a 400,000-year record of earthquake frequency for an intraplate fault

    Science.gov (United States)

    Williams, Randolph T.; Goodwin, Laurel B.; Sharp, Warren D.; Mozley, Peter S.

    2017-05-01

    Our understanding of the frequency of large earthquakes at timescales longer than instrumental and historical records is based mostly on paleoseismic studies of fast-moving plate-boundary faults. Similar study of intraplate faults has been limited until now, because intraplate earthquake recurrence intervals are generally long (10s to 100s of thousands of years) relative to conventional paleoseismic records determined by trenching. Long-term variations in the earthquake recurrence intervals of intraplate faults therefore are poorly understood. Longer paleoseismic records for intraplate faults are required both to better quantify their earthquake recurrence intervals and to test competing models of earthquake frequency (e.g., time-dependent, time-independent, and clustered). We present the results of U-Th dating of calcite veins in the Loma Blanca normal fault zone, Rio Grande rift, New Mexico, United States, that constrain earthquake recurrence intervals over much of the past ˜550 ka—the longest direct record of seismic frequency documented for any fault to date. The 13 distinct seismic events delineated by this effort demonstrate that for >400 ka, the Loma Blanca fault produced periodic large earthquakes, consistent with a time-dependent model of earthquake recurrence. However, this time-dependent series was interrupted by a cluster of earthquakes at ˜430 ka. The carbon isotope composition of calcite formed during this seismic cluster records rapid degassing of CO2, suggesting an interval of anomalous fluid source. In concert with U-Th dates recording decreased recurrence intervals, we infer seismicity during this interval records fault-valve behavior. These data provide insight into the long-term seismic behavior of the Loma Blanca fault and, by inference, other intraplate faults.

  16. Interaction of the San Jacinto and San Andreas fault zones, southern California: triggered earthquake migration and coupled recurrence intervals.

    Science.gov (United States)

    Sanders, C O

    1993-05-14

    Two lines of evidence suggest that large earthquakes that occur on either the San Jacinto fault zone (SJFZ) or the San Andreas fault zone (SAFZ) may be triggered by large earthquakes that occur on the other. First, the great 1857 Fort Tejon earthquake in the SAFZ seems to have triggered a progressive sequence of earthquakes in the SJFZ. These earthquakes occurred at times and locations that are consistent with triggering by a strain pulse that propagated southeastward at a rate of 1.7 kilometers per year along the SJFZ after the 1857 earthquake. Second, the similarity in average recurrence intervals in the SJFZ (about 150 years) and in the Mojave segment of the SAFZ (132 years) suggests that large earthquakes in the northern SJFZ may stimulate the relatively frequent major earthquakes on the Mojave segment. Analysis of historic earthquake occurrence in the SJFZ suggests little likelihood of extended quiescence between earthquake sequences.
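    The 1.7 km/yr migration rate implies a simple distance-to-delay conversion for events along the SJFZ. A sketch; the distances are hypothetical placeholders, not the actual epicentral positions analyzed by Sanders:

```python
RATE_KM_PER_YR = 1.7   # inferred strain-pulse propagation rate
ORIGIN_YEAR = 1857     # great Fort Tejon earthquake on the SAFZ

def predicted_trigger_year(distance_km):
    """Year a southeastward-propagating strain pulse reaches a point
    `distance_km` along the San Jacinto fault zone from its entry point."""
    return ORIGIN_YEAR + distance_km / RATE_KM_PER_YR

for d in (34.0, 85.0, 136.0):   # hypothetical along-strike distances, km
    print(f"{d:5.0f} km -> {predicted_trigger_year(d):.0f}")
```

    Under this model, an earthquake sequence consistent with the pulse should step systematically later in time with distance down the fault zone, which is the pattern the abstract describes.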

  17. THE MISSING EARTHQUAKES OF HUMBOLDT COUNTY: RECONCILING RECURRENCE INTERVAL ESTIMATES, SOUTHERN CASCADIA SUBDUCTION ZONE

    Science.gov (United States)

    Patton, J. R.; Leroy, T. H.

    2009-12-01

    Earthquake and tsunami hazard for northwestern California and southern Oregon is predominantly based on estimates of recurrence for earthquakes on the Cascadia subduction zone and on upper plate thrust faults, each with unique deformation and recurrence histories. Coastal northern California is uniquely located to enable us to distinguish these different sources of seismic hazard, as the accretionary prism extends on land in this region. This region experiences ground deformation from the rupture of upper plate thrust faults like the Little Salmon fault. Most of this region is thought to be above the locked zone of the megathrust, so it is subject to vertical deformation during the earthquake cycle. Secondary evidence of earthquake history is found here in the form of marsh soils that coseismically subside and are commonly overlain by estuarine mud and, rarely, tsunami sand. The source of the subsidence in this region is not currently known; it may be due to upper plate rupture, megathrust rupture, or a combination of the two. Given that many earlier investigations utilized bulk peat for 14C age determinations and that these early studies were largely reconnaissance work, they need to be reevaluated. Recurrence interval estimates are inconsistent between the terrestrial (~500 years) and marine (~220 years) data sets. This inconsistency may be due to (1) different sources of archival bias in the marine and terrestrial data sets and/or (2) different sources of deformation. Factors controlling the successful archiving of paleoseismic data are considered as they relate to geologic setting and how that setting might change through time. We compile, evaluate, and rank existing paleoseismic data in order to prioritize future paleoseismic investigations. 14C ages are recalibrated, and quality assessments are made for each age determination. We then evaluate the geologic setting and prioritize important research locations and goals based on these existing data. Terrestrial core

  18. Surface rupturing earthquakes repeated in the 300 years along the ISTL active fault system, central Japan

    Science.gov (United States)

    Katsube, Aya; Kondo, Hisao; Kurosawa, Hideki

    2017-06-01

    Surface rupturing earthquakes produced by intraplate active faults generally have long recurrence intervals of a few thousand to tens of thousands of years. We here report the first evidence for an extremely short recurrence interval of 300 years for surface rupturing earthquakes on an intraplate system in Japan. The Kamishiro fault of the Itoigawa-Shizuoka Tectonic Line (ISTL) active fault system generated a Mw 6.2 earthquake in 2014. A paleoseismic trench excavation across the 2014 surface rupture showed evidence for the 2014 event and two prior paleoearthquakes. The slip of the penultimate earthquake was similar to that of the 2014 earthquake, and its timing was constrained to after A.D. 1645. Judging from the timing, the damaged area, and the amount of slip, the penultimate earthquake most probably corresponds to a historical earthquake in A.D. 1714. The recurrence interval between the two most recent earthquakes is thus extremely short compared with the intervals on other active faults known globally. Furthermore, the slip repetition during the last three earthquakes is in accordance with the time-predictable recurrence model rather than the characteristic earthquake model. In addition, the spatial extent of the 2014 surface rupture accords with the distribution of a serpentinite block, suggesting that a relatively low coefficient of friction may account for the unusually frequent earthquakes. These findings affect long-term forecasts of earthquake probability and seismic hazard assessment on active faults.
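    In the time-predictable model invoked above, the interval to the next earthquake is set by the slip of the previous event divided by the long-term loading rate (in the slip-predictable alternative, it is instead the next event's slip that is set by the elapsed time). A sketch with hypothetical numbers, not the Kamishiro fault's actual slip rate:

```python
SLIP_RATE_MM_PER_YR = 2.0   # hypothetical long-term fault loading rate

def time_predictable_interval(prev_slip_m):
    """Years until stress recovers to the failure level after an event
    that released `prev_slip_m` meters of slip (time-predictable model)."""
    return prev_slip_m * 1000.0 / SLIP_RATE_MM_PER_YR

# An event with 0.6 m of slip implies 300 yr to the next rupture
# at this assumed loading rate:
print(time_predictable_interval(0.6))
```

    The distinguishing prediction is that a small-slip event like the 2014 Mw 6.2 rupture should be followed by a short interval, consistent with the 300-year repeat reported here.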

  19. Recurrent frequency-size distribution of characteristic events

    Directory of Open Access Journals (Sweden)

    S. G. Abaimov

    2009-04-01

    Statistical frequency-size (frequency-magnitude) properties of earthquake occurrence play an important role in seismic hazard assessments. The behavior of earthquakes is represented by two different statistics: interoccurrent behavior in a region and recurrent behavior at a given point on a fault (or at a given fault). The interoccurrent frequency-size behavior has been investigated by many authors and generally obeys the power-law Gutenberg-Richter distribution to a good approximation. It is expected that the recurrent frequency-size behavior should obey different statistics. However, this problem has received little attention because historic earthquake sequences do not contain enough events to reconstruct the necessary statistics. To overcome this lack of data, this paper investigates the recurrent frequency-size behavior for several problems. First, the sequences of creep events on a creeping section of the San Andreas fault are investigated. The applicability of the Brownian passage-time, lognormal, and Weibull distributions to the recurrent frequency-size statistics of slip events is tested, and the Weibull distribution is found to be the best-fit distribution. To verify this result, the behaviors of numerical slider-block and sand-pile models are investigated, and the Weibull distribution is confirmed as the applicable distribution for these models as well. Exponents β of the best-fit Weibull distributions for the observed creep event sequences and for the slider-block model are found to have similar values, ranging from 1.6 to 2.2, with the corresponding aperiodicities CV of the applied distribution ranging from 0.47 to 0.64. We also note similarities between recurrent time-interval statistics and recurrent frequency-size statistics.
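    The Weibull fitting described above can be sketched with scipy. The data here are synthetic draws, not the actual creep-event or slider-block catalogs; the shape parameter β is set to 2.0, inside the reported 1.6-2.2 range, and the aperiodicity CV then follows from β alone:

```python
import math

from scipy.stats import weibull_min

# Synthetic "event sizes" drawn from a Weibull with shape beta = 2.0
# (an illustrative stand-in for an observed creep-event sequence).
data = weibull_min.rvs(2.0, scale=100.0, size=2000, random_state=42)

# Fit with the location fixed at zero, as is usual for sizes and intervals.
beta, loc, scale = weibull_min.fit(data, floc=0)

# The aperiodicity (CV) of a Weibull depends only on the shape parameter:
cv = math.sqrt(math.gamma(1 + 2 / beta) / math.gamma(1 + 1 / beta) ** 2 - 1)
print(f"fitted beta = {beta:.2f}, implied CV = {cv:.2f}")
```

    A shape of 2.0 implies a CV of about 0.52, in the middle of the 0.47-0.64 range the abstract reports for β between 1.6 and 2.2.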

  20. Modeling, Forecasting and Mitigating Extreme Earthquakes

    Science.gov (United States)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  1. Strong motion modeling at the Paducah Diffusion Facility for a large New Madrid earthquake

    International Nuclear Information System (INIS)

    Herrmann, R.B.

    1991-01-01

    The Paducah Diffusion Facility is within 80 kilometers of the location of the very large New Madrid earthquakes which occurred during the winter of 1811-1812. Because of their size, seismic moment of 2.0 × 10^27 dyne-cm or moment magnitude Mw = 7.5, the possible recurrence of these earthquakes is a major element in the assessment of seismic hazard at the facility. Probabilistic hazard analysis can provide uniform hazard response spectra estimates for structure evaluation, but deterministic modeling of such a large earthquake can provide strong constraints on the expected duration of motion. The large earthquake is modeled by specifying the earthquake fault and its orientation with respect to the site, and by specifying the rupture process. Synthetic time histories, based on forward modeling of the wavefield, from each subelement are combined to yield a three-component time history at the site. Various simulations are performed to sufficiently exercise possible spatial and temporal distributions of energy release on the fault. Preliminary results demonstrate the sensitivity of the method to various assumptions, and also indicate strongly that the total duration of ground motion at the site is controlled primarily by the length of the rupture process on the fault.

  2. Summary of November 2010 meeting to evaluate turbidite data for constraining the recurrence parameters of great Cascadia earthquakes for the update of national seismic hazard maps

    Science.gov (United States)

    Frankel, Arthur D.

    2011-01-01

    This report summarizes a meeting of geologists, marine sedimentologists, geophysicists, and seismologists that was held on November 18–19, 2010 at Oregon State University in Corvallis, Oregon. The overall goal of the meeting was to evaluate observations of turbidite deposits to provide constraints on the recurrence time and rupture extent of great Cascadia subduction zone (CSZ) earthquakes for the next update of the U.S. national seismic hazard maps (NSHM). The meeting was convened at Oregon State University because this is the major center for collecting and evaluating turbidite evidence of great Cascadia earthquakes, led by Chris Goldfinger and his colleagues. We especially wanted the participants to see some of the numerous deep sea cores this group has collected that contain the turbidite deposits. Great earthquakes on the CSZ pose a major tsunami, ground-shaking, and ground-failure hazard to the Pacific Northwest. Figure 1 shows a map of the Pacific Northwest with a model for the rupture zone of a moment magnitude Mw 9.0 earthquake on the CSZ and the ground shaking intensity (in ShakeMap format) expected from such an earthquake, based on empirical ground-motion prediction equations. The damaging effects of such an earthquake would occur over a wide swath of the Pacific Northwest, and an accompanying tsunami would likely cause devastation along the Pacific Northwest coast and possibly cause damage and loss of life in other areas of the Pacific. A magnitude 8 earthquake on the CSZ would cause damaging ground shaking and ground failure over a substantial area and could also generate a destructive tsunami. The recent tragic occurrence of the 2011 Mw 9.0 Tohoku-Oki, Japan, earthquake highlights the importance of having accurate estimates of the recurrence times and magnitudes of great earthquakes on subduction zones. For the U.S. 
national seismic hazard maps, estimating the hazard from the Cascadia subduction zone has been based on coastal paleoseismic evidence of great

  3. Statistical distributions of earthquakes and related non-linear features in seismic waves

    International Nuclear Information System (INIS)

    Apostol, B.-F.

    2006-01-01

    A few basic facts in the science of earthquakes are briefly reviewed. An accumulation, or growth, model is put forward for the focal mechanisms and the critical focal zone of the earthquakes, which relates the earthquake average recurrence time to the released seismic energy. The temporal statistical distribution for average recurrence time is introduced for earthquakes, and, on this basis, the Omori-type distribution in energy is derived, as well as the distribution in magnitude, by making use of the semi-empirical Gutenberg-Richter law relating seismic energy to earthquake magnitude. On geometric grounds, the accumulation model suggests the value r = 1/3 for the Omori parameter in the power-law of energy distribution, which leads to β = 1.17 for the coefficient in the Gutenberg-Richter recurrence law, in fair agreement with the statistical analysis of the empirical data. Making use of this value, the empirical Båth's law is discussed for the average magnitude of the aftershocks (which is 1.2 less than the magnitude of the main seismic shock), by assuming that the aftershocks are relaxation events of the seismic zone. The time distribution of the earthquakes with a fixed average recurrence time is also derived, the earthquake occurrence prediction is discussed by means of the average recurrence time and the seismicity rate, and application of this discussion to the seismic region Vrancea, Romania, is outlined. Finally, a special effect of non-linear behaviour of the seismic waves is discussed, by describing an exact solution derived recently for the elastic waves equation with cubic anharmonicities, its relevance, and its connection to the approximate quasi-plane waves picture. The properties of the seismic activity accompanying a main seismic shock, both foreshocks and aftershocks, are relegated to forthcoming publications. (author)
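The Gutenberg-Richter law that this abstract builds on converts directly into average recurrence times: log10 N = a - bM gives the annual rate of events at or above magnitude M, and its reciprocal is the mean recurrence time. A sketch with hypothetical a and b values (the real values are region-specific and must be fitted to a catalog):

```python
def gr_annual_rate(magnitude, a=4.0, b=1.0):
    """Annual rate of events with M >= magnitude under the
    Gutenberg-Richter law log10 N = a - b*M (a, b hypothetical here)."""
    return 10.0 ** (a - b * magnitude)

def mean_recurrence_time(magnitude, a=4.0, b=1.0):
    """Average recurrence time is the reciprocal of the annual rate."""
    return 1.0 / gr_annual_rate(magnitude, a, b)

# With a=4, b=1: M>=6 events recur on average every ~100 years, and each
# unit of magnitude multiplies the recurrence time by a factor of 10.
t6 = mean_recurrence_time(6.0)
t7 = mean_recurrence_time(7.0)
```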

  4. A reliable simultaneous representation of seismic hazard and of ground shaking recurrence

    Science.gov (United States)

    Peresan, A.; Panza, G. F.; Magrin, A.; Vaccari, F.

    2015-12-01

    Different earthquake hazard maps may be appropriate for different purposes - such as emergency management, insurance and engineering design. Accounting for the lower occurrence rate of larger sporadic earthquakes may allow the formulation of cost-effective policies in some specific applications, provided that statistically sound recurrence estimates are used, which is typically not the case for PSHA (Probabilistic Seismic Hazard Assessment). We illustrate the procedure to associate the expected ground motions from Neo-deterministic Seismic Hazard Assessment (NDSHA) with an estimate of their recurrence. Neo-deterministic refers to a scenario-based approach, which allows for the construction of a broad range of earthquake scenarios via full-waveform modeling. From the synthetic seismograms the estimates of peak ground acceleration, velocity and displacement, or any other parameter relevant to seismic engineering, can be extracted. NDSHA, in its standard form, defines the hazard computed from a wide set of scenario earthquakes (including the largest deterministically or historically defined credible earthquake, MCE) and it does not supply the frequency of occurrence of the expected ground shaking. A recent enhanced variant of NDSHA that reliably accounts for recurrence has been developed and applied to the Italian territory. The characterization of the frequency-magnitude relation can be performed by any statistically sound method supported by data (e.g. multi-scale seismicity model), so that a recurrence estimate is associated with each of the pertinent sources. In this way a standard NDSHA map of ground shaking is obtained simultaneously with the map of the corresponding recurrences. The introduction of recurrence estimates in NDSHA naturally allows for the generation of ground shaking maps at specified return periods. This permits a straightforward comparison between NDSHA and PSHA maps.

  5. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    Science.gov (United States)

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.
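The incompatibility argued in this abstract can be seen with back-of-the-envelope arithmetic: under a time-predictable model, the interval after an event scales with the size of the preceding rupture, so a factor-of-3 variation in rupture length should produce a comparably large spread in repeat times rather than the observed, nearly constant 83 ± 9 years. A sketch (the proportionality assumption is the time-predictable model itself; the numbers are from the abstract):

```python
# Observed repeat time and spread for great central Chile earthquakes:
observed_repeat, observed_spread = 83.0, 9.0
length_ratio = 3.0  # largest/smallest rupture length, per the abstract

# If recharge time scaled with rupture size (time-predictable model),
# intervals should spread by roughly the same factor of ~3 ...
predicted_ratio = length_ratio

# ... but the observed intervals vary only by about (83+9)/(83-9) ~ 1.24:
observed_ratio = (observed_repeat + observed_spread) / (observed_repeat - observed_spread)
```

The order-of-magnitude mismatch between `predicted_ratio` and `observed_ratio` is the quantitative core of the abstract's rejection of the time- and slip-predictable models here.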

  6. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a

  7. Quantifying slip balance in the earthquake cycle: Coseismic slip model constrained by interseismic coupling

    KAUST Repository

    Wang, Lifeng; Hainzl, Sebastian; Mai, Paul Martin

    2015-01-01

    The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter time scales (e.g., between the large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit, and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements, Lin et al., 2013), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment has been almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to the postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes in Parkfield. For the Tohoku-Oki earthquake, we estimate the recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be generally applied to any earthquake for which dense geodetic measurements are available.
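The slip-balance logic behind these accumulation-time estimates is simple: coseismic slip divided by the interseismic loading rate gives the years of loading the event released. A sketch, with illustrative round numbers rather than the study's inversion results:

```python
def accumulation_time_yr(coseismic_slip_m, loading_rate_mm_per_yr):
    """Years of steady loading needed to accumulate the slip released
    coseismically (slip-balance view of the earthquake cycle)."""
    return coseismic_slip_m * 1000.0 / loading_rate_mm_per_yr

# Illustrative: ~0.6 m of coseismic slip on a segment loading at
# ~25 mm/yr takes about 24 years to accumulate, of the same order as
# the ~25-year Parkfield estimate quoted above.
t_parkfield_like = accumulation_time_yr(0.6, 25.0)
```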

  9. A Virtual Tour of the 1868 Hayward Earthquake in Google Earth™

    Science.gov (United States)

    Lackey, H. G.; Blair, J. L.; Boatwright, J.; Brocher, T.

    2007-12-01

    The 1868 Hayward earthquake has been overshadowed by the subsequent 1906 San Francisco earthquake that destroyed much of San Francisco. Nonetheless, a modern recurrence of the 1868 earthquake would cause widespread damage to the densely populated Bay Area, particularly in the east Bay communities that have grown up virtually on top of the Hayward fault. Our concern is heightened by paleoseismic studies suggesting that the recurrence interval for the past five earthquakes on the southern Hayward fault is 140 to 170 years. Our objective is to build an educational web site that illustrates the cause and effect of the 1868 earthquake, drawing on scientific and historic information. We will use Google Earth™ software to visually illustrate complex scientific concepts in a way that is understandable to a non-scientific audience. This web site will lead the viewer from a regional summary of the plate tectonics and faulting system of western North America to more specific information about the 1868 Hayward earthquake itself. Text and Google Earth™ layers will include modeled shaking of the earthquake, relocations of historic photographs, reconstruction of damaged buildings as 3-D models, and additional scientific data that may come from the many scientific studies conducted for the 140th anniversary of the event. Earthquake engineering concerns will be stressed, including population density, vulnerable infrastructure, and lifelines. We will also present detailed maps of the Hayward fault, measurements of fault creep, and geologic evidence of its recurrence. Understanding the science behind earthquake hazards is an important step in preparing for the next significant earthquake. We hope to communicate to the public and students of all ages, through visualizations, not only the cause and effect of the 1868 earthquake, but also modern seismic hazards of the San Francisco Bay region.

  10. How Long Is Long Enough? Estimation of Slip-Rate and Earthquake Recurrence Interval on a Simple Plate-Boundary Fault Using 3D Paleoseismic Trenching

    Science.gov (United States)

    Wechsler, N.; Rockwell, T. K.; Klinger, Y.; Agnon, A.; Marco, S.

    2012-12-01

    Models used to forecast future seismicity make fundamental assumptions about the behavior of faults and fault systems in the long term, but in many cases this long-term behavior is assumed using short-term and perhaps non-representative observations. The question arises - how long a record is long enough to represent actual fault behavior, both in terms of recurrence of earthquakes and of moment release (aka slip-rate). We test earthquake recurrence and slip models via high-resolution three-dimensional trenching of the Beteiha (Bet-Zayda) site on the Dead Sea Transform (DST) in northern Israel. We extend the earthquake history of this simple plate boundary fault to establish slip rate for the past 3-4 kyr, to determine the amount of slip per event, and to study the fundamental behavior, thereby testing competing rupture models (characteristic, slip-patch, slip-loading, and Gutenberg-Richter-type distribution). To this end we opened more than 900 m of trenches, mapped 8 buried channels, and dated more than 80 radiocarbon samples. By mapping buried channels, offset by the DST on both sides of the fault, we obtained for each an estimate of displacement. Coupled with fault-crossing trenches to determine event history, we construct an earthquake and slip history for the fault for the past 2 kyr. We observe evidence for a total of 9-10 surface-rupturing earthquakes with varying offset amounts. 6-7 events occurred in the 1st millennium, compared to just 2-3 in the 2nd millennium CE. From our observations it is clear that the fault is not behaving in a periodic fashion. A 4-kyr-old buried channel yields a slip rate of 3.5-4 mm/yr, consistent with GPS rates for this segment. Yet in spite of the apparent agreement between GPS, Pleistocene-to-present slip rate, and the lifetime rate of the DST, the past 800-1000 year period appears deficient in strain release. Thus, in terms of moment release, most of the fault has remained locked and is accumulating elastic strain. 
In contrast, the
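The slip-rate arithmetic underlying the buried-channel estimate above is a displaced landform's offset divided by its age. A sketch (the ~15 m offset is an illustrative value implied by the reported rate and channel age, not a measurement from the study):

```python
def slip_rate_mm_per_yr(offset_m, age_yr):
    """Fault slip rate from a displaced landform: offset divided by age."""
    return offset_m * 1000.0 / age_yr

# A 4-kyr-old channel offset by ~15 m (offset value illustrative) gives
# ~3.75 mm/yr, within the 3.5-4 mm/yr range reported above.
rate = slip_rate_mm_per_yr(15.0, 4000.0)
```

In practice both offset and age carry uncertainties, so published rates are quoted as ranges rather than single values, as in the abstract.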

  11. LASSCI2009.2: layered earthquake rupture forecast model for central Italy, submitted to the CSEP project

    Directory of Open Access Journals (Sweden)

    Francesco Visini

    2010-11-01

    Full Text Available The Collaboratory for the Study of Earthquake Predictability (CSEP) selected Italy as a testing region for probabilistic earthquake forecast models in October, 2008. The model we have submitted for the two medium-term forecast periods of 5 and 10 years (from 2009) is a time-dependent, geologically based earthquake rupture forecast that is defined for central Italy only (11-15˚ E; 41-45˚ N). The model took into account three separate layers of seismogenic sources: background seismicity; seismotectonic provinces; and individual faults that can produce major earthquakes (seismogenic boxes). For CSEP testing purposes, the background seismicity layer covered a range of magnitudes from 5.0 to 5.3 and the seismicity rates were obtained by truncated Gutenberg-Richter relationships for cells centered on the CSEP grid. Then the seismotectonic provinces layer returned the expected rates of medium-to-large earthquakes following a traditional Cornell-type approach. Finally, for the seismogenic boxes layer, the rates were based on the geometry and kinematics of the faults that different earthquake recurrence models have been assigned to, ranging from pure Gutenberg-Richter behavior to characteristic events, with the intermediate behavior named as the hybrid model. The results for different magnitude ranges highlight the contribution of each of the three layers to the total computation. The expected rates for M >6.0 on April 1, 2009 (thus computed before the L'Aquila, 2009, MW = 6.3 earthquake) are of particular interest. They showed local maxima in the two seismogenic-box sources of Paganica and Sulmona, one of which was activated by the L'Aquila earthquake of April 6, 2009. Earthquake rates as of August 1, 2009 (now under test) also showed a maximum close to the Sulmona source for MW ~6.5; significant seismicity rates (10^-4 to 10^-3 in 5 years) for destructive events (magnitude up to 7.0) were located in other individual sources identified as being capable of such

  12. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, no other system had predicted tsunamis over the Internet. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  13. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    Science.gov (United States)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of the country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground-motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the
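The rebuilding-cost factors listed above map EMS-98 damage grades to loss fractions; combined with a damage-state distribution, they yield a mean damage ratio for a building class. A sketch (the cost factors come from the abstract; the damage-state probabilities are hypothetical, for illustration only):

```python
# Rebuilding-cost factors per EMS-98 damage grade, as stated above.
COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def mean_damage_ratio(grade_probs):
    """Expected repair cost as a fraction of full rebuilding cost."""
    return sum(p * COST_FACTOR[g] for g, p in grade_probs.items())

# Hypothetical damage distribution for one building class at one site
# (the remaining 10% probability corresponds to no damage):
probs = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05}
mdr = mean_damage_ratio(probs)
```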

  14. Fault slip and earthquake recurrence along strike-slip faults — Contributions of high-resolution geomorphic data

    KAUST Repository

    Zielke, Olaf

    2015-01-01

    Understanding earthquake (EQ) recurrence relies on information about the timing and size of past EQ ruptures along a given fault. Knowledge of a fault's rupture history provides valuable information on its potential future behavior, enabling seismic hazard estimates and loss mitigation. Stratigraphic and geomorphic evidence of faulting is used to constrain the recurrence of surface rupturing EQs. Analysis of the latter data sets culminated during the mid-1980s in the formulation of now-classical EQ recurrence models that are routinely used to assess seismic hazard. Within the last decade, Light Detection and Ranging (lidar) surveying technology and other high-resolution data sets became increasingly available to tectono-geomorphic studies, promising to contribute to better-informed models of EQ recurrence and slip-accumulation patterns. After reviewing motivation and background, we outline requirements to successfully reconstruct a fault's offset accumulation pattern from geomorphic evidence. We address sources of uncertainty affecting offset measurement and advocate approaches to minimize them. A number of recent studies focus on single-EQ slip distributions and along-fault slip accumulation patterns. We put them in context with paleoseismic studies along the respective faults by comparing coefficients of variation CV for EQ inter-event time and slip-per-event and find that a) single-event offsets vary over a wide range of length-scales and the sources for offset variability differ with length-scale, b) at fault-segment length-scales, single-event offsets are essentially constant, c) along-fault offset accumulation as resolved in the geomorphic record is dominated by essentially same-size, large offset increments, and d) there is generally no one-to-one correlation between the offset accumulation pattern constrained in the geomorphic record and EQ occurrence as identified in the stratigraphic record, revealing the higher resolution and preservation potential of

  15. Recurrent Holocene movement on the Susitna Glacier Thrust Fault: The structure that initiated the Mw 7.9 Denali Fault earthquake, central Alaska

    Science.gov (United States)

    Personius, Stephen; Crone, Anthony J.; Burns, Patricia A.; Reitman, Nadine G.

    2017-01-01

    We conducted a trench investigation and analyzed pre‐ and postearthquake topography to determine the timing and size of prehistoric surface ruptures on the Susitna Glacier fault (SGF), the thrust fault that initiated the 2002 Mw 7.9 Denali fault earthquake sequence in central Alaska. In two of our three hand‐excavated trenches, we found clear evidence for a single pre‐2002 earthquake (penultimate earthquake [PE]) and determined an age of 2210±420 cal. B.P. (2σ) for this event. We used structure‐from‐motion software to create a pre‐2002‐earthquake digital surface model (DSM) from 1:62,800‐scale aerial photography taken in 1980 and compared this DSM with postearthquake 5‐m/pixel Interferometric Synthetic Aperture Radar topography taken in 2010. Topographic profiles measured from the pre‐earthquake DSM show features that we interpret as fault and fold scarps. These landforms were about the same size as those formed in 2002, so we infer that the PE was similar in size to the initial (Mw 7.2) subevent of the 2002 sequence. A recurrence interval of 2270 yrs and dip slip of ∼4.8 m yield a single‐interval slip rate of ∼1.8 mm/yr. The lack of evidence for pre‐PE deformation indicates probable episodic (clustering) behavior on the SGF that may be related to strain migration among other similarly oriented thrust faults that together accommodate shortening south of the Denali fault. We suspect that slip‐partitioned thrust‐triggered earthquakes may be a common occurrence on the Denali fault system, but documenting the frequency of such events will be very difficult, given the lack of long‐term paleoseismic records, the number of potential thrust‐earthquake sources, and the pervasive glacial erosion in the region.

  16. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    Science.gov (United States)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for us to investigate these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore effects of topographic loading and viscosity of middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on the neighboring faults or segments, accelerating the future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults, which are far apart, may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model synthetic seismic catalog and paleoseismic data, we analyze probability of earthquake migration between major faults in northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.

  17. Spatial Evaluation and Verification of Earthquake Simulators

    Science.gov (United States)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of spatial forecast verification for simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
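
    The ETAS-style smoothing described above can be sketched as follows. The grid, kernel offset d, and exponent q are illustrative assumptions, not the authors' calibrated values:

```python
# Sketch of the power-law (ETAS-like) smoothing described in the abstract:
# each simulated event's rate is spread over the whole test grid with a
# kernel that decays with epicentral distance. d and q are illustrative.
import numpy as np

def smooth_rates(epicenters, grid_x, grid_y, d=0.05, q=1.5):
    gx, gy = np.meshgrid(grid_x, grid_y)
    rates = np.zeros_like(gx, dtype=float)
    for ex, ey in epicenters:
        r = np.hypot(gx - ex, gy - ey)
        kernel = (r + d) ** (-q)        # power-law decay with distance
        rates += kernel / kernel.sum()  # each event contributes unit rate
    return rates

# one event at the grid centre: rate peaks there but covers the whole grid
x = np.linspace(0.0, 1.0, 21)
rates = smooth_rates([(0.5, 0.5)], x, x)
```

    Normalizing each kernel makes every simulated event contribute the same total rate, so the resulting map can be compared cell-by-cell with an observed-seismicity map.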

  18. Evaluating spatial and temporal relationships between an earthquake cluster near Entiat, central Washington, and the large December 1872 Entiat earthquake

    Science.gov (United States)

    Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian

    2017-01-01

    We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1–2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long‐lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long‐lived aftershock sequence can account for these observations.

  19. Temporal properties of seismicity and largest earthquakes in SE Carpathians

    Directory of Open Access Journals (Sweden)

    S. Byrdina

    2006-01-01

    In order to estimate the hazard rate distribution of the largest seismic events in Vrancea, South-Eastern Carpathians, we study the temporal properties of historical and instrumental catalogues of seismicity. First, on the basis of Generalized Extreme Value theory, we estimate the average return period of the largest events. Then, following Bak et al. (2002) and Corral (2005a), we study the scaling properties of recurrence times between earthquakes in appropriate spatial volumes. We come to the conclusion that the seismicity is temporally clustered, and that the distribution of recurrence times is significantly different from a Poisson process even for times largely exceeding the corresponding periods of foreshock and aftershock activity. Modeling the recurrence times by a gamma-distributed variable, we finally estimate hazard rates with respect to the time elapsed since the last large earthquake.
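
    A minimal sketch of that final step, turning gamma-distributed recurrence times into a hazard rate as a function of time since the last large event; the shape and scale values are illustrative, not the fitted Vrancea parameters:

```python
# Hazard rate h(t) = f(t) / (1 - F(t)) for gamma-distributed recurrence
# times, evaluated at the time elapsed since the last large earthquake.
# The shape/scale values below are illustrative, not fitted parameters.
from scipy.stats import gamma

def hazard_rate(t, shape, scale):
    return gamma.pdf(t, shape, scale=scale) / gamma.sf(t, shape, scale=scale)

# shape < 1 reproduces the clustering reported in the abstract: hazard is
# highest just after an event and decays as time passes without one
h_early = hazard_rate(1.0, 0.7, 10.0)
h_late = hazard_rate(20.0, 0.7, 10.0)
```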

  20. Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region

    Science.gov (United States)

    Chaudhary, Chhavi; Sharma, Mukat Lal

    2017-12-01

    Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue these larger events follow a different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of extreme values, i.e. the tail of the distribution, in addition to the main distribution. The generalised Pareto distribution family is widely used for modelling events that cross a specified threshold value. The Pareto, Truncated Pareto, and Tapered Pareto are special cases of the generalised Pareto family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study, the Himalayas, whose orogeny is associated with the generation of large earthquakes and which is one of the most active zones of the world, has been considered. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. Estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution better describes seismicity for the seismic source zones in comparison to the other distributions considered in the present study.
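
    The tail behaviours being compared can be illustrated with the survival functions of the pure and Tapered (Kagan-style) Pareto distributions; the threshold, β, and corner values below are hypothetical:

```python
# Survival (tail) functions of the pure and tapered Pareto distributions
# for event sizes above a threshold mt. beta and the corner size mc are
# hypothetical values for illustration.
import math

def pareto_sf(m, mt, beta):
    return (mt / m) ** beta

def tapered_pareto_sf(m, mt, beta, mc):
    # power law with an exponential taper beyond the corner size mc
    return (mt / m) ** beta * math.exp((mt - m) / mc)

# the taper strongly suppresses the probability of extreme events
p_plain = pareto_sf(100.0, 1.0, 0.66)
p_taper = tapered_pareto_sf(100.0, 1.0, 0.66, mc=10.0)
```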

  1. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  2. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    Science.gov (United States)

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  4. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    Science.gov (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool for investigating the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper comprehension of the simulator's outcomes; in fact, within complex models it may be difficult to understand which physical parameters are most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is the modeling of fault interaction through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of seismic activity exists only in a markedly deterministic framework, and quickly disappears on introducing a small degree of stochasticity in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears on introducing a small degree of stochasticity in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
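
    The fault-interaction ingredient, the Coulomb Failure Function, reduces to a one-line stress bookkeeping rule; the effective friction coefficient and the stress increments below are illustrative:

```python
# Coulomb failure stress change on a receiver fault, the interaction rule
# used by this family of simulators. The effective friction coefficient
# and the stress increments are illustrative values.

def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    # d_normal_mpa > 0 means unclamping; positive result = closer to failure
    return d_shear_mpa + mu_eff * d_normal_mpa

# a nearby rupture adds 0.1 MPa of shear stress and 0.05 MPa of unclamping
dcfs = coulomb_stress_change(0.1, 0.05)
```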

  5. Short-term volcano-tectonic earthquake forecasts based on a moving mean recurrence time algorithm: the El Hierro seismo-volcanic crisis experience

    Science.gov (United States)

    García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón

    2016-05-01

    Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
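
    One way a mean recurrence time can be derived from Gutenberg-Richter parameter estimates is sketched below, using Aki's maximum-likelihood b-value; the mini-catalogue and thresholds are invented for illustration, and this is the general idea rather than the authors' exact MRT algorithm:

```python
# Sketch of a Gutenberg-Richter-based mean recurrence time: estimate the
# b-value with Aki's (1965) maximum-likelihood formula, then extrapolate
# the observed catalogue rate to the target magnitude. Illustrative only;
# not the authors' exact MRT algorithm.
import math

def b_value(mags, m_min):
    m = [x for x in mags if x >= m_min]
    return math.log10(math.e) / (sum(m) / len(m) - m_min)

def mean_recurrence_time(mags, m_min, m_target, window_years):
    b = b_value(mags, m_min)
    n_min = sum(1 for x in mags if x >= m_min)
    rate = (n_min / window_years) * 10.0 ** (-b * (m_target - m_min))
    return 1.0 / rate

# invented mini-catalogue: 5 events of M >= 1.0 observed over 5 years
mrt = mean_recurrence_time([1.2, 1.5, 1.1, 1.8, 1.3], 1.0, 2.0, 5.0)
```

    Tracking how the b-value (and hence the MRT) fluctuates through a swarm is what opens and closes the time windows of increased probability.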

  6. ON POTENTIAL REPRESENTATIONS OF THE DISTRIBUTION LAW OF RARE STRONGEST EARTHQUAKES

    Directory of Open Access Journals (Sweden)

    M. V. Rodkin

    2014-01-01

    Assessment of long-term seismic hazard depends critically on the behavior of the tail of the distribution function of rare strongest earthquakes. Analyses of empirical data cannot, however, yield a credible solution of this problem, because instrumental catalogs of earthquakes are available only for rather short time intervals, and the uncertainty in estimates of the magnitude of paleoearthquakes is high. From the available data it was possible only to propose a number of alternative models characterizing the distribution of rare strongest earthquakes: the model based on the Gutenberg-Richter law, suggested to be valid up to a maximum possible seismic event (Mmax); models with a 'bend down' of the earthquake recurrence curve; and the characteristic earthquakes model. We discuss these models from general physical concepts, supported by the theory of extreme values (with reference to the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD)) and the multiplicative cascade model of the seismic regime. In terms of the multiplicative cascade model, the seismic regime is treated as a large number of episodes of avalanche-type relaxation of metastable states which take place in a set of metastable sub-systems. The model of a magnitude-unlimited continuation of the Gutenberg-Richter law is invalid from the physical point of view, because it corresponds to an infinite mean value of seismic energy and an infinite capacity of the process generating seismicity. A model with an abrupt cut of this law at a maximum possible event, Mmax, is not fully logical either. A model with a 'bend down' of the earthquake recurrence curve can ensure both continuity of the distribution law and finiteness of the seismic energy value. Results of studies using the theory of extreme values provide convincing support for the model of a 'bend down' of the earthquake recurrence curve. Moreover they testify also that the

  7. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    Science.gov (United States)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in the physical modeling of intermediate-term forecasting, and phase III in the physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step toward establishing the methodology for forecasting large earthquakes.

  8. Paleoearthquake rupture behavior and recurrence of great earthquakes along the Haiyuan fault, northwestern China

    Institute of Scientific and Technical Information of China (English)

    ZHANG Peizhen; MIN Wei; DENG Qidong; MAO Fengying

    2005-01-01

    The Haiyuan fault is a major seismogenic fault in north-central China where the 1920 Haiyuan earthquake of magnitude 8.5 occurred, resulting in more than 220,000 deaths. The fault zone can be divided into three segments based on their geometric patterns and associated geomorphology. To study the paleoseismology and recurrence history of devastating earthquakes along the fault, we dug 17 trenches along different segments of the fault zone. Although only 10 of them allow the paleoearthquake events to be dated, together with the 8 trenches dug previously they provide adequate information to capture the major paleoearthquakes that occurred along the fault in the geological past. We discovered 3 events along the eastern segment during the past 14000 a, 7 events along the middle segment during the past 9000 a, and 6 events along the western segment during the past 10000 a. These events clearly depict two temporal clusters: the first from approximately 4600 to 6400 a, and the second from approximately 1000 to 2800 a. Each cluster lasts about 2000 a, and the interval between the two clusters is also about 2000 a. Based on the fault geometry, segmentation pattern, and paleoearthquake events along the Haiyuan fault, we can identify three scales of earthquake rupture: rupture of one segment, cascade rupture of two segments, and cascade rupture of the entire fault (three segments). Interactions of slip patches on the surface of the fault may cause rupture of one patch, or ruptures of two to three patches, forming the complex patterns of cascade rupture events.

  9. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
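
    The empirical model's country-specific fatality-rate curve is a two-parameter lognormal in shaking intensity. A sketch of that shape follows; the θ/β values and exposure counts are hypothetical, not the calibrated PAGER coefficients for any country:

```python
# Shape of the PAGER empirical model's fatality-rate curve: a lognormal
# CDF in shaking intensity with two country-specific parameters, summed
# against population exposure per intensity bin. theta/beta values and
# exposure counts below are hypothetical, not calibrated coefficients.
import math
from scipy.stats import norm

def fatality_rate(intensity, theta, beta):
    return norm.cdf(math.log(intensity / theta) / beta)

def expected_fatalities(exposure_by_intensity, theta=12.0, beta=0.25):
    return sum(pop * fatality_rate(s, theta, beta)
               for s, pop in exposure_by_intensity.items())

# hypothetical population exposed at MMI 7, 8 and 9
loss = expected_fatalities({7.0: 100000, 8.0: 20000, 9.0: 5000})
```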

  10. Earthquake recurrence and magnitude and seismic deformation of the northwestern Okhotsk plate, northeast Russia

    Science.gov (United States)

    Hindle, D.; Mackey, K.

    2011-02-01

    Recorded seismicity from the northwestern Okhotsk plate, northeast Asia, is currently insufficient to account for the predicted slip rates along its boundaries due to plate tectonics. However, the magnitude-frequency relationship for earthquakes from the region suggests that larger earthquakes are possible in the future, and that events of ~Mw 7.5, which should occur every ~100-350 years, would account for almost all the slip of the plate along its boundaries due to Eurasia-North America convergence. We use models for seismic slip distribution along the bounding faults of Okhotsk to conclude that relatively little aseismic strain release is occurring and that larger future earthquakes are likely in the region. Our models broadly support the idea of a single Okhotsk plate, with the large majority of tectonic strain released along its boundaries.
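
    The slip-budget reasoning, asking how often an ~Mw 7.5 event must recur for seismic slip to keep pace with plate-boundary loading, can be sketched as follows; the fault dimensions, rigidity, and loading rate are round illustrative numbers, not the paper's values:

```python
# Back-of-envelope slip budget: recurrence interval needed for one
# characteristic event to release the slip accumulated by steady loading.
# Fault dimensions, rigidity and loading rate are round illustrative numbers.

def seismic_moment(mw):
    """Seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)

def recurrence_years(mw, fault_area_m2, slip_rate_m_per_yr, mu=3.0e10):
    slip_per_event = seismic_moment(mw) / (mu * fault_area_m2)  # metres
    return slip_per_event / slip_rate_m_per_yr

# a 300 km x 20 km boundary segment loaded at 5 mm/yr
t = recurrence_years(7.5, 300e3 * 20e3, 5e-3)
```

    With these round numbers the result lands inside the ~100-350 yr band quoted in the abstract.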

  11. Dense Ocean Floor Network for Earthquakes and Tsunamis; DONET/ DONET2, Part2 -Development and data application for the mega thrust earthquakes around the Nankai trough-

    Science.gov (United States)

    Kaneda, Y.; Kawaguchi, K.; Araki, E.; Matsumoto, H.; Nakamura, T.; Nakano, M.; Kamiya, S.; Ariyoshi, K.; Baba, T.; Ohori, M.; Hori, T.; Takahashi, N.; Kaneko, S.; Donet Research; Development Group

    2010-12-01

    assimilation method using DONET data is very important to improve the recurrence cycle simulation model. 5) Understanding of the interaction between the crust and upper mantle around the Nankai trough subduction zone. We will deploy DONET not only in the Tonankai seismogenic zone but also DONET2, with a high-voltage system, in the Nankai seismogenic zone west of the Nankai trough. The total system will be deployed to understand the seismic linkage between the Tonankai and Nankai earthquakes. Using DONET and DONET2 data, we will be able to observe crustal activity and pre- and post-event slip for the Tonankai and Nankai earthquakes, and we will improve the recurrence cycle simulation model with the advanced data assimilation method. We have already constructed one observatory in DONET and observed some earthquakes and tsunamis. We will introduce the details of DONET/DONET2 and some observed data.

  12. An interdisciplinary approach for earthquake modelling and forecasting

    Science.gov (United States)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters, and may cause heavy casualties and economic losses. Especially in the past two decades, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the process at present is controlled by the events themselves (self-exciting) and all the external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson process with rate λ(t), which is composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model which combines catalog-based and non-catalog-based approaches.
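
    A minimal sketch of such a conditional intensity, with exponential kernels standing in for the real ones (Ogata's ETAS, for instance, uses a power-law kernel); all parameter values are assumptions:

```python
# Minimal self- plus mutually-exciting intensity in the spirit of the
# Ogata-Utsu combined model: background rate + a term driven by past
# earthquakes + a term driven by past non-seismic anomalies. Exponential
# kernels and all parameter values are illustrative assumptions.
import math

def conditional_intensity(t, background, quake_times, anomaly_times,
                          k_self=0.5, k_ext=0.2, beta=1.0):
    self_term = sum(k_self * math.exp(-beta * (t - ti))
                    for ti in quake_times if ti < t)
    ext_term = sum(k_ext * math.exp(-beta * (t - tj))
                   for tj in anomaly_times if tj < t)
    return background + self_term + ext_term

# the rate just after an earthquake and a non-seismic anomaly exceeds
# the background rate, then relaxes back toward it
lam = conditional_intensity(10.1, 0.05, [10.0], [9.5])
```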

  13. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    Science.gov (United States)

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
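
    Gridded forecasts of this kind are typically scored with a joint Poisson log-likelihood over cells, as in the CSEP evaluations that grew out of RELM; a minimal sketch with invented numbers:

```python
# CSEP-style joint Poisson log-likelihood of observed cell counts under a
# forecast of expected counts per cell (higher is better). The forecasts
# and counts below are invented for illustration.
import math

def poisson_log_likelihood(forecast, observed):
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(forecast, observed))

# a forecast that concentrates rate where the earthquake occurred scores
# higher than a flat forecast with the same total expected count
sharp = poisson_log_likelihood([0.9, 0.05, 0.05], [1, 0, 0])
flat = poisson_log_likelihood([1 / 3, 1 / 3, 1 / 3], [1, 0, 0])
```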

  14. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

    Earthquake source spectra have been used to characterize earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra of many earthquakes, especially small earthquakes, at once and compare them with one another. A standard model for the source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, the two regimes being separated by a corner frequency. The corner frequency has often been converted to stress drop under the assumption of a circular crack model. However, recent studies have claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016], thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the area studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectra we now focus on is the falloff rate at high frequencies, which affects seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at short distances from the sources, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovery and confirmation of the deviation from the standard omega-square model, updating the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
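
    For reference, the omega-square spectral shape, with the high-frequency falloff exponent left adjustable so deviations like those discussed can be explored; the Ω0 and fc values below are illustrative:

```python
# Omega-square source spectrum: flat at a long-period level omega0 below
# the corner frequency fc, decaying as f**-n above it. n = 2 is the
# classical model; n != 2 (or extra corners) are the deviations discussed.

def source_spectrum(f, omega0, fc, n=2.0):
    return omega0 / (1.0 + (f / fc) ** n)

low = source_spectrum(0.01, 1.0, 1.0)    # flat level below the corner
high = source_spectrum(100.0, 1.0, 1.0)  # ~f**-2 decay above the corner
```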

  15. Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data

    Science.gov (United States)

    Funning, G. J.; Cockett, R.

    2012-12-01

    InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median

  16. Fault roughness and strength heterogeneity control earthquake size and stress drop

    KAUST Repository

    Zielke, Olaf

    2017-01-13

    An earthquake's stress drop is related to the frictional breakdown during sliding and constitutes a fundamental quantity of the rupture process. High-speed laboratory friction experiments that emulate the rupture process imply stress drop values that greatly exceed those commonly reported for natural earthquakes. We hypothesize that this stress drop discrepancy is due to fault-surface roughness and strength heterogeneity: an earthquake's moment release and its recurrence probability depend not only on stress drop and rupture dimension but also on the geometric roughness of the ruptured fault and the location of failing strength asperities along it. Using large-scale numerical simulations of earthquake ruptures under varying roughness and strength conditions, we verify our hypothesis, showing that smoother faults may generate larger earthquakes than rougher faults under identical tectonic loading conditions. We further discuss the potential impact of fault roughness on earthquake recurrence probability. This finding also provides important information for seismic hazard analysis.
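
    The link between moment, rupture dimension, and stress drop invoked here is the Eshelby circular-crack relation; a sketch with illustrative moment and radii:

```python
# Eshelby circular-crack relation linking stress drop, seismic moment and
# rupture radius: delta_sigma = (7/16) * M0 / r**3. Moment and radii are
# illustrative values.

def stress_drop(m0_newton_m, radius_m):
    return 7.0 / 16.0 * m0_newton_m / radius_m ** 3

# the same moment released on a smaller rupture implies a higher stress drop
small_fault = stress_drop(1e18, 2000.0)  # Pa
large_fault = stress_drop(1e18, 4000.0)  # Pa
```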

  17. Assessment of earthquake-induced tsunami hazard at a power plant site

    International Nuclear Information System (INIS)

    Ghosh, A.K.

    2008-01-01

    This paper presents a study of the tsunami hazard due to submarine earthquakes at a power plant site on the east coast of India. The paper considers various sources of earthquakes from the tectonic information, and records of past earthquakes and tsunamis. A magnitude-frequency relationship for earthquake occurrence rate and a simplified model for tsunami run-up height as a function of earthquake magnitude and the distance between the source and site have been developed. Finally, considering equal likelihood of generation of earthquakes anywhere on each of the faults, the tsunami hazard has been evaluated and presented as a relationship between tsunami height and its mean recurrence interval (MRI). Probability of exceedance of a certain wave height in a given period of time is also presented. These studies will be helpful in making an estimate of the tsunami-induced flooding potential at the site.
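The two hazard quantities named in the abstract, an occurrence rate from a magnitude-frequency relationship and a probability of exceedance over a given period, can be sketched as follows. The Gutenberg-Richter constants here are placeholders, not the values fitted in the paper, and Poisson occurrence is assumed:

```python
import math

def annual_rate(magnitude, a=4.0, b=1.0):
    """Gutenberg-Richter annual rate of events with magnitude >= M:
    log10(N) = a - b*M. The a and b values are illustrative placeholders."""
    return 10.0 ** (a - b * magnitude)

def exceedance_probability(mri_years, window_years):
    """Probability of at least one exceedance in window_years, assuming
    Poisson occurrence with mean recurrence interval mri_years."""
    return 1.0 - math.exp(-window_years / mri_years)
```

For example, an event with a 100-year MRI has about a 63% chance of occurring at least once in any 100-year window, the familiar 1 - 1/e result.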

  18. A way to synchronize models with seismic faults for earthquake forecasting

    DEFF Research Database (Denmark)

    González, Á.; Gómez, J.B.; Vázquez-Prada, M.

    2006-01-01

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual....... Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the synchronization of the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault...... models between themselves and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture area of the synthetic earthquakes of this model on other models, the latter become partially synchronized...

  19. Toward a comprehensive areal model of earthquake-induced landslides

    Science.gov (United States)

    Miles, S.B.; Keefer, D.K.

    2009-01-01

    This paper provides a review of regional-scale modeling of earthquake-induced landslide hazard with respect to the needs for disaster risk reduction and sustainable development. Based on this review, it sets out important research themes and suggests computing with words (CW), a methodology that includes fuzzy logic systems, as a fruitful modeling methodology for addressing many of these research themes. A range of research, reviewed here, has been conducted applying CW to various aspects of earthquake-induced landslide hazard zonation, but none facilitate comprehensive modeling of all types of earthquake-induced landslides. A new comprehensive areal model of earthquake-induced landslides (CAMEL) is introduced here that was developed using fuzzy logic systems. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL is highly modifiable and adaptable; new knowledge can be easily added, while existing knowledge can be changed to better match local knowledge and conditions. As such, CAMEL should not be viewed as a complete alternative to other earthquake-induced landslide models. CAMEL provides an open framework for incorporating other models, such as Newmark's displacement method, together with previously incompatible empirical and local knowledge. © 2009 ASCE.
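The fuzzy-logic machinery underlying a system like CAMEL can be illustrated with a toy two-rule inference. This is not CAMEL's actual rule base; the membership functions, rule set, and output levels below are invented solely to show how qualitative knowledge ("steep slope AND strong shaking implies high landslide concentration") becomes a number:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def landslide_concentration(slope_deg, pga_g):
    """Toy two-rule fuzzy system (illustrative only):
    IF slope is steep AND shaking is strong THEN concentration is high;
    IF slope is gentle THEN concentration is low."""
    steep = tri(slope_deg, 20.0, 45.0, 70.0)
    gentle = tri(slope_deg, -1.0, 0.0, 25.0)
    strong = tri(pga_g, 0.2, 0.6, 1.0)
    rule_high = min(steep, strong)          # fuzzy AND -> min
    rule_low = gentle
    w = rule_high + rule_low
    # defuzzify by a weighted average of the rule output levels
    return 0.0 if w == 0 else (0.8 * rule_high + 0.1 * rule_low) / w
```

New rules can be appended without restructuring the model, which is the modifiability the abstract emphasizes.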

  20. An Overview of Soil Models for Earthquake Response Analysis

    Directory of Open Access Journals (Sweden)

    Halida Yunita

    2015-01-01

    Earthquakes can damage thousands of buildings and infrastructure as well as cause the loss of thousands of lives. During an earthquake, the damage to buildings is mostly caused by the effect of local soil conditions. Depending on the soil type, the earthquake waves propagating from the epicenter to the ground surface will result in various behaviors of the soil. Several studies have been conducted to accurately obtain the soil response during an earthquake. The soil model used must be able to characterize the stress-strain behavior of the soil during the earthquake. This paper compares equivalent linear and nonlinear soil model responses. Analysis was performed on two soil types, Site Class D and Site Class E. An equivalent linear soil model leads to a constant value of shear modulus, while in a nonlinear soil model the shear modulus changes constantly, depending on the stress level, and shows inelastic behavior. The results from a comparison of both soil models are displayed in the form of maximum acceleration profiles and stress-strain curves.
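The contrast drawn here between a constant shear modulus and a strain-dependent one can be sketched with a hyperbolic backbone curve, one common nonlinear model (the paper does not specify which nonlinear formulation it used; G0 and the reference strain below are arbitrary illustrative values):

```python
def secant_modulus(gamma, g0, gamma_ref):
    """Secant shear modulus of a hyperbolic soil model:
    G = G0 / (1 + gamma/gamma_ref). An equivalent linear analysis would
    instead hold G fixed at a value consistent with the effective strain."""
    return g0 / (1.0 + gamma / gamma_ref)

def shear_stress(gamma, g0, gamma_ref):
    """Hyperbolic backbone curve tau = G0*gamma / (1 + gamma/gamma_ref),
    which saturates at tau_max = G0 * gamma_ref."""
    return secant_modulus(gamma, g0, gamma_ref) * gamma
```

At the reference strain the modulus has already dropped to half its small-strain value, which is why the two model classes diverge most for soft sites (Site Class E) and strong shaking.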

  1. Modeling earthquake sequences along the Manila subduction zone: Effects of three-dimensional fault geometry

    Science.gov (United States)

    Yu, Hongyu; Liu, Yajing; Yang, Hongfeng; Ning, Jieyuan

    2018-05-01

    To assess the potential of catastrophic megathrust earthquakes (MW > 8) along the Manila Trench, the eastern boundary of the South China Sea, we incorporate a 3D non-planar fault geometry in the framework of rate-state friction to simulate earthquake rupture sequences along the fault segment between 15°N-19°N of northern Luzon. Our simulation results demonstrate that the first-order fault geometry heterogeneity, the transitional segment (possibly related to the subducting Scarborough seamount chain) connecting the steeper south segment and the flatter north segment, controls earthquake rupture behaviors. The strong along-strike curvature at the transitional segment typically leads to partial ruptures of MW 8.3 and MW 7.8 along the southern and northern segments respectively. The entire fault occasionally ruptures in MW 8.8 events when the cumulative stress in the transitional segment is sufficiently high to overcome the geometrical inhibition. Fault shear stress evolution, represented by the S-ratio, is clearly modulated by the width of the seismogenic zone (W). At a constant plate convergence rate, a larger W indicates on average a lower interseismic stress loading rate and a longer rupture recurrence period, and could slow down or sometimes stop ruptures that initiated from a narrower portion. Moreover, the modeled interseismic slip rate before whole-fault rupture events is comparable with the coupling state that was inferred from the interplate seismicity distribution, suggesting the Manila Trench could potentially rupture in an M8+ earthquake.
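The rate-state friction framework named here is usually the Dieterich-Ruina law. A minimal sketch of that law and its steady state follows; the parameter values are generic laboratory-scale numbers, not those of the Manila Trench model:

```python
import math

def rate_state_friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=0.01):
    """Dieterich-Ruina rate-and-state friction coefficient:
    mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc).
    Parameter values are illustrative only."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """At steady state theta = Dc/V, so mu_ss = mu0 + (a - b)*ln(V/V0)."""
    return mu0 + (a - b) * math.log(v / v0)
```

With a < b (velocity weakening, as on the seismogenic segments simulated here) steady-state friction decreases with slip rate, which is what permits spontaneous rupture nucleation in such models.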

  2. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    Science.gov (United States)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, first, fault parameters were estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3), which occurred off El Salvador and Nicaragua in Central America. The tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method for tsunami early warning purposes should work to estimate a fault model which reproduces tsunami heights near the coast of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
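Why depth-dependent rigidity matters for tsunami earthquakes follows directly from the moment definition M0 = mu * A * D: for the same moment and rupture area, lower rigidity near the trench implies proportionally larger slip, and hence a larger tsunami. A sketch with assumed rigidity values (the paper's actual rigidity-depth profile is not given in the abstract):

```python
def average_slip(m0_nm, area_m2, rigidity_pa):
    """Average slip D = M0 / (mu * A), from the moment definition
    M0 = mu * A * D."""
    return m0_nm / (rigidity_pa * area_m2)

# Same moment and rupture area, shallow vs. deep rigidity (assumed values):
M0 = 5e20          # N*m, roughly an Mw 7.7 event
AREA = 100e3 * 40e3  # 100 km x 40 km rupture, in m^2
slip_shallow = average_slip(M0, AREA, 10e9)  # ~10 GPa near the trench
slip_deep = average_slip(M0, AREA, 40e9)     # ~40 GPa at depth
```

The factor-of-four slip difference for identical Mw is the essence of why a tsunami earthquake like 1992 Nicaragua produces waves far larger than its magnitude alone suggests.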

  3. Combining multiple earthquake models in real time for earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
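A simple special case of combining independent ground-motion predictions, each expressed as a Gaussian in (log) shaking intensity, is inverse-variance weighting, i.e., the normalized product of the Gaussian likelihoods. This is a sketch of the combination idea, not the specific algorithm of the article:

```python
def combine_gaussian_predictions(means, sigmas):
    """Combine independent Gaussian predictions by inverse-variance
    weighting (product of Gaussians). Returns (mean, sigma) of the
    combined prediction."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / total
    return mean, (1.0 / total) ** 0.5
```

The combined uncertainty is always smaller than that of the best single model, which is the quantitative payoff of fusing point-source, finite-fault, and direct ground-motion estimators.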

  4. Modelling the elements of country vulnerability to earthquake disasters.

    Science.gov (United States)

    Asef, M R

    2008-09-01

    Earthquakes have probably been the most deadly form of natural disaster in the past century. Diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often cause an earthquake of given characteristics to have different impacts on the affected region. This research focuses on the appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for identification and relative quantification of the severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of the countries concerned, and successfully explained which countries are more vulnerable to major disasters.

  5. Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years

    Science.gov (United States)

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support to the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
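The summary statistics used here (mean RI, 1-sigma scatter, coefficient of variation, SEM) follow directly from the list of inter-event intervals, and a time-independent baseline probability follows from the mean RI. A sketch (the intervals in the test are invented, and the Poisson number below deliberately differs from the paper's 29%, which comes from a time-dependent renewal model):

```python
import math

def recurrence_stats(intervals_yr):
    """Mean recurrence interval, standard deviation (1-sigma), coefficient
    of variation, and standard error of the mean for a paleoseismic record."""
    n = len(intervals_yr)
    mean = sum(intervals_yr) / n
    sd = (sum((x - mean) ** 2 for x in intervals_yr) / (n - 1)) ** 0.5
    return mean, sd, sd / mean, sd / n ** 0.5

def poisson_probability(mean_ri_yr, window_yr):
    """Time-independent (Poisson) probability of at least one event in
    window_yr. A renewal model conditioned on elapsed time, as used in
    the paper, will generally give a different value."""
    return 1.0 - math.exp(-window_yr / mean_ri_yr)
```

With the paper's mean RI of 161 yr, the Poisson 30-yr probability is about 17%; the higher 29% renewal-model value reflects the regularity (CV = 0.40) of the sequence.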

  6. Recurrent slow slip events as a barrier to the northward rupture propagation of the 2016 Pedernales earthquake (Central Ecuador)

    Science.gov (United States)

    Vaca, Sandro; Vallée, Martin; Nocquet, Jean-Mathieu; Battaglia, Jean; Régnier, Marc

    2018-01-01

    The northern Ecuador segment of the Nazca/South America subduction zone shows spatially heterogeneous interseismic coupling. Two highly coupled zones (0.4° S-0.35° N and 0.8° N-4.0° N) are separated by a low coupled area, hereafter referred to as the Punta Galera-Mompiche Zone (PGMZ). Large interplate earthquakes repeatedly occurred within the coupled zones in 1958 (Mw 7.7) and 1979 (Mw 8.1) for the northern patch and in 1942 (Mw 7.8) and 2016 (Mw 7.8) for the southern patch, while the whole segment is thought to have ruptured during the 1906 Mw 8.4-8.8 great earthquake. We find that during the last decade, the PGMZ has experienced regular and frequent seismic swarms. For the best documented sequence (December 2013-January 2014), a joint seismological and geodetic analysis reveals a six-week-long Slow Slip Event (SSE) associated with a seismic swarm. During this period, the microseismicity is organized into families of similar earthquakes spatially and temporally correlated with the evolution of the aseismic slip. The moment release (3.4 × 1018 Nm, Mw 6.3), over a 60 × 40 km area, is considerably larger than the moment released by earthquakes (5.8 × 1015 Nm, Mw 4.4) during the same time period. In 2007-2008, a similar seismic-aseismic episode occurred, with higher magnitudes both for the seismic and aseismic processes. Cross-correlation analyses of the seismic waveforms over a 15-year-long period further suggest a 2-year repeat time for seismic swarms, which also implies that SSEs recurrently affect this area. Such SSEs contribute to releasing the accumulated stress, likely explaining why the 2016 Pedernales earthquake did not propagate northward into the PGMZ.
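The magnitudes quoted for the slow slip and the swarm follow from the standard moment-to-magnitude conversion, which can be checked directly:

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude from seismic moment in N*m (Hanks & Kanamori):
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)
```

Plugging in the abstract's values reproduces Mw 6.3 for the aseismic slip (3.4 × 10^18 N·m) and Mw 4.4 for the swarm (5.8 × 10^15 N·m), and shows that the SSE released nearly 600 times more moment than the accompanying earthquakes.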

  7. Prediction of strong earthquake motions on rock surface using evolutionary process models

    International Nuclear Information System (INIS)

    Kameda, H.; Sugito, M.

    1984-01-01

    Stochastic process models are developed for prediction of strong earthquake motions for engineering design purposes. Earthquake motions with nonstationary frequency content are modeled by using the concept of evolutionary processes. Discussion is focused on the earthquake motions on bedrock, which are important for construction of nuclear power plants in seismic regions. On this basis, two earthquake motion prediction models are developed, one (EMP-IB Model) for prediction with given magnitude and epicentral distance, and the other (EMP-IIB Model) to account for the successive fault ruptures and the site location relative to the fault of great earthquakes.

  8. Modeling of earthquake ground motion in the frequency domain

    Science.gov (United States)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation
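The simulation idea of topic (i), building phase angles by accumulating phase differences and inverting a discrete Fourier transform, can be sketched as follows. In the dissertation the phase differences are simulated conditional on the Fourier amplitude; here they are drawn uniformly, a deliberate simplification, and the O(n^2) inverse DFT stands in for an FFT:

```python
import cmath, math, random

def simulate_ground_motion(amplitudes, seed=0):
    """Synthesize an acceleration time history by inverse discrete Fourier
    transform. Phase angles are built by accumulating random phase
    differences between adjacent frequency components (uniform here, a
    simplification of the conditional model). amplitudes[0] (DC) is ignored."""
    rng = random.Random(seed)
    n_freq = len(amplitudes)        # positive-frequency components
    n = 2 * n_freq                  # length of the output time series
    phase = 0.0
    spec = [0j] * n
    for k in range(1, n_freq):
        phase += rng.uniform(-math.pi, math.pi)   # random phase difference
        spec[k] = amplitudes[k] * cmath.exp(1j * phase)
        spec[n - k] = spec[k].conjugate()         # Hermitian symmetry -> real signal
    # inverse DFT (O(n^2); fine for a demonstration)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

Because the spectrum is forced to be Hermitian, the synthesized series is real-valued, and its amplitude spectrum matches the prescribed one while the phase-difference statistics control the envelope shape.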

  9. 4D stress evolution models of the San Andreas Fault System: Investigating time- and depth-dependent stress thresholds over multiple earthquake cycles

    Science.gov (United States)

    Burkhard, L. M.; Smith-Konter, B. R.

    2017-12-01

    4D simulations of stress evolution provide a rare insight into earthquake cycle crustal stress variations at seismogenic depths where earthquake ruptures nucleate. Paleoseismic estimates of earthquake offset and chronology, spanning multiple earthquake cycles, are available for many well-studied segments of the San Andreas Fault System (SAFS). Here we construct new 4D earthquake cycle time-series simulations to further study the temporally and spatially varying stress threshold conditions of the SAFS throughout the paleoseismic record. Interseismic strain accumulation, co-seismic stress drop, and postseismic viscoelastic relaxation processes are evaluated as a function of variable slip and locking depths along 42 major fault segments. Paleoseismic earthquake rupture histories provide a slip chronology dating back over 1000 years. Using GAGE Facility GPS and new Sentinel-1A InSAR data, we tune model locking depths and slip rates to compute the 4D stress accumulation within the seismogenic crust. Revised estimates of stress accumulation rate are most significant along the Imperial (2.8 MPa/100yr) and Coachella (1.2 MPa/100yr) faults, with a maximum change in stress rate along some segments of 11-17% in comparison with our previous estimates. Revised estimates of earthquake cycle stress accumulation are most significant along the Imperial (2.25 MPa), Coachella (2.9 MPa), and Carrizo (3.2 MPa) segments, with a 15-29% decrease in stress due to locking depth and slip rate updates, and also postseismic relaxation from the El Mayor-Cucapah earthquake. Because stress drops of major strike-slip earthquakes rarely exceed 10 MPa, these models may provide a lower bound on estimates of stress evolution throughout the historical era, and perhaps an upper bound on the expected recurrence interval of a particular fault segment. Furthermore, time-series stress models reveal temporally varying stress concentrations at 5-10 km depths, due to the interaction of neighboring fault

  10. Controls on the long term earthquake behavior of an intraplate fault revealed by U-Th and stable isotope analyses of syntectonic calcite veins

    Science.gov (United States)

    Williams, Randolph; Goodwin, Laurel; Sharp, Warren; Mozley, Peter

    2017-04-01

    U-Th dates on calcite precipitated in coseismic extension fractures in the Loma Blanca normal fault zone, Rio Grande rift, NM, USA, constrain earthquake recurrence intervals from 150-565 ka. This is the longest direct record of seismicity documented for a fault in any tectonic environment. Combined U-Th and stable isotope analyses of these calcite veins define 13 distinct earthquake events. These data show that for more than 400 ka the Loma Blanca fault produced earthquakes with a mean recurrence interval of 40 ± 7 ka. The coefficient of variation for these events is 0.40, indicating strongly periodic seismicity consistent with a time-dependent model of earthquake recurrence. Stochastic statistical analyses further validate the inference that earthquake behavior on the Loma Blanca was time-dependent. The time-dependent nature of these earthquakes suggests that the seismic cycle was fundamentally controlled by a stress renewal process. However, this periodic cycle was punctuated by an episode of clustered seismicity at 430 ka. Recurrence intervals within the earthquake cluster were as low as 5-11 ka. Breccia veins formed during this episode exhibit carbon isotope signatures consistent with having formed through pronounced degassing of a CO2 charged brine during post-failure, fault-localized fluid migration. The 40 ka periodicity of the long-term earthquake record of the Loma Blanca fault is similar in magnitude to recurrence intervals documented through paleoseismic studies of other normal faults in the Rio Grande rift and Basin and Range Province. We propose that it represents a background rate of failure in intraplate extension. The short-term, clustered seismicity that occurred on the fault records an interruption of the stress renewal process, likely by elevated fluid pressure in deeper structural levels of the fault, consistent with fault-valve behavior. 
The relationship between recurrence interval and inferred fluid degassing suggests that pore fluid pressure

  11. Time series modelling of the Kobe-Osaka earthquake recordings

    Directory of Open Access Journals (Sweden)

    N. Singh

    2002-01-01

    generated by an earthquake. With a view to comparing these two types of waveforms, Singh (1992) developed a technique for identifying a model in the time domain. Fortunately, this technique has been found useful in modelling the recordings of the killer earthquake that occurred in the Kobe-Osaka region of Japan at 5.46 am on 17 January 1995. The aim of the present study is to show how well the method for identifying a model developed by Singh (1992) can be used for describing the vibrations of the above-mentioned earthquake recorded at Charters Towers in Queensland, Australia.

  12. GEM - The Global Earthquake Model

    Science.gov (United States)

    Smolka, A.

    2009-04-01

    Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at-large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a

  13. Slip in the 1857 and earlier large earthquakes along the Carrizo Plain, San Andreas Fault.

    Science.gov (United States)

    Zielke, Olaf; Arrowsmith, J Ramón; Grant Ludwig, Lisa; Akçiz, Sinan O

    2010-02-26

    The moment magnitude (Mw) 7.9 Fort Tejon earthquake of 1857, with an approximately 350-kilometer-long surface rupture, was the most recent major earthquake along the south-central San Andreas Fault, California. Based on previous measurements of its surface slip distribution, rupture along the approximately 60-kilometer-long Carrizo segment was thought to control the recurrence of 1857-like earthquakes. New high-resolution topographic data show that the average slip along the Carrizo segment during the 1857 event was 5.3 +/- 1.4 meters, eliminating the core assumption for a linkage between Carrizo segment rupture and recurrence of major earthquakes along the south-central San Andreas Fault. Earthquake slip along the Carrizo segment may recur in earthquake clusters with cumulative slip of approximately 5 meters.

  14. Living with earthquakes - development and usage of earthquake-resistant construction methods in European and Asian Antiquity

    Science.gov (United States)

    Kázmér, Miklós; Major, Balázs; Hariyadi, Agus; Pramumijoyo, Subagyo; Ditto Haryana, Yohanes

    2010-05-01

    Earthquakes are among the most horrible events of nature due to unexpected occurrence, for which no spiritual means are available for protection. The only way of preserving life and property is applying earthquake-resistant construction methods. Ancient Greek architects of public buildings applied steel clamps embedded in lead casing to hold together columns and masonry walls during frequent earthquakes in the Aegean region. Elastic steel provided strength, while plastic lead casing absorbed minor shifts of blocks without fracturing rigid stone. Romans invented concrete and built all sizes of buildings as a single, inflexible unit. Masonry surrounding and decorating the concrete core of the wall did not bear load. Concrete resisted minor shaking, yielding only to forces higher than fracture limits. Roman building traditions survived the Dark Ages, and 12th century Crusader castles erected in earthquake-prone Syria survive until today in reasonably good condition. Concrete and steel clamping persisted side-by-side in the Roman Empire. Concrete was used for cheap construction as compared to building of masonry. Applying lead-encased steel increased costs, and was avoided whenever possible. Columns of the various forums in Pompeii, Italy, mostly lack steel fittings despite being situated in a well-known earthquake-prone area. Whether frequent recurrence of earthquakes in the Naples region was known to inhabitants of Pompeii might be a matter of debate. Seemingly the shock of the AD 62 earthquake was not enough to apply well-known protective engineering methods throughout the reconstruction of the city before the AD 79 volcanic catastrophe. An independent engineering tradition developed on the island of Java (Indonesia). The mortar-less construction technique of 8-9th century Hindu masonry shrines around Yogyakarta would allow scattering of blocks during earthquakes. To prevent dilapidation, an intricate mortise-and-tenon system was carved into adjacent faces of blocks. Only the

  15. Characteristics of broadband slow earthquakes explained by a Brownian model

    Science.gov (United States)

    Ide, S.; Takeo, A.

    2017-12-01

    The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered a broadband phenomenon including tectonic tremors, low frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of broadband slow earthquakes may not have been widely accepted, most recent observations are consistent with it. Here, we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For a shorter duration, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic
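The core of the model, a source size that random-walks under Gaussian fluctuations with a relaxation time constant, can be sketched in a few lines. This is an illustrative caricature, not Ide's actual formulation: the relaxation term, the reflection at zero, and taking the moment rate proportional to the size are all simplifying assumptions made here.

```python
import random

def brownian_slow_earthquake(n_steps, dt=1.0, sigma=1.0, tau=100.0, seed=1):
    """Illustrative Brownian slow-earthquake walk: a non-negative source
    'size' r receives a Gaussian fluctuation each step and relaxes with
    time constant tau; the moment release rate is taken proportional to r.
    Returns the cumulative moment history (arbitrary units)."""
    rng = random.Random(seed)
    r, moment, history = 0.0, 0.0, []
    for _ in range(n_steps):
        r += rng.gauss(0.0, sigma) * dt ** 0.5 - (r / tau) * dt
        r = abs(r)              # reflect at zero: size stays non-negative
        moment += r * dt        # cumulative moment release
        history.append(moment)
    return history
```

Averaged over durations much longer than tau, such a walk releases moment at a roughly constant rate, which is the long-time regime that reproduces the moment-duration scaling discussed above.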

  16. Meeting the Challenge of Earthquake Risk Globalisation: Towards the Global Earthquake Model GEM (Sergey Soloviev Medal Lecture)

    Science.gov (United States)

    Zschau, J.

    2009-04-01

    Earthquake risk, like natural risks in general, has become a highly dynamic and globally interdependent phenomenon. Due to the "urban explosion" in the Third World, an increasingly complex cross-linking of critical infrastructure and lifelines in the industrial nations and a growing globalisation of the world's economies, we are presently facing a dramatic increase in our society's vulnerability to earthquakes in practically all seismic regions on our globe. Such fast and global changes cannot be captured with conventional earthquake risk models anymore. The sciences in this field are, therefore, asked to come up with new solutions that are no longer exclusively aiming at the best possible quantification of the present risks but also keep an eye on their changes with time and allow these to be projected into the future. This does not apply to the vulnerability component of earthquake risk alone, but also to its hazard component, which has been realized to be time-dependent, too. The challenges of earthquake risk dynamics and globalisation have recently been accepted by the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD - GSF), which initiated the "Global Earthquake Model (GEM)", a public-private partnership for establishing an independent standard to calculate, monitor and communicate earthquake risk globally, raise awareness and promote mitigation.

  17. Evidence of Multiple Ground-rupturing Earthquakes in the Past 4000 Years along the Pasuruan Fault, East Java, Indonesia

    Science.gov (United States)

    Marliyani, G. I.; Arrowsmith, R.; Helmi, H.

    2015-12-01

    Instrumental and historical records of earthquakes, supplemented by paleoseismic constraints, can help reveal the earthquake potential of an area. The Pasuruan fault is a high-angle normal fault with prominent youthful scarps cutting young deltaic sediments on the north coast of East Java, Indonesia, and may pose significant hazard to this densely populated region. The fault has not previously been considered a significant structure and was mapped only as a lineament with no sense of motion; information regarding past earthquakes along it has not been available. The fault is well defined both in imagery and in the field as a ~13 km long, 2-50 m high scarp. Open and filled fractures and natural exposures of the south-dipping fault plane indicate a normal sense of motion. We excavated two fault-perpendicular trenches across a relay ramp identified during our surface mapping. Evidence for past earthquakes (documented in both trenches) includes upward fault terminations with associated fissure fills, colluvial wedges and scarp-derived debris, folding, and angular unconformities. The ages of the events are constrained by 23 radiocarbon dates on detrital charcoal; we calibrated the dates using IntCal13 and used OxCal to build the age model of the events. Our preliminary age model indicates that since 2006±134 B.C. there have been at least five ground-rupturing earthquakes along the fault. The oldest event identified in the trenches, however, is not well dated. Our modeled 95th-percentile ranges for the four more recent earthquakes (and their means) are A.D. 1762-1850 (1806), A.D. 1646-1770 (1708), A.D. 1078-1648 (1363), and A.D. 726-1092 (909), yielding a rough recurrence interval of 302±63 yr. These new data imply that the Pasuruan fault is more active than previously thought; additional well-dated earthquakes are necessary to build a solid earthquake recurrence model. Rupture along the whole section implies a minimum earthquake magnitude of 6.3, considering 13 km as the minimum surface rupture
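
The rough recurrence estimate quoted above can be reproduced, to first order, from the mean modelled ages alone. This back-of-envelope sketch ignores the dating uncertainties that the full OxCal age model propagates:

```python
import numpy as np

# Mean modelled ages (A.D.) of the four well-dated ruptures quoted
# above; the oldest, poorly dated event is excluded.
event_ages = np.array([909, 1363, 1708, 1806])

intervals = np.diff(event_ages)   # inter-event times in years
mean_ri = intervals.mean()
print(mean_ri)  # 299.0, comparable to the 302±63 yr of the age model
```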

  18. Great earthquakes and slow slip events along the Sagami trough and outline of the Kanto Asperity Project

    Science.gov (United States)

    Kobayashi, R.; Yamamoto, Y.; Sato, T.; Shishikura, M.; Ito, H.; Shinohara, M.; Kawamura, K.; Shibazaki, B.

    2010-12-01

    The Kanto region is one of the most densely populated urban areas in the world. Its complicated plate configuration results from a trench-trench-trench (T-T-T) triple junction, an arc-arc collision zone, and the very shallow angle between the axis of the Sagami trough and the subduction direction. Great earthquakes along the Sagami trough have occurred repeatedly: the 1703 Genroku and 1923 (Taisho) Kanto earthquakes caused severe damage in the Tokyo metropolitan area. Intriguingly, slow slip events have also repeatedly occurred in an area adjacent to the asperities of the great earthquakes, off the Boso Peninsula (e.g., Ozawa et al., 2007). In the Nankai and Cascadia subduction zones, slow slip events occur at deeper levels than the asperity, in a transition zone between the asperity and a region of steady slip. In contrast, slow slip events in the Kanto region have occurred at relatively shallow depths, at the same level as the asperity, raising the possibility of friction controlled by conditions (temperature and pressure) different from those encountered at Nankai and Cascadia. We focus on three types of seismic events occurring repeatedly at almost the same depth range of the seismogenic zone along the Sagami trough (5-20 km): (1) The 1923 M~7.9 Taisho earthquake, located in Sagami Bay. Maximum slip is about 6 m, the recurrence interval is 200-400 yr, and the coupling rate is 80-100% (here "coupling rate" = slip during earthquakes or slow slip events / [rate of motion of the Philippine Sea Plate × recurrence interval]). (2) The 1703 M~8.2 Genroku earthquake, located in Sagami Bay but also extending to the southern part of the Boso Peninsula. Maximum slip is 15-20 m, the recurrence interval is ~2000 yr, and the coupling rate at the southern part of the Boso Peninsula is 10-30%. (3) Boso slow slip events, located southeast of the Boso Peninsula. Maximum slip is 15-20 cm over ~10 days, the recurrence interval is 5-6 yr, and the coupling rate is 70
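
The coupling-rate definition used above, slip released during an event divided by the plate motion accumulated over one recurrence interval, is a one-line computation. The ~25 mm/yr plate rate below is an assumed illustrative value, not taken from the abstract:

```python
def coupling_rate(slip_m, plate_rate_mm_yr, recurrence_yr):
    """Fraction of plate convergence released by the event:
    slip / (plate rate x recurrence interval)."""
    accumulated_m = plate_rate_mm_yr * 1e-3 * recurrence_yr
    return slip_m / accumulated_m

# Taisho-type event: ~6 m slip, ~300 yr recurrence; the ~25 mm/yr
# plate rate is an assumption for illustration.
print(round(coupling_rate(6.0, 25.0, 300.0), 3))  # 0.8, i.e. 80 per cent
```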

  19. The failure of earthquake failure models

    Science.gov (United States)

    Gomberg, J.

    2001-01-01

    In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain the triggering of seismicity by transient or "dynamic" stress changes, such as those associated with passing seismic waves. The models of this class share the feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate-state friction or subcritical crack growth, in which the properties characterizing failure are slip and crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating-failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ, or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations, such as compaction of saturated fault gouge leading to pore-pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps the assumptions about accelerating failure, and/or that triggering advances the failure times of a population of inevitable earthquakes, are incorrect.

  20. Self-exciting point process in modeling earthquake occurrences

    International Nuclear Information System (INIS)

    Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.

    2017-01-01

    In this paper, we present a procedure for modeling earthquake occurrences based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity. Because earthquakes can be regarded as point patterns with a temporal clustering feature, we use a self-exciting point process to model the conditional intensity function. Main shocks are selected via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)

  1. Historical earthquake research in Austria

    Science.gov (United States)

    Hammerl, Christa

    2017-12-01

    Austria has moderate seismicity: on average the population feels 40 earthquakes per year, or approximately three per month. A severe earthquake causing light building damage is expected roughly every 2 to 3 years. Severe damage to buildings (I0 > 8° EMS) occurs significantly less frequently, with an average recurrence period of about 75 years. For this reason, historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the former Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly, along with the most important studies; finally, as an example of a recently carried out case study, one of the strongest earthquakes in Austria's past, the earthquake of 17 July 1670, is presented. Research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.

  2. Paleoseismic Records of 1762 and Similar Prior Earthquakes Along the South-Eastern Coast of Bangladesh

    Science.gov (United States)

    Mondal, D. R.; McHugh, C. M.; Mortlock, R. A.; Gurung, D.; Bastas-Hernandez, A.; Steckler, M. S.; Seeber, L.; Mustaque, S.; Goodbred, S. L., Jr.; Akhter, S. H.; Saha, P.

    2014-12-01

    The great 1762 Arakan earthquake caused subsidence and uplift along 700 km of the Arakan coast and is thought to derive from a huge megathrust rupture reaching northward onto the southeastern coast of Bangladesh. Paleoseismic investigations were conducted in that area to document the effects of that and prior earthquakes. U/Th ages obtained from isochron analysis of uplifted dead coral heads of the Porites species, collected along a south-to-north transect on the island's east coast, reveal at least three growth interruptions caused by abrupt relative sea-level changes within the past 1300 years, which we interpret to be associated with megathrust ruptures. The ages show distinct events approximately 250, 900 and 1300 years ago; the youngest corresponds to the 1762 great Arakan earthquake. The two prior events, at ~1100 and ~700 AD, suggest an average recurrence interval of 400-600 years. Along the coast of Teknaf, we mapped a ~2 m uplifted terrace; marine shells on top of the terrace, C-14 dated at 1695-1791 AD, link the uplift to the 1762 earthquake. Based on this evidence and previous work (Wang et al., 2013 and Aung et al., 2008), we estimate the 1762 rupture to be at least 700 km long, from Cheduba Island to the Sitakund anticline, encompassing the Teknaf Peninsula. Considering a 14 mm/yr convergence rate and a 400-600 yr recurrence interval, this rupture zone has now accumulated enough elastic deformation to generate a M~8.4 earthquake, close to the M8.8 estimated by Cummins (2007) for the 1762 earthquake. Published recurrence intervals based on C-14 ages along the Myanmar coast, ~90 km south of Bangladesh, reveal three ruptures within the last 3400 years with an average recurrence interval of 1000-2000 years (Aung et al., 2008). While the 1762 rupture reached across both areas, some of the prior ruptures may be confined to one or the other, with a smaller magnitude. Our precise U-Th ages provide evidence of recurrence intervals of
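
The magnitude estimate above follows from standard scaling: seismic moment M0 = μLWD and the Hanks-Kanamori moment magnitude. The ~40 km seismogenic width and the 30 GPa rigidity below are assumptions for illustration; the length and slip deficit follow the abstract's numbers:

```python
import math

def moment_magnitude(length_m, width_m, slip_m, mu=3.0e10):
    """Seismic moment M0 = mu * L * W * D (N*m) and the Hanks-Kanamori
    moment magnitude Mw = (2/3) * (log10(M0) - 9.1)."""
    m0 = mu * length_m * width_m * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Slip deficit: 14 mm/yr over ~500 yr gives ~7 m.  The 700 km length
# comes from the abstract; the ~40 km width and 30 GPa rigidity are
# illustrative assumptions.
slip_deficit = 0.014 * 500
print(round(moment_magnitude(700e3, 40e3, slip_deficit), 1))  # 8.4
```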

  3. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  4. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  5. Revisiting Slow Slip Events Occurrence in Boso Peninsula, Japan, Combining GPS Data and Repeating Earthquakes Analysis

    Science.gov (United States)

    Gardonio, B.; Marsan, D.; Socquet, A.; Bouchon, M.; Jara, J.; Sun, Q.; Cotte, N.; Campillo, M.

    2018-02-01

    Slow slip events (SSEs) regularly occur near the Boso Peninsula, central Japan. Their recurrence interval decreased from 6.4 to 2.2 years between 1996 and 2014. It is important to better constrain the slip history of this area, especially as models show that recurrence intervals could become shorter prior to the occurrence of a large interplate earthquake nearby. We analyze the seismic waveforms of more than 2,900 events (M≥1.0) that occurred in the Boso Peninsula, Japan, from 1 April 2004 to 4 November 2015, calculating the correlation and the coherence between each pair of events in order to define groups of repeating earthquakes. The cumulative number of repeating earthquakes suggests the existence of two slow slip events that have escaped detection so far; small transient displacements observed in the time series of nearby GPS stations confirm these results. This detection scheme, coupling repeating earthquakes with GPS analysis, allows the detection of small SSEs that were not seen before by classical methods. This work brings new information on the diversity of SSEs and demonstrates that the SSEs in the Boso area have a more complex history than previously considered.
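
Grouping repeating earthquakes by waveform similarity, as done above, typically means computing the maximum normalized cross-correlation between event pairs and applying a threshold. A minimal sketch (the 0.95 threshold and the synthetic waveforms are illustrative):

```python
import numpy as np

def max_normalized_cc(x, y):
    """Maximum normalized cross-correlation over all lags: the
    waveform-similarity measure used to group repeating earthquakes."""
    x = x - x.mean()
    y = y - y.mean()
    x /= np.linalg.norm(x)
    y /= np.linalg.norm(y)
    return np.correlate(x, y, mode="full").max()

# Synthetic "event" waveforms: the second is a time-shifted copy of
# the first, as expected for a repeating earthquake pair.
t = np.linspace(0.0, 1.0, 200)
w1 = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
w2 = np.roll(w1, 7)
print(max_normalized_cc(w1, w2) > 0.95)  # True
```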

  6. Standards for Documenting Finite‐Fault Earthquake Rupture Models

    KAUST Repository

    Mai, Paul Martin

    2016-04-06

    In this article, we propose standards for documenting and disseminating finite‐fault earthquake rupture models, and related data and metadata. A comprehensive documentation of the rupture models, a detailed description of the data processing steps, and facilitating the access to the actual data that went into the earthquake source inversion are required to promote follow‐up research and to ensure interoperability, transparency, and reproducibility of the published slip‐inversion solutions. We suggest a formatting scheme that describes the kinematic rupture process in an unambiguous way to support subsequent research. We also provide guidelines on how to document the data, metadata, and data processing. The proposed standards and formats represent a first step to establishing best practices for comprehensively documenting input and output of finite‐fault earthquake source studies.

  7. Standards for Documenting Finite‐Fault Earthquake Rupture Models

    KAUST Repository

    Mai, Paul Martin; Shearer, Peter; Ampuero, Jean‐Paul; Lay, Thorne

    2016-01-01

    In this article, we propose standards for documenting and disseminating finite‐fault earthquake rupture models, and related data and metadata. A comprehensive documentation of the rupture models, a detailed description of the data processing steps, and facilitating the access to the actual data that went into the earthquake source inversion are required to promote follow‐up research and to ensure interoperability, transparency, and reproducibility of the published slip‐inversion solutions. We suggest a formatting scheme that describes the kinematic rupture process in an unambiguous way to support subsequent research. We also provide guidelines on how to document the data, metadata, and data processing. The proposed standards and formats represent a first step to establishing best practices for comprehensively documenting input and output of finite‐fault earthquake source studies.

  8. Human casualties in earthquakes: Modelling and mitigation

    Science.gov (United States)

    Spence, R.J.S.; So, E.K.M.

    2011-01-01

    Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.

  9. Singular limit analysis of a model for earthquake faulting

    DEFF Research Database (Denmark)

    Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall

    2017-01-01

    In this paper we consider the one-dimensional spring-block model describing earthquake faulting. Using geometric singular perturbation theory and the blow-up method, we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from
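
The spring-block model referred to above can be sketched as a single quasi-static slider with static/dynamic friction; the parameter values and the instantaneous-slip treatment are simplifying assumptions, but the stick-slip limit cycle (periodic "earthquakes") emerges directly:

```python
import numpy as np

def spring_block(k=1.0, v_plate=1e-3, mu_s=1.0, mu_d=0.6,
                 n_steps=200_000, dt=0.01):
    """Quasi-static one-block spring-slider (parameter values are
    illustrative).  The load point advances at v_plate; when the
    spring force reaches the static strength mu_s, the block slips
    instantaneously until the force drops to the dynamic level mu_d."""
    x_load, x_block, events = 0.0, 0.0, []
    for i in range(n_steps):
        x_load += v_plate * dt
        if k * (x_load - x_block) >= mu_s:   # failure criterion
            slip = (mu_s - mu_d) / k         # stress drop / stiffness
            x_block += slip
            events.append((i * dt, slip))
    return events

events = spring_block()
print(len(events), np.diff([t for t, _ in events]))  # evenly spaced events
```

With constant friction levels the recurrence interval is fixed at (mu_s - mu_d)/(k * v_plate), which is exactly the periodicity the singular-limit analysis above studies in a more rigorous setting.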

  10. Seismic ground motion modelling and damage earthquake scenarios: A bridge between seismologists and seismic engineers

    International Nuclear Information System (INIS)

    Panza, G.F.; Romanelli, F.; Vaccari, F. (E-mails: Luis.Decanini@uniroma1.it; Fabrizio.Mollaioli@uniroma1.it)

    2002-07-01

    The input for seismic risk analysis can be expressed as a description of 'ground shaking scenarios' or as probabilistic maps of relevant parameters. The probabilistic approach, unavoidably based on rough assumptions and models (e.g. recurrence and attenuation laws), can be misleading, as it cannot take into account with satisfactory accuracy some of the most important aspects, such as the rupture process, directivity and site effects. This is evidenced by comparing recent recordings with the values predicted by probabilistic methods. We prefer a scenario-based, deterministic approach in view of the limited seismological data, the local irregularity of the occurrence of strong earthquakes, and the multiscale seismicity model, which is capable of reconciling two apparently conflicting ideas: the characteristic earthquake concept and the self-organized criticality paradigm. Where numerical modeling compares successfully with records, the synthetic seismograms permit microzoning based upon a set of possible scenario earthquakes. Where no recordings are available, the synthetic signals can be used to estimate the ground motion without having to wait for a strong earthquake to occur (pre-disaster microzonation). In both cases the use of modeling is necessary, since the so-called local site effects can depend strongly on the properties of the seismic source and can be properly defined only by means of envelopes. The joint use of reliable synthetic signals and observations permits the computation of advanced hazard indicators (e.g. damaging potential) that take local soil properties into account. The envelope of synthetic elastic energy spectra reproduces the distribution of the energy demand in the frequency range most relevant for seismic engineering. The synthetic accelerograms can be fruitfully used for the design and strengthening of structures, also when innovative techniques, such as seismic isolation, are employed. For these

  11. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-22

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
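
The MDS step itself, embedding rupture models as points so that inter-model distances are approximately preserved, can be sketched with classical (Torgerson) scaling in plain NumPy; the toy distance matrix below stands in for the model-to-model metrics discussed above:

```python
import numpy as np

def classical_mds(d, n_components=2):
    """Classical (Torgerson) MDS: double-centre the squared distance
    matrix and embed along its top eigenvectors, so that pairwise
    distances are reproduced as well as possible."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ (d ** 2) @ j               # Gram matrix
    w, v = np.linalg.eigh(b)                  # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_components]  # keep the largest
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy inter-model "distance" matrix: models 0 and 1 are similar,
# model 2 is an outlier (e.g. a tsunami-data-derived model).
d = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.0],
              [4.0, 4.0, 0.0]])
coords = classical_mds(d)
print(round(float(np.linalg.norm(coords[0] - coords[1])), 3))  # 1.0
```

Clusters and outliers, such as the tsunami-derived Tohoku models noted above, then appear directly as the spread of points in the MDS plane.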

  12. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby; Mai, Paul Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran Kumar

    2015-01-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for

  13. A new model on the cause of Tangshan earthquakes in 1976

    Science.gov (United States)

    Wang, Jian

    2001-09-01

    In this paper the shortcomings of existing explanations of the cause of the 1976 Tangshan earthquakes are pointed out. Earthquake phenomena around the Tangshan earthquakes are analyzed synthetically. It is noted that the most prominent seismic phenomenon is a denseness of ML=4 events from 1973 to 1975, while ML=3 and ML=2 activity was not enhanced in the same temporal-spatial interval. We argue that this phenomenon corresponds to the relative integrity of the crustal medium under high regional stress. Assuming that seismicity in the circumjacent region reflects the extent to which the surrounding plates press against the Chinese mainland, it is inferred that multiple dynamical processes operated in the North China region in the 1970s, supplying the basic dynamical source of the Tangshan earthquakes. A model of multiple dynamical processes and local weakening of the crust is proposed to explain the cause of the Tangshan earthquakes. This model can explain many seismic phenomena related to them.

  14. Short-term and long-term earthquake occurrence models for Italy: ETES, ERS and LTST

    Directory of Open Access Journals (Sweden)

    Maura Murru

    2010-11-01

    This study describes three earthquake occurrence models as applied to the whole Italian territory, to assess the occurrence probabilities of future (M ≥ 5.0) earthquakes: two short-term (24 hour) models, and one long-term (5 and 10 year) model. The first model, for short-term forecasts, is a purely stochastic epidemic-type earthquake sequence (ETES) model. The second short-term model is an epidemic rate-state (ERS) forecast, physically constrained by applying the Dieterich rate-state constitutive law to earthquake clustering. The third forecast is based on a long-term stress transfer (LTST) model that considers the perturbations of earthquake probability for interacting faults by static Coulomb stress changes. These models have been submitted to the Collaboratory for the Study of Earthquake Predictability (CSEP) for forecast testing for Italy (ETH Zurich), and they were locked down to test their validity on real data in a future setting starting from August 1, 2009.

  15. Relaxation creep model of impending earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Morgounov, V. A. [Russian Academy of Sciences, Institute of Physics of the Earth, Moscow (Russian Federation)

    2001-04-01

    An alternative view of the current status and perspectives of seismic prediction studies is discussed. Within the problem of the uncertainty relation between the cognoscibility and unpredictability of earthquakes, priority is given to work on short-term earthquake prediction, because the final stage of earthquake nucleation is characterized by a substantial activation of the process: its strain rate increases by orders of magnitude and the signal-to-noise ratio is considerably increased. Based on the phenomenon of creep under stress-relaxation conditions, a model is proposed to explain the different appearances of precursors of impending tectonic earthquakes. The onset of tertiary creep appears to correspond to the onset of instability, and the fault inevitably fails unless it is unloaded. At this stage the process acquires, to the greatest extent, a self-regulating character and the property of irreversibility, one of the important components of prediction reliability. In situ data suggesting that it is possible in principle to diagnose the preparation process by ground measurements of acoustic and electromagnetic emission in rocks held under constant strain, in a condition of self-relaxed stress, up to the moment of fracture are discussed in this context. It was found that electromagnetic emission precedes, but does not accompany, the phase of macrocrack development.

  16. The Non-Regularity of Earthquake Recurrence in California: Lessons From Long Paleoseismic Records in Simple vs Complex Fault Regions (Invited)

    Science.gov (United States)

    Rockwell, T. K.

    2010-12-01

    A long paleoseismic record at Hog Lake on the central San Jacinto fault (SJF) in southern California documents evidence for 18 surface ruptures in the past 3.8-4 ka. This yields a long-term recurrence interval of about 210 years, consistent with the fault's slip rate of ~16 mm/yr and field observations of 3-4 m of displacement per event. However, during the past 3800 years the fault has switched between a quasi-periodic mode of earthquake production, during which the recurrence interval is similar to the long-term average, and clustered behavior, with inter-event periods as short as a few decades. There are also some periods as long as 450 years with no surface ruptures, and these are commonly followed by one to several closely timed ruptures. The coefficient of variation (CV) for the timing of these earthquakes is about 0.6 for the past 4000 years (17 intervals). Similar behavior has been observed on the San Andreas fault (SAF) south of the Transverse Ranges, where clusters of earthquakes have been followed by periods of lower seismic production, and the CV is as high as 0.7 for some portions of the fault. In contrast, the central North Anatolian fault (NAF) in Turkey, which ruptured in 1944, appears to have produced ruptures with similar displacement at fairly regular intervals for the past 1600 years. With a CV of 0.16 for timing, and close to 0.1 for displacement, the 1944 rupture segment near Gerede appears to have been both periodic and characteristic. The SJF and SAF are part of a broad plate-boundary system with multiple parallel strands with significant slip rates. Additional faults lie to the east (Eastern California shear zone) and west (faults of the LA basin and southern California Borderland), which makes the southern SAF system a complex and broad plate-boundary zone. In comparison, the 1944 rupture section of the NAF is simple, straight and highly localized, which contrasts with the complex system of parallel faults in southern
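
The coefficient of variation used above to contrast quasi-periodic and clustered faults is simply the standard deviation of inter-event times divided by their mean. A sketch with hypothetical event dates (CV near 0 means periodic, near 1 Poisson-like, above 1 clustered):

```python
import numpy as np

def coefficient_of_variation(event_times):
    """CV of earthquake recurrence: std/mean of inter-event times."""
    intervals = np.diff(np.sort(event_times))
    return intervals.std(ddof=1) / intervals.mean()

# Hypothetical records: one perfectly periodic (210 yr spacing, an
# NAF-like end-member) and one with randomly scattered event dates.
periodic = np.arange(0.0, 4000.0, 210.0)
scattered = np.sort(np.random.default_rng(1).uniform(0.0, 4000.0, 19))
print(coefficient_of_variation(periodic))  # 0.0
print(coefficient_of_variation(scattered) > coefficient_of_variation(periodic))  # True
```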

  17. Interevent times in a new alarm-based earthquake forecasting model

    Science.gov (United States)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. The MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion, or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of the proposed MR model, a composite Japan-wide earthquake catalogue for the years 679 to 2012 was compiled from the Japan Meteorological Agency catalogue for 1923-2012 and the Utsu historical seismicity records for 679-1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing the forecasting error, defined as the sum of the miss and alarm rates. This testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent after using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
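A minimal sketch of the MR statistic as defined above (the inverse of the index of dispersion of interevent times, i.e. mean over variance); the two synthetic sequences are illustrative, not drawn from the Japanese catalogue:

```python
import numpy as np

rng = np.random.default_rng(0)

def moment_ratio(interevent_times):
    # MR = mean / variance: the inverse of the index of dispersion
    # (Fano factor) of the interevent-time sample.
    t = np.asarray(interevent_times, dtype=float)
    return t.mean() / t.var(ddof=1)

# Quasi-periodic sequence: small scatter about a 210-yr interval -> high MR.
regular = rng.normal(210.0, 20.0, size=200)
# Poissonian/clustered sequence: exponential interevent times -> low MR.
clustered = rng.exponential(210.0, size=200)

print(moment_ratio(regular), moment_ratio(clustered))
```

Regular sequences score much higher than clustered ones, which is what lets spatial MR maps flag anomalous changes in background seismicity.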

  18. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In such numerical simulations, spatial heterogeneity is usually considered, not only to represent real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even when we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not captured by models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as stochastic. The healing process of faults may likewise be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal

  19. Foreshock and aftershocks in simple earthquake models.

    Science.gov (United States)

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
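A toy version of the lattice model described above can be sketched as follows: a nearest-neighbour, non-conservative OFC-style automaton with a fixed fraction of higher-threshold asperity cells. The published model uses long-range interactions, omitted here for brevity, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32          # lattice size
ALPHA = 0.2     # fraction of dropped stress passed to each of 4 neighbours
F_TH = 1.0      # failure threshold of ordinary cells
F_ASP = 3.0     # higher threshold of asperity cells
P_ASP = 0.05    # fraction of asperity cells

stress = rng.uniform(0.0, F_TH, (N, N))
thresh = np.where(rng.random((N, N)) < P_ASP, F_ASP, F_TH)

def one_event(stress, thresh):
    """Load uniformly until the weakest cell fails, cascade, return event size."""
    stress += (thresh - stress).min()
    size = 0
    failing = np.argwhere(stress >= thresh)
    while failing.size:
        for i, j in failing:
            size += 1
            s = stress[i, j]
            stress[i, j] = 0.0
            # Redistribute part of the dropped stress; open boundaries
            # dissipate whatever leaves the lattice.
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    stress[ni, nj] += ALPHA * s
        failing = np.argwhere(stress >= thresh)
    return size

sizes = [one_event(stress, thresh) for _ in range(500)]
```

With ALPHA < 0.25 the redistribution is dissipative, so every avalanche terminates; asperity cells fail rarely but release several times the ordinary stress drop, producing occasional large events among many small ones, which is the qualitative behavior the abstract describes.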

  20. Research and development of earthquake-resistant structure model for nuclear fuel facility

    Energy Technology Data Exchange (ETDEWEB)

    Uryu, Mitsuru; Terada, S.; Shioya, I. [and others]

    1999-05-01

    It is important for a nuclear fuel facility to reduce the earthquake input intensity transmitted to the upper part of the building. To study the response of the building to earthquakes, an earthquake-resistant structure model was constructed. The structure model weighs 90 tons and is supported by multiple layers of natural rubber and steel. A weight-support device called a 'soft landing' device, made of Teflon, is also installed to prevent the structure model from losing its function under excessive deformation. The dynamic response characteristics of the structure model under sine waves and simulated seismic waves were measured and analyzed. Soil tests of the fourth geologic stratum, on which the structure model is sited, were made to confirm the safety of soil-structure interactions during earthquakes. (M. Suetake)

  1. Three Millennia of Seemingly Time-Predictable Earthquakes, Tell Ateret

    Science.gov (United States)

    Agnon, Amotz; Marco, Shmuel; Ellenblum, Ronnie

    2014-05-01

    Among the various idealized recurrence models of large earthquakes, the "time-predictable" model has a straightforward mechanical interpretation, consistent with simple friction laws. On a time-predictable fault, the time interval between an earthquake and its predecessor is proportional to the slip during the predecessor. The alternative "slip-predictable" model states that the slip during earthquake rupture is proportional to the preceding time interval. Verifying these models requires extended records of high-precision data for both the timing and the amount of slip. The precision of paleoearthquake data can rarely confirm or rule out predictability, and recent papers argue for either time- or slip-predictable behavior. The Ateret site, on the trace of the Dead Sea fault at the Jordan Gorge segment, offers unique precision for determining space-time patterns. Five consecutive slip events, each associated with deformed and offset sets of walls, are correlated with historical earthquakes. Two correlations are based on detailed archaeological, historical, and numismatic evidence; the other three are tentative. The offsets of three of the events are determined with high precision; the other two are less certain. Accepting all five correlations, the fault exhibits a striking time-predictable behavior, with a long-term slip rate of 3 mm/yr. However, the ~0.5 m rupture of 30 October 1759 predicts a subsequent rupture along the Jordan Gorge toward the end of the last century. We speculate that earthquakes on secondary faults (the 25 November 1759 event on the Rachaya branch and the 1 January 1837 event on the Roum branch, both M ≥ 7) have disrupted the 3 kyr time-predictable pattern.
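The time-predictable rule described above amounts to one line of arithmetic: the quiet interval after an event equals the slip it released divided by the long-term slip rate. Using the ~0.5 m slip of 1759 and the 3 mm/yr rate quoted in the abstract (a back-of-envelope sketch, not the authors' calculation):

```python
SLIP_RATE_M_PER_YR = 3.0e-3   # long-term slip rate quoted for the site

def next_event_year(event_year, coseismic_slip_m, slip_rate=SLIP_RATE_M_PER_YR):
    # Time-predictable rule: the interval to the next event is proportional
    # to the slip released in the previous one.
    return event_year + coseismic_slip_m / slip_rate

# ~0.5 m of slip in the 30 October 1759 rupture "loads forward" to:
predicted = next_event_year(1759, 0.5)
print(predicted)
```

Under these rounded inputs the predicted date falls in the 1920s; the abstract's wording ("toward the end of the last century") suggests the authors' unrounded slip and rate values differ somewhat, so treat this purely as an illustration of the rule.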

  2. Earthquake engineering development before and after the March 4, 1977, Vrancea, Romania earthquake

    International Nuclear Information System (INIS)

    Georgescu, E.-S.

    2002-01-01

    Twenty-five years after the Vrancea earthquake of March 4, 1977, we can analyze in an open and critical way its impact on the evolution of earthquake engineering codes and protection policies in Romania. The earthquake (M G-R = 7.2; M w = 7.5) produced 1,570 casualties and more than 11,300 injured (90% of the victims in Bucharest), and seismic losses were estimated at more than USD 2 billion. The 1977 earthquake was a defining episode of the 20th century in the seismic zones of Romania and neighboring countries. The INCERC seismic record of March 4, 1977 revealed, for the first time, the spectral content of long-period seismic motions of Vrancea earthquakes, as well as their duration, number of cycles and actual acceleration values, with important overloading effects on flexible structures. The seismic coefficients k s , the spectral curve (the dynamic coefficient β r ), the seismic zonation map and the requirements in the antiseismic design norms were drastically changed, while the microzonation maps of the time ceased to be used, and the specific Vrancea earthquake recurrence was reconsidered based on hazard studies. Thus, the paper emphasises: - the existing engineering knowledge, earthquake code and zoning map requirements until 1977, as well as seismological and structural lessons since 1977; - recent aspects of the implementation of the Earthquake Code P.100/1992 and its harmonization with the Eurocodes, in conjunction with the specifics of urban and rural seismic risk and enforcement policies on strengthening existing buildings; - a strategic view of disaster prevention, using earthquake scenarios and loss assessments, insurance, and earthquake education and training; - the need for a closer transfer of knowledge between seismologists, engineers and officials in charge of disaster prevention public policies. (author)

  3. Preliminary report on Petatlan, Mexico: earthquake of 14 March 1979

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    A major earthquake, M/sub s/ = 7.6, occurred off the southern coast of Mexico near the town of Petatlan on 14 March 1979. The earthquake ruptured a 50-km-long section of the Middle American subduction zone, a seismic gap last ruptured by a major earthquake (M/sub s/ = 7.5) in 1943. Since adjacent gaps of approximately the same size have not had a large earthquake since 1911, and one of these suffered three major earthquakes in four years (1907, 1909, 1911), recurrence times for large events here are highly variable. Thus, this general area remains one of high seismic risk, and provides a focus for investigation of segmentation in the subduction processes. 2 figures.

  4. A new curved fault model and method development for asperities of the 1703 Genroku and 1923 Kanto earthquakes

    Science.gov (United States)

    Kobayashi, R.; Koketsu, K.

    2008-12-01

    Great earthquakes have repeatedly occurred along the Sagami trough, where the Philippine Sea slab is subducting. The 1703 Genroku and 1923 (Taisho) Kanto earthquakes (M 8.2 and M 7.9, respectively) are typical examples, and such events cause severe damage in the metropolitan area. The recurrence periods of Taisho- and Genroku-type earthquakes inferred from studies of wave-cut terraces are about 200-400 and 2000 years, respectively (e.g., Earthquake Research Committee, 2004). We previously inferred the source process of the 1923 Kanto earthquake from geodetic, teleseismic, and strong motion data (Kobayashi and Koketsu, 2005). Two asperities of the 1923 Kanto earthquake are located around the western part of Kanagawa prefecture (the base of the Izu peninsula) and around the Miura peninsula. After we adopted an updated fault plane model, based on a recent model of the Philippine Sea slab, the asperity around the Miura peninsula moved to the north (Sato et al., 2005). We have also investigated the slip distribution of the 1703 Genroku earthquake, using crustal uplift and subsidence data investigated by Shishikura (2003) and the same fault geometry as for the 1923 Kanto earthquake. The peak slip of 16 m is located in the southern part of the Boso peninsula. The shape of the upper surface of the Philippine Sea slab is important for constraining the extent of the asperities. Sato et al. (2005) presented the shape in the inland part, but provided little information on the oceanic part except for Tokyo bay; Kimura (2006) and Takeda et al. (2007) presented the shape in the oceanic part. In this study, we compiled these slab models and planned to reanalyze the slip distributions of the 1703 and 1923 earthquakes. We developed a new curved fault plane on the plate boundary between the Philippine Sea slab and the inland plate. The curved fault plane was divided into 56 triangular subfaults.
Point sources for the Green's function calculations are located at centroids

  5. ARMA models for earthquake ground motions. Seismic Safety Margins Research Program

    International Nuclear Information System (INIS)

    Chang, Mark K.; Kwiatkowski, Jan W.; Nau, Robert F.; Oliver, Robert M.; Pister, Karl S.

    1981-02-01

    This report contains an analysis of four major California earthquake records using a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It has been possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters and test the residuals generated by these models. It has also been possible to show the connections, similarities and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed in this report is suitable for simulating earthquake ground motions in the time domain and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. (author)
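The discrete time-domain estimation idea can be illustrated with a pared-down example: fitting a pure AR(2) model to a synthetic acceleration-like series by conditional least squares. The report itself fits full ARMA models by maximum likelihood; this sketch only shows the time-domain framing:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stationary AR(2) process standing in for digitized ground
# acceleration: x_t = phi1*x_{t-1} + phi2*x_{t-2} + eps_t.
phi = np.array([1.2, -0.5])
n = 5000
x = np.zeros(n)
eps = rng.normal(0.0, 1.0, n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + eps[t]

# Conditional least squares: regress x_t on (x_{t-1}, x_{t-2}).
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
phi_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(phi_hat)   # close to [1.2, -0.5]
```

The same regression framing extends to moving-average terms (full ARMA) and, as the report notes, the fitted model can then generate synthetic ground motions to drive discrete-time structural models.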

  6. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    1997-01-01

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  7. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (USA). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  8. Geological evidence for Holocene earthquakes and tsunamis along the Nankai-Suruga Trough, Japan

    Science.gov (United States)

    Garrett, Ed; Fujiwara, Osamu; Garrett, Philip; Heyvaert, Vanessa M. A.; Shishikura, Masanobu; Yokoyama, Yusuke; Hubert-Ferrari, Aurélia; Brückner, Helmut; Nakamura, Atsunori; De Batist, Marc

    2016-04-01

    The Nankai-Suruga Trough, lying immediately south of Japan's densely populated and highly industrialised southern coastline, generates devastating great earthquakes (magnitude > 8). Intense shaking, crustal deformation and tsunami generation accompany these ruptures. Forecasting the hazards associated with future earthquakes along this >700 km long fault requires a comprehensive understanding of past fault behaviour. While the region benefits from a long and detailed historical record, palaeoseismology has the potential to provide a longer-term perspective and additional insights. Here, we summarise the current state of knowledge regarding geological evidence for past earthquakes and tsunamis, incorporating literature originally published in both Japanese and English. This evidence comes from a wide variety of sources, including uplifted marine terraces and biota, marine and lacustrine turbidites, liquefaction features, subsided marshes and tsunami deposits in coastal lakes and lowlands. We enhance available results with new age modelling approaches. While publications describe proposed evidence from > 70 sites, only a limited number provide compelling, well-dated evidence. The best available records allow us to map the most likely rupture zones of eleven earthquakes occurring during the historical period. Our spatiotemporal compilation suggests the AD 1707 earthquake ruptured almost the full length of the subduction zone and that earthquakes in AD 1361 and 684 were predecessors of similar magnitude. Intervening earthquakes were of lesser magnitude, highlighting variability in rupture mode. Recurrence intervals for ruptures of a single seismic segment range from less than 100 to more than 450 years during the historical period. Over longer timescales, palaeoseismic evidence suggests intervals ranging from 100 to 700 years. However, these figures reflect thresholds of evidence creation and preservation as well as genuine recurrence intervals. 
At present, we have

  9. Statistical modelling for recurrent events: an application to sports injuries.

    Science.gov (United States)

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-09-01

    Injuries are often recurrent, with subsequent injuries influenced by previous occurrences, and hence correlation between events needs to be taken into account when analysing such data. This paper compares five different survival models for the analysis of recurrent injury data: the Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, and Prentice-Williams-Peterson gap time (PWP-GT) conditional models. Empirical evaluation and comparison of the different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) had more than 1 injury, and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to the WLW-TT and PWP-GT models. Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies modelling recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. Published by the BMJ Publishing Group Limited.

  10. Irregular recurrence of large earthquakes along the san andreas fault: evidence from trees.

    Science.gov (United States)

    Jacoby, G C; Sheppard, P R; Sieh, K E

    1988-07-08

    Old trees growing along the San Andreas fault near Wrightwood, California, record in their annual ring-width patterns the effects of a major earthquake in the fall or winter of 1812 to 1813. Paleoseismic data and historical information indicate that this event was the "San Juan Capistrano" earthquake of 8 December 1812, with a magnitude of 7.5. The discovery that at least 12 kilometers of the Mojave segment of the San Andreas fault ruptured in 1812, only 44 years before the great January 1857 rupture, demonstrates that intervals between large earthquakes on this part of the fault are highly variable. This variability increases the uncertainty of forecasting destructive earthquakes on the basis of past behavior and accentuates the need for a more fundamental knowledge of San Andreas fault dynamics.

  11. From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)

    Science.gov (United States)

    Jordan, T. H.

    2009-12-01

    Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior, an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are "communities of trust" and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.

  12. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    Science.gov (United States)

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI unit (16% in peak velocity). Discrepancies with observations arise from errors in the source models and geologic structure. The consistency of the synthetic waveforms across the wave-propagation codes for a given source model suggests that the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.

  13. Mathematical models for estimating earthquake casualties and damage cost through regression analysis using matrices

    International Nuclear Information System (INIS)

    Urrutia, J D; Bautista, L A; Baccay, E B

    2014-01-01

    The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, numbers of injured persons, affected families, and the total cost of damage. Regression models were constructed to quantify the direct damage from earthquakes to human beings and property, given the magnitude, intensity, depth of focus, location of epicentre, and time duration. The researchers formulated the models through regression analysis using matrices, with a significance level of α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines from 1968 to 2012 inclusive. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology, and data on damages and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. This study will be of great value in emergency planning and in initiating and updating programs for earthquake hazard reduction in the Philippines, an earthquake-prone country.
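The "regression analysis using matrices" reduces to solving the normal equations. A sketch with hypothetical predictors and casualty counts (the study's thirty-earthquake dataset is not reproduced here):

```python
import numpy as np

# Hypothetical design matrix: columns = [1, magnitude, focal depth (km)]
# for a few illustrative events (not the study's actual data).
X = np.array([[1.0, 7.8, 25.0],
              [1.0, 6.5, 10.0],
              [1.0, 7.1, 33.0],
              [1.0, 6.9, 15.0],
              [1.0, 7.6, 40.0]])
y = np.array([1283.0, 78.0, 412.0, 150.0, 930.0])  # e.g. reported deaths

# Normal equations written with explicit matrices: beta = (X'X)^{-1} X'y.
beta = np.linalg.solve(X.T @ X, X.T @ y)
fitted = X @ beta
```

The same matrix formulation carries over unchanged to the other response variables (injuries, affected families, damage cost) by swapping in the corresponding y vector.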

  14. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    Science.gov (United States)

    McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.

    2017-09-01

    The Gorkha earthquake on April 25th, 2015 was a long anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered high-hazard zones for future damaging earthquakes.

  15. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    Science.gov (United States)

    Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.

    2008-12-01

    Empirically, explosions have been discriminated from natural earthquakes using regional amplitude-ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining whether there is any relationship between the observed P/S and the point-source variability revealed by longer-period full-waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction; Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model, applied to sets of events and stations in the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
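The core P/S discriminant can be sketched in a few lines; the amplitudes below are invented for illustration, and a real implementation would first apply the MDAC path, site and magnitude corrections described in the abstract:

```python
import numpy as np

def log_ps_ratio(p_amp, s_amp):
    # Band-limited log10 P/S amplitude ratio; explosions tend to be
    # relatively P-rich and plot above earthquakes.
    return np.log10(np.asarray(p_amp) / np.asarray(s_amp))

# Hypothetical corrected P and S amplitudes for a handful of events.
p_eq, s_eq = np.array([0.8, 1.1, 0.6]), np.array([2.0, 2.5, 1.5])  # earthquakes
p_ex, s_ex = np.array([1.5, 2.0, 1.1]), np.array([0.9, 1.2, 0.8])  # explosions

r_eq, r_ex = log_ps_ratio(p_eq, s_eq), log_ps_ratio(p_ex, s_ex)
# A threshold midway between the two populations separates them here.
threshold = 0.5 * (r_eq.max() + r_ex.min())
```

Uncorrected path effects shift individual events across any such fixed threshold, which is precisely why the 2-D attenuation tomography described above matters for discrimination.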

  16. Earthquake source model using strong motion displacement

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, viewed as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in the ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  17. Benefits of multidisciplinary collaboration for earthquake casualty estimation models: recent case studies

    Science.gov (United States)

    So, E.

    2010-12-01

    Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Increasing our understanding of what contributes to casualties in earthquakes involves coordinated data-gathering efforts among disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering complex physical pathways, loss models purely based on historic casualty data, or even worse, rates derived from other countries, will be of very limited value. What’s more, as the world’s population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise, including engineering, public health and medicine. Research is needed to find methods to achieve consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities in the cities which most need it. Coupling the theories and findings from

  18. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    OpenAIRE

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-01-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, first, fault parameters were estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a dept...

  19. Great earthquakes along the Western United States continental margin: implications for hazards, stratigraphy and turbidite lithology

    Science.gov (United States)

    Nelson, C. H.; Gutiérrez Pastor, J.; Goldfinger, C.; Escutia, C.

    2012-11-01

    We summarize the importance of great earthquakes (Mw ≳ 8) for hazards, stratigraphy of basin floors, and turbidite lithology along the active tectonic continental margins of the Cascadia subduction zone and the northern San Andreas Transform Fault by utilizing studies of swath bathymetry, visual core descriptions, grain size analysis, X-ray radiographs and physical properties. Recurrence times of Holocene turbidites as proxies for earthquakes on the Cascadia and northern California margins are analyzed using two methods: (1) radiometric dating (14C method), and (2) relative dating, using hemipelagic sediment thickness and sedimentation rates (H method). The H method provides (1) the best estimate of minimum recurrence times, which are the most important for seismic hazards risk analysis, and (2) the most complete dataset of recurrence times, which shows a normal distribution pattern for paleoseismic turbidite frequencies. We observe that, on these tectonically active continental margins, during the sea-level highstand of Holocene time, triggering of turbidity currents is controlled dominantly by earthquakes, and paleoseismic turbidites have an average recurrence time of ~550 yr in northern Cascadia Basin and ~200 yr along the northern California margin. The minimum recurrence times for great earthquakes are approximately 300 yr for the Cascadia subduction zone and 130 yr for the northern San Andreas Fault, which indicates that both fault systems are within (Cascadia) or very close to (San Andreas) the early window for another great earthquake. On active tectonic margins with great earthquakes, the volumes of mass transport deposits (MTDs) are limited on basin floors along the margins. The maximum run-out distances of MTD sheets across abyssal-basin floors along active margins are an order of magnitude less (~100 km) than on passive margins (~1000 km). The great earthquakes along the Cascadia and northern California margins cause seismic strengthening of the sediment, which
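The "H method" described above reduces to a simple division: the recurrence interval between two earthquake-triggered turbidites is the intervening hemipelagic sediment thickness divided by the hemipelagic sedimentation rate. A toy sketch with illustrative (not measured) numbers:

```python
# Hedged sketch of the H method: recurrence time between successive turbidites
# equals the hemipelagic thickness deposited between them divided by the
# hemipelagic sedimentation rate. All values below are illustrative.
hemipelagic_thickness_cm = [11.0, 4.1, 6.8]   # between successive turbidites
sed_rate_cm_per_kyr = 20.0                    # assumed constant hemipelagic rate

recurrence_yr = [round(1000.0 * h / sed_rate_cm_per_kyr, 1)
                 for h in hemipelagic_thickness_cm]
print(recurrence_yr)  # -> [550.0, 205.0, 340.0]
```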

  20. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  1. A reconnaissance assessment of probabilistic earthquake accelerations at the Nevada Test Site

    International Nuclear Information System (INIS)

    Perkins, D.M.; Thenhaus, P.C.; Hanson, S.L.; Algermissen, S.T.

    1986-01-01

    We have made two interim assessments of the probabilistic ground-motion hazard for the potential nuclear-waste disposal facility at the Nevada Test Site (NTS). The first assessment used historical seismicity and generalized source zones and source faults in the immediate vicinity of the facility. This model produced relatively high probabilistic ground motions, comparable to the higher of two earlier estimates, which was obtained by averaging seismicity in a 400-km-radius circle around the site. The high ground-motion values appear to be caused in part by nuclear-explosion aftershocks remaining in the catalog even after the explosions themselves have been removed. The second assessment used particularized source zones and source faults in a region substantially larger than NTS to provide a broad context of probabilistic ground-motion estimates at other locations in the study region. Source faults are mapped or inferred faults having lengths of 5 km or more. Source zones are defined by boundaries separating fault groups on the basis of direction and density. For this assessment, earthquake recurrence has been estimated primarily from historical seismicity prior to nuclear testing. Long-term recurrence for large-magnitude events is constrained by geological estimates of recurrence in a regime in which the large-magnitude earthquakes would occur with predominantly normal mechanisms. 4 refs., 10 figs
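Probabilistic ground-motion assessments like the two described above typically assume Poisson earthquake occurrence, in which the probability of at least one event during an exposure time t, given a mean recurrence interval T, is 1 - exp(-t/T). A minimal illustration:

```python
import math

# Poisson recurrence model: probability of at least one event in t years,
# given a mean recurrence interval T years.
def exceedance_prob(t_years, mean_recurrence_years):
    return 1.0 - math.exp(-t_years / mean_recurrence_years)

print(round(exceedance_prob(50, 475), 3))  # ~0.1: the classic "10% in 50 yr"
```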

  2. Short- and Long-Term Earthquake Forecasts Based on Statistical Models

    Science.gov (United States)

    Console, Rodolfo; Taroni, Matteo; Murru, Maura; Falcone, Giuseppe; Marzocchi, Warner

    2017-04-01

    The epidemic-type aftershock sequence (ETAS) model has been used experimentally to forecast the space-time earthquake occurrence rate during the sequence that followed the 2009 L'Aquila earthquake and for the 2012 Emilia earthquake sequence. These forecasts represented the first two pioneering attempts to check the feasibility of providing operational earthquake forecasting (OEF) in Italy. After the 2009 L'Aquila earthquake, the Italian Department of Civil Protection nominated an International Commission on Earthquake Forecasting (ICEF) for the development of the first official OEF in Italy, which was implemented for testing purposes by the newly established "Centro di Pericolosità Sismica" (CPS, the Seismic Hazard Center) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). According to the ICEF guidelines, the system is open, transparent, reproducible and testable. The scientific information delivered by OEF-Italy is shaped in different formats according to the interested stakeholders, such as scientists, national and regional authorities, and the general public. Communication to the public is certainly the most challenging issue, and careful pilot tests are necessary to check the effectiveness of the communication strategy before opening the information to the public. With regard to long-term time-dependent earthquake forecasting, the application of a newly developed simulation algorithm to the Calabria region reproduced typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range.
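The ETAS model mentioned above forecasts the earthquake rate as a background term plus aftershock contributions from every past event. A minimal sketch of the conditional intensity, with illustrative parameter values (not those calibrated for the Italian OEF system):

```python
import numpy as np

# Sketch of the ETAS conditional intensity:
#   lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(M_i - Mc)) * (t - t_i + c)**(-p)
# Parameter values below are illustrative only.
mu, K, alpha, c, p, Mc = 0.1, 0.02, 1.2, 0.01, 1.1, 3.0

def etas_rate(t, times, mags):
    """Expected earthquake rate (events/day) at time t, given past events."""
    past = times < t
    dt = t - times[past]
    return mu + np.sum(K * np.exp(alpha * (mags[past] - Mc)) * (dt + c) ** (-p))

times = np.array([0.0, 0.5, 2.0])  # occurrence times in days
mags = np.array([5.5, 4.0, 4.5])
print(etas_rate(2.1, times, mags))  # elevated shortly after the last shock
```

The rate decays back toward the background level mu as time since the last event grows, which is what makes ETAS useful for short-term operational forecasts.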

  3. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    Science.gov (United States)

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, and where and when it will strike, remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
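The Gutenberg-Richter law that the fusion-fission model reproduces, log10 N(>=M) = a - bM, is routinely fit to catalog data; Aki's (1965) maximum-likelihood estimator for the b-value is a one-liner. A sketch on a synthetic catalog:

```python
import numpy as np

# Gutenberg-Richter: log10 N(>=M) = a - b*M. Aki's (1965) maximum-likelihood
# estimate of b from magnitudes above the completeness magnitude Mc:
#   b = log10(e) / (mean(M) - Mc)
rng = np.random.default_rng(42)
Mc, b_true = 2.0, 1.0
# GR magnitudes above Mc are exponential with rate b*ln(10)
mags = Mc + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=50_000)

b_hat = np.log10(np.e) / (np.mean(mags) - Mc)
print(round(b_hat, 2))  # close to the true b = 1.0
```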

  4. Modeling Seismic Cycles of Great Megathrust Earthquakes Across the Scales With Focus at Postseismic Phase

    Science.gov (United States)

    Sobolev, Stephan V.; Muldashev, Iskander A.

    2017-12-01

    Subduction is a substantially multiscale process in which stresses are built up by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by multiple subsequent relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process from a 1-minute to a million-year time scale. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time step algorithm, recreates the deformation process as observed naturally during single and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or day after the earthquake are controlled by viscoelastic relaxation in the several hundred km of mantle landward of the trench, and not by afterslip localized at the fault as is currently believed. Our model replicates centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the occurrence of the largest earthquakes, like the Great Chile 1960 Earthquake.
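The claimed three-order-of-magnitude viscosity drop maps directly onto the postseismic timescale through the Maxwell relaxation time tau = eta / mu. A back-of-the-envelope sketch with illustrative values (not the paper's model parameters):

```python
# Maxwell relaxation time tau = eta / mu links mantle-wedge viscosity to the
# timescale of postseismic viscoelastic relaxation. Illustrative numbers only.
mu = 60e9                      # shear modulus, Pa
for eta in (1e19, 1e16):       # pre-quake vs. three orders lower after Mw > 9
    tau_s = eta / mu
    print(f"eta = {eta:.0e} Pa s -> tau = {tau_s / 86400:.1f} days")
```

With the assumed modulus, dropping the viscosity from 1e19 to 1e16 Pa s shortens the relaxation time from years to about two days, consistent with relaxation dominating the surface velocities "just an hour or day after" a great event.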

  5. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults, and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the

  6. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years, impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now, data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences between geodetic and seismological earthquake source modelling are shrinking towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal of improving crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged by simple planar finite

  7. Finite element models of earthquake cycles in mature strike-slip fault zones

    Science.gov (United States)

    Lynch, John Charles

    The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, there are two main questions posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10^18 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in the cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a

  8. The EM Earthquake Precursor

    Science.gov (United States)

    Jones, K. B., II; Saxton, P. T.

    2013-12-01

    Many attempts have been made to determine a sound forecasting method regarding earthquakes and, in turn, warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After the 1989 Loma Prieta earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate with an identifiable precursor. Secondly, there is neither a networked array for finding any epicentral locations, nor have there been any attempts to find even one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have gained. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February 2013 detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two

  9. Great earthquakes along the Western United States continental margin: implications for hazards, stratigraphy and turbidite lithology

    Directory of Open Access Journals (Sweden)

    C. H. Nelson

    2012-11-01

    We summarize the importance of great earthquakes (Mw ≳ 8) for hazards, stratigraphy of basin floors, and turbidite lithology along the active tectonic continental margins of the Cascadia subduction zone and the northern San Andreas Transform Fault by utilizing studies of swath bathymetry, visual core descriptions, grain size analysis, X-ray radiographs and physical properties. Recurrence times of Holocene turbidites as proxies for earthquakes on the Cascadia and northern California margins are analyzed using two methods: (1) radiometric dating (14C method), and (2) relative dating, using hemipelagic sediment thickness and sedimentation rates (H method). The H method provides (1) the best estimate of minimum recurrence times, which are the most important for seismic hazards risk analysis, and (2) the most complete dataset of recurrence times, which shows a normal distribution pattern for paleoseismic turbidite frequencies. We observe that, on these tectonically active continental margins, during the sea-level highstand of Holocene time, triggering of turbidity currents is controlled dominantly by earthquakes, and paleoseismic turbidites have an average recurrence time of ~550 yr in northern Cascadia Basin and ~200 yr along the northern California margin. The minimum recurrence times for great earthquakes are approximately 300 yr for the Cascadia subduction zone and 130 yr for the northern San Andreas Fault, which indicates that both fault systems are within (Cascadia) or very close to (San Andreas) the early window for another great earthquake.

    On active tectonic margins with great earthquakes, the volumes of mass transport deposits (MTDs) are limited on basin floors along the margins. The maximum run-out distances of MTD sheets across abyssal-basin floors along active margins are an order of magnitude less (~100 km) than on passive margins (~1000 km). The great earthquakes along the Cascadia and northern California margins

  10. The finite-difference and finite-element modeling of seismic wave propagation and earthquake motion

    International Nuclear Information System (INIS)

    Moszo, P.; Kristek, J.; Galis, M.; Pazak, P.; Balazovijech, M.

    2006-01-01

    Numerical modeling of seismic wave propagation and earthquake motion is an irreplaceable tool in the investigation of the Earth's structure, processes in the Earth, and particularly earthquake phenomena. Among various numerical methods, the finite-difference method is the dominant method in the modeling of earthquake motion. Moreover, it is becoming increasingly important in seismic exploration and structural modeling. At the same time, we are convinced that the best time of the finite-difference method in seismology is yet to come. This monograph provides a tutorial and detailed introduction to the application of the finite-difference, finite-element, and hybrid finite-difference-finite-element methods to the modeling of seismic wave propagation and earthquake motion. The text does not cover all topics and aspects of the methods. We focus on those to which we have contributed. (Author)
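As a minimal illustration of the finite-difference approach the monograph covers, here is a second-order explicit scheme for the 1-D scalar wave equation u_tt = c^2 u_xx (a toy, far from the 3-D staggered-grid formulations used in production earthquake modeling):

```python
import numpy as np

# Second-order finite differences in space and time for u_tt = c^2 u_xx,
# with fixed (zero) boundaries. A toy sketch of the FD approach only.
nx, nt = 300, 600
dx, c = 10.0, 2000.0          # grid spacing (m), wave speed (m/s)
dt = 0.4 * dx / c             # time step satisfying the CFL stability limit
u_prev, u, u_next = (np.zeros(nx) for _ in range(3))
u[nx // 2] = 1.0              # initial displacement pulse at the center

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u, u_next = u, u_next, u_prev  # rotate time levels

print(np.max(np.abs(u)))  # the pulse propagates without blowing up
```

The stability condition c*dt/dx <= 1 is what the 0.4 factor guarantees; violating it makes the scheme diverge, which is the first lesson of FD wave modeling.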

  11. The HayWired Earthquake Scenario—Earthquake Hazards

    Science.gov (United States)

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude-6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  12. Surface deformation associated with the November 23, 1977, Caucete, Argentina, earthquake sequence

    Science.gov (United States)

    Kadinsky-Cade, K.; Reilinger, R.; Isacks, B.

    1985-01-01

    The 1977 Caucete (San Juan) earthquake considered in the present paper occurred near the Sierra Pie de Palo in the Sierras Pampeanas tectonic province of western Argentina. In the study reported, coseismic surface deformation is combined with seismic observations (main shock and aftershocks, both teleseismic and local data) to place constraints on the geometry and slip of the main fault responsible for the 1977 earthquake. The implications of the 1977 event for long-term crustal shortening and earthquake recurrence rates in this region are also discussed. It is concluded that the 1977 Caucete earthquake was accompanied by more than 1 m of vertical uplift.

  13. On the agreement between small-world-like OFC model and real earthquakes

    International Nuclear Information System (INIS)

    Ferreira, Douglas S.R.; Papa, Andrés R.R.; Menezes, Ronaldo

    2015-01-01

    In this article we implemented simulations of the OFC model for earthquakes on two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. In both topologies, we have studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we have also characterized the influence that the probability p has on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each one. In our results, distributions arise belonging to a family of non-traditional distribution functions, which agrees with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth is in a self-organized critical state and furthermore point towards temporal and spatial correlations between earthquakes in distant places. - Highlights: • OFC model simulations for regular and small-world topologies. • For the small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforces the idea of a self-organized critical state for the Earth's crust. • Points towards temporal and spatial correlations between earthquakes in distant places
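For orientation, the OFC (Olami-Feder-Christensen) model loads every site of a lattice uniformly until one reaches a failure threshold, then redistributes a fraction alpha of the failing site's stress to its neighbors, possibly triggering an avalanche (the "earthquake"). A minimal sketch of the regular topology only; the rewired small-world variant studied in the article is omitted:

```python
import numpy as np

# Minimal OFC model on a regular L x L lattice with open boundaries.
# alpha < 0.25 makes the redistribution dissipative (non-conservative).
L, alpha, threshold = 20, 0.2, 1.0
rng = np.random.default_rng(1)
F = rng.uniform(0, threshold, (L, L))   # initial random stress field

def next_event(F):
    """Drive uniformly to threshold, then topple; return avalanche size."""
    F += threshold - F.max()            # uniform loading until one site fails
    size = 0
    while (unstable := np.argwhere(F >= threshold)).size:
        for i, j in unstable:
            f, F[i, j] = F[i, j], 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < L and 0 <= j + dj < L:
                    F[i + di, j + dj] += alpha * f  # open boundaries lose stress
            size += 1
    return size

sizes = [next_event(F) for _ in range(2000)]
print(max(sizes))  # avalanche sizes span a broad (power-law-like) range
```

The sequence of avalanche sizes and the times between them are the synthetic analogs of the earthquake magnitudes and inter-event intervals analyzed in the article.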

  14. On the agreement between small-world-like OFC model and real earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Douglas S.R., E-mail: douglas.ferreira@ifrj.edu.br [Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro, Paracambi, RJ (Brazil); Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Papa, Andrés R.R., E-mail: papa@on.br [Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Instituto de Física, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, RJ (Brazil); Menezes, Ronaldo, E-mail: rmenezes@cs.fit.edu [BioComplex Laboratory, Computer Sciences, Florida Institute of Technology, Melbourne (United States)

    2015-03-20

    In this article we implemented simulations of the OFC model for earthquakes on two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. In both topologies, we have studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we have also characterized the influence that the probability p has on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each one. In our results, distributions arise belonging to a family of non-traditional distribution functions, which agrees with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth is in a self-organized critical state and furthermore point towards temporal and spatial correlations between earthquakes in distant places. - Highlights: • OFC model simulations for regular and small-world topologies. • For the small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforces the idea of a self-organized critical state for the Earth's crust. • Points towards temporal and spatial correlations between earthquakes in distant places.

  15. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
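The paper's recursive Bayesian Levenberg-Marquardt update is involved; as a point of orientation, here is the much simpler baseline it improves upon: a tiny Elman-style RNN trained online for one-step-ahead time-series prediction with truncated (one-step) gradient descent. All sizes and rates are arbitrary choices, and this is not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, lr, T = 8, 0.05, 2000
Wx = rng.normal(0, 0.3, n_h)           # input -> hidden weights
Wh = rng.normal(0, 0.3, (n_h, n_h))    # hidden -> hidden (recurrent) weights
Wo = rng.normal(0, 0.3, n_h)           # hidden -> output weights
h = np.zeros(n_h)

# toy series: noisy sine wave, predicted one step ahead
x = np.sin(np.linspace(0, 40 * np.pi, T)) + 0.05 * rng.standard_normal(T)

errs = []
for t in range(T - 1):
    h_prev = h
    h = np.tanh(Wx * x[t] + Wh @ h_prev)  # hidden state update
    e = x[t + 1] - Wo @ h                 # one-step-ahead prediction error
    errs.append(e * e)
    # truncated gradients; the paper replaces this crude first-order update
    # with a recursive Bayesian Levenberg-Marquardt step
    dh = e * Wo * (1.0 - h * h)
    Wo += lr * e * h
    Wx += lr * dh * x[t]
    Wh += lr * np.outer(dh, h_prev)

print(np.mean(errs[:200]), np.mean(errs[-200:]))  # error drops with training
```

The paper's contribution is, roughly, to replace the fixed learning rate and truncated gradient above with a sequentially updated covariance (Hessian) matrix and principled noise/prior hyperparameters.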

  16. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    Science.gov (United States)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a fully dynamic, state-of-the-art description of all relevant physical processes in an earthquake fault system is likely not useful either, since it comes with a large number of degrees of freedom, poorly constrained model parameters, and a huge computational effort. Quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within this framework we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g., simple two-fault cases) as well as complication (e.g., hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  17. Seven years of postseismic deformation following the 2003 Mw = 6.8 Zemmouri earthquake (Algeria) from InSAR time series

    KAUST Repository

    Cetin, Esra

    2012-05-28

    We study the postseismic surface deformation of the Mw 6.8, 2003 Zemmouri earthquake (northern Algeria) using the multi-temporal small-baseline InSAR technique. InSAR time series obtained from 31 Envisat ASAR images from 2003 to 2010 reveal sub-cm coastline ground movements between Cap Matifou and Dellys. Two regions display subsidence at maximum rates of 2 mm/yr in Cap Djenet and 3.5 mm/yr in Boumerdes. These regions correlate well with the areas of maximum coseismic uplift and their association with two rupture segments. Inverse modeling suggests that subsidence in the areas of high coseismic uplift can be explained by afterslip on shallow sections (<5 km) of the fault above the areas of coseismic slip, in agreement with previous GPS observations. The earthquake impact on soft sediments and on the ground water table southwest of the earthquake area characterizes ground deformation of non-tectonic origin. The cumulative postseismic moment due to 7 years of afterslip is equivalent to an Mw 6.3 earthquake. Therefore, the postseismic deformation and stress buildup have significant implications for earthquake cycle models and the recurrence intervals of large earthquakes in the Algiers area.
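The conversion quoted above, a cumulative postseismic moment expressed as an equivalent Mw 6.3 event, follows from the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1) with M0 in newton-metres (Hanks and Kanamori). A minimal sketch:

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude Mw from seismic moment M0 in newton-metres."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

def seismic_moment(mw):
    """Inverse relation: M0 in N*m for a given Mw."""
    return 10.0 ** (1.5 * mw + 9.1)
```

For Mw 6.3 this gives a cumulative moment of roughly 3.5e18 N*m, the scale of afterslip moment the study reports.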

  19. Rupture Propagation through the Big Bend of the San Andreas Fault: A Dynamic Modeling Case Study of the Great Earthquake of 1857

    Science.gov (United States)

    Lozos, J.

    2017-12-01

    The great San Andreas Fault (SAF) earthquake of 9 January 1857, estimated at M7.9, was one of California's largest historic earthquakes. Its 360 km rupture trace follows the Carrizo and Mojave segments of the SAF, including the 30° compressional Big Bend in the fault. If 1857 were a characteristic rupture, the hazard implications for southern California would be dire, especially given the inferred 150 year recurrence interval for this section of the fault. However, recent paleoseismic studies in this region suggest that 1857-type events occur less frequently than single-segment Carrizo or Mojave ruptures, and that the hinge of the Big Bend is a barrier to through-going rupture. Here, I use 3D dynamic rupture modeling to attempt to reproduce the rupture length and surface slip distribution of the 1857 earthquake, to determine which physical conditions allow rupture to negotiate the Big Bend of the SAF. These models incorporate the nonplanar geometry of the SAF, an observation-based heterogeneous regional velocity structure (SCEC CVM), and a regional stress field from seismicity literature. Under regional stress conditions, I am unable to produce model events that both match the observed surface slip on the Carrizo and Mojave segments of the SAF and include rupture through the hinge of the Big Bend. I suggest that accumulated stresses at the bend hinge from multiple smaller Carrizo or Mojave ruptures may be required to allow rupture through the bend — a concept consistent with paleoseismic observations. This study may contribute to understanding the cyclicity of hazard associated with the southern-central SAF.

  20. Turkish Compulsory Earthquake Insurance and the "Istanbul Earthquake"

    Science.gov (United States)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake, which has an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses, and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with approximately one half of all policies in highly earthquake-prone areas (one third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure, and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the building losses incurred in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140-300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, the purchase of larger re-insurance covers, and the development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would be calculated directly on the basis of indexed ground motion levels and damages. 
The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing
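The average annualized loss quoted above is, in essence, the stochastic event losses weighted by their annual rates of occurrence, and a parametric cover replaces loss adjustment with a payout indexed to ground motion. A minimal sketch; the event set, rates, and trigger thresholds below are made-up illustrative numbers, not the paper's loss model:

```python
def average_annual_loss(event_losses, annual_rates):
    """AAL = sum over the stochastic event set of loss_i * annual_rate_i."""
    return sum(loss * rate for loss, rate in zip(event_losses, annual_rates))

def parametric_payout(peak_ground_accel_g, limit=100.0):
    """Schematic parametric trigger: payout indexed to ground motion,
    not to surveyed damage (thresholds here are illustrative)."""
    if peak_ground_accel_g >= 0.4:
        return limit
    if peak_ground_accel_g >= 0.2:
        return 0.5 * limit
    return 0.0

# illustrative event set: losses (in millions) and annual occurrence rates
aal = average_annual_loss([500.0, 2500.0, 10000.0], [0.05, 0.02, 0.002])
```

Because the parametric payout depends only on an observable ground-motion index, the claim-processing step the abstract criticizes disappears entirely.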

  1. MODEL OF TECTONIC EARTHQUAKE PREPARATION AND OCCURRENCE AND ITS PRECURSORS IN CONDITIONS OF CRUSTAL STRETCHING

    Directory of Open Access Journals (Sweden)

    R. M. Semenov

    2018-01-01

    In connection with changes in the stress-strain state of the Earth's crust, various physical and mechanical processes, including destruction, take place in rocks and are accompanied by tectonic earthquakes. Different models have been proposed to describe earthquake preparation and occurrence, depending on the mechanisms and rates of the geodynamic processes. One of the models considers crustal stretching, which is characteristic of the formation of rift structures. The model uses data on rock samples that are stretched until destruction in a special laboratory installation. Based on this laboratory modeling, it is established that the samples are destroyed in stages, which are interpreted as stages of preparation and occurrence of an earthquake source. The preparation stage of underground tremors is generally manifested by a variety of temporal (long-, medium- and short-term) precursors. The main shortcoming of micro-modeling is that, given the small sizes of the investigated samples, it is impossible to reveal a link between the plastic extension of rocks (taking place in the earthquake hypocenter) and the rock rupture. Plasticity is the ability of certain rocks to change shape and size irreversibly, while rock continuity is maintained, in response to applied external forces. In order to take into account the effect of plastic deformation of rocks on earthquake preparation and occurrence, we propose not to refer to the diagrams of rock-sample stretching but to use a typical diagram of metal stretching, which can be obtained when testing a metal rod for breakage (Fig. 1). The diagram of metal stretching as a function of the relative elongation, to some degree of approximation and taking into account the coefficient of plasticity, can be considered a model of preparation and occurrence of an earthquake source in the case of rifting. 
The energy released in the period immediately preceding the earthquake contributes to the emergence of
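The "typical diagram of metal stretching" invoked above is commonly idealized as a linear elastic segment up to the yield point followed by a plastic (hardening) segment. A minimal bilinear sketch of such a tension diagram; the modulus, yield stress, and hardening slope below are generic illustrative values, not numbers from the article:

```python
def bilinear_stress(strain, e_mod=200e9, sigma_y=250e6, h_mod=2e9):
    """Stress (Pa) for a given engineering strain on an idealized tension
    diagram: linear elastic up to the yield strain, linear hardening beyond."""
    eps_y = sigma_y / e_mod                       # yield strain
    if strain <= eps_y:
        return e_mod * strain                     # elastic branch
    return sigma_y + h_mod * (strain - eps_y)     # plastic branch
```

In the article's analogy, the elastic branch corresponds to recoverable crustal strain accumulation and the plastic branch to the irreversible deformation preceding rupture.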

  2. Identified EM Earthquake Precursors

    Science.gov (United States)

    Jones, Kenneth, II; Saxton, Patrick

    2014-05-01

    Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate in creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating, long-wave generation with sufficient amplitude, and the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February 2013. 
The antennae have mobility and observations were noted for

  3. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    Science.gov (United States)

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes, and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
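The exposure calculation described above, correlating a shaking-intensity grid with a co-registered population grid, reduces to summing population over discrete intensity bins. A minimal sketch with made-up grids; this is an illustration of the idea, not PAGER's actual data or code:

```python
import numpy as np

def exposure_by_intensity(mmi_grid, pop_grid, mmi_bins=(5, 6, 7, 8, 9, 10)):
    """Population exposed at each (rounded) MMI level, for co-registered grids."""
    mmi = np.asarray(mmi_grid, dtype=float)
    pop = np.asarray(pop_grid, dtype=float)
    return {b: float(pop[np.round(mmi) == b].sum()) for b in mmi_bins}

# tiny illustrative grids (same shape): intensity and population per cell
mmi = [[5.2, 6.1], [7.0, 8.4]]
pop = [[1000, 2000], [3000, 4000]]
expo = exposure_by_intensity(mmi, pop)
```

Summing the high-intensity bins (MMI VIII+) gives the severe-shaking exposure figure that the abstract links most systematically to mortality.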

  4. Correlating precursory declines in groundwater radon with earthquake magnitude.

    Science.gov (United States)

    Kuo, T

    2014-01-01

    Both studies at the Antung hot spring in eastern Taiwan and at the Paihe spring in southern Taiwan confirm that groundwater radon can be a consistent tracer for strain changes in the crust preceding an earthquake when observed in a low-porosity fractured aquifer surrounded by a ductile formation. Recurrent anomalous declines in groundwater radon were observed at the Antung D1 monitoring well in eastern Taiwan prior to the five earthquakes of magnitude (Mw) 6.8, 6.1, 5.9, 5.4, and 5.0 that occurred on December 10, 2003; April 1, 2006; April 15, 2006; February 17, 2008; and July 12, 2011, respectively. For earthquakes occurring on the Longitudinal Valley fault in eastern Taiwan, the observed radon minima decrease as the earthquake magnitude increases. The above correlation has proven useful for the early warning of large local earthquakes. In southern Taiwan, radon anomalous declines prior to the 2010 Mw 6.3 Jiasian, 2012 Mw 5.9 Wutai, and 2012 ML 5.4 Kaohsiung earthquakes were also recorded at the Paihe spring. For earthquakes occurring on different faults in southern Taiwan, a correlation between the observed radon minima and the earthquake magnitude is not yet possible. © 2013, National Ground Water Association.

  5. A 667 year record of coseismic and interseismic Coulomb stress changes in central Italy reveals the role of fault interaction in controlling irregular earthquake recurrence intervals

    Science.gov (United States)

    Wedmore, L. N. J.; Faure Walker, J. P.; Roberts, G. P.; Sammonds, P. R.; McCaffrey, K. J. W.; Cowie, P. A.

    2017-07-01

    Current studies of fault interaction lack sufficiently long earthquake records and measurements of fault slip rates over multiple seismic cycles to fully investigate the effects of interseismic loading and coseismic stress changes on the surrounding fault network. We model elastic interactions between 97 faults from 30 earthquakes since 1349 A.D. in central Italy to investigate the relative importance of coseismic stress changes versus interseismic stress accumulation for earthquake occurrence and fault interaction. This region has an exceptionally long, 667 year record of historical earthquakes and detailed constraints on the locations and slip rates of its active normal faults. Of 21 earthquakes since 1654, 20 events occurred on faults where the combined coseismic and interseismic loading stresses were positive, even though 20% of all faults are in "stress shadows" at any one time. Furthermore, the Coulomb stress on the faults that experience earthquakes is statistically different from that of a random sequence of earthquakes in the region. We show how coseismic Coulomb stress changes can alter earthquake interevent times by 10³ years, and that fault length controls the intensity of this effect. Static Coulomb stress changes cause greater interevent perturbations on shorter faults in areas characterized by lower strain (or slip) rates. The exceptional duration and number of earthquakes we model enable us to demonstrate the importance of combining long earthquake records with detailed knowledge of fault geometries, slip rates, and kinematics to understand the impact of stress changes in complex networks of active faults.
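The static Coulomb stress change underlying this kind of analysis is conventionally written ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change resolved in the slip direction, Δσn the normal stress change (positive for unclamping), and μ′ the effective friction coefficient. A minimal sketch; the μ′ value and the sample inputs are illustrative, not values from the study:

```python
def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    """Static Coulomb failure stress change (MPa) on a receiver fault.

    d_tau_mpa     : shear stress change resolved in the slip direction
    d_sigma_n_mpa : normal stress change (positive = unclamping)
    mu_eff        : effective friction coefficient (illustrative value)
    Positive values bring the receiver fault closer to failure;
    negative values place it in a "stress shadow".
    """
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# example: shear loading partly offset by clamping
dcfs = coulomb_stress_change(0.10, -0.05)
```

Summing such increments, coseismic jumps plus steady interseismic loading, over the 667 year record is what lets the study test whether events preferentially occur on positively stressed faults.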

  6. Time-predictable model applicability for earthquake occurrence in northeast India and vicinity

    Directory of Open Access Journals (Sweden)

    A. Panthi

    2011-03-01

    Northeast India and its vicinity is one of the seismically most active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study the region of northeast India has been considered for an earthquake generation model, using earthquake data as reported in the catalogues of the National Geophysical Data Centre and the National Earthquake Information Centre (United States Geological Survey), and in the book prepared by Gupta et al. (1986), for the period 1906-2008. Events having a surface-wave magnitude of Ms≥5.5 were considered for the statistical analysis. In this region, nineteen seismogenic sources were identified from the observed clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends upon the preceding mainshock magnitude (Mp) and not on the following mainshock (Mf). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form log T = cMp + a, where "c" is the positive slope of the line and "a" is a function of the minimum magnitude of the earthquakes considered. The values of the parameters "c" and "a" are estimated to be 0.21 and 0.35 for northeast India and its adjoining regions. The lower-than-average value of "c" implies that earthquake occurrence in this region differs from that at plate boundaries. The derived result can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
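The fitted relation log T = cMp + a, with c = 0.21 and a = 0.35 as estimated above, can be inverted directly for the expected repeat time. A minimal sketch of the arithmetic, using the parameter values quoted in the abstract and taking T in years:

```python
def expected_repeat_time(mp, c=0.21, a=0.35):
    """Repeat time T (years) after a mainshock of magnitude Mp,
    from the time-predictable relation log10(T) = c * Mp + a."""
    return 10.0 ** (c * mp + a)

# e.g. after an Ms 6.0 mainshock, roughly 40 years to the next event
t_after_m6 = expected_repeat_time(6.0)
```

Note the time-predictable structure: the waiting time depends only on the preceding magnitude Mp, which is exactly what the study tests against the catalogue.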

  7. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    Science.gov (United States)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec, offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico, to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the mainshock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite-fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extending > 150 km NW from the hypocenter, longer than the slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show that the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress
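The traditional aftershock criterion used above, within two rupture lengths of the mainshock and soon afterwards, is easy to express directly. The one-year time window and the 150 km rupture-length figure below are assumptions for illustration (the abstract quotes the distance criterion but no explicit window):

```python
def is_traditional_aftershock(dist_km, rupture_length_km, dt_days,
                              window_days=365.0):
    """Distance-based aftershock classification: within two rupture lengths
    of the mainshock and within a chosen time window (window assumed here)."""
    return dist_km <= 2.0 * rupture_length_km and dt_days <= window_days

# the Mw 7.1 Puebla event: ~600 km from the M8.2, 11 days later
puebla = is_traditional_aftershock(600.0, 150.0, 11.0)
# an Ixtepec-like event: ~200 km away, well inside two rupture lengths
ixtepec = is_traditional_aftershock(200.0, 150.0, 15.0)
```

By this criterion the Puebla earthquake falls outside the standard aftershock zone, which is precisely why the study turns to stress-transfer modeling to assess triggering.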

  8. Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Showstack, Randy

    2011-04-01

    Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake, the tsunami, and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including damage to the nuclear plant.

  9. Recurrence relations in the three-dimensional Ising model

    International Nuclear Information System (INIS)

    Yukhnovskij, I.R.; Kozlovskij, M.P.

    1977-01-01

    Recurrence relations between the coefficients a_2^(i), a_4^(i) and P_2^(i), P_4^(i), which characterize the probability distributions for the three-dimensional Ising model, are studied. It is shown that for large arguments z of the Macdonald functions K_ν(z) the recurrence relations correspond to the known Wilson relations. Near the critical point, however, for small values of the transfer momentum k, this limiting case does not hold. In this region the argument z tends to zero, and new recurrence relations take place.

  10. Evaluation of earthquake vibration on aseismic design of nuclear power plant judging from recent earthquakes

    International Nuclear Information System (INIS)

    Dan, Kazuo

    2006-01-01

    The Regulatory Guide for Aseismic Design of Nuclear Reactor Facilities was revised on 19 September 2006. Six factors for the evaluation of earthquake vibration are considered on the basis of recent earthquakes: 1) evaluation of earthquake vibration by a method using a fault model, 2) investigation and approval of active faults, 3) direct-hit earthquakes, 4) assumption of a short active fault as the hypocentral fault, 5) locality of the earthquake and the earthquake vibration, and 6) remaining risk. A guiding principle of the revision required a new evaluation method of earthquake vibration using fault models, and evaluation of the probability of earthquake vibration. The remaining risk means that the facilities and people are endangered when an earthquake stronger than the design basis occurs; accordingly, the scatter has to be considered in the evaluation of earthquake vibration. The earthquake belt and strong vibration pulse of the 1995 Hyogo-Nanbu earthquake, the relation between the length of the surface earthquake fault and the hypocentral fault, and the distribution of seismic intensity of the 1993 off-Kushiro earthquake are shown. (S.Y.)

  11. Modeling fault rupture hazard for the proposed repository at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Coppersmith, K.J.; Youngs, R.R.

    1992-01-01

    In this paper, as part of the Electric Power Research Institute's High Level Waste program, the authors have developed a preliminary probabilistic model for assessing the hazard of fault rupture to the proposed high-level waste repository at Yucca Mountain. The model is composed of two parts: the earthquake occurrence model, which describes the three-dimensional geometry of earthquake sources and the earthquake recurrence characteristics for all sources in the site vicinity; and the rupture model, which describes the probability of coseismic fault rupture of various lengths and amounts of displacement within the repository horizon 350 m below the surface. The latter uses empirical data from normal-faulting earthquakes to relate the rupture dimensions and fault displacement amounts to the magnitude of the earthquake. Using a simulation procedure, we allow for earthquake occurrence on all of the earthquake sources in the site vicinity, model the location and displacement due to primary faults, and model the occurrence of secondary faulting in conjunction with primary faulting.
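A simulation procedure of the general kind described, sampling earthquake occurrences, converting magnitude to a displacement, and counting threshold exceedances, can be sketched as follows. The Gutenberg-Richter b-value, the Wells and Coppersmith (1994)-style displacement scaling log10(D) = 0.69·M − 4.80, and all rates below are illustrative assumptions, not the paper's calibrated model:

```python
import math
import random

def rupture_hazard_rate(annual_rate=1e-3, mmin=5.0, mmax=7.5, b=1.0,
                        d_threshold_m=0.1, n_years=200_000, seed=42):
    """Monte Carlo rate (per year) of coseismic displacement >= threshold.

    Earthquakes occur as a Poisson process; magnitudes follow a truncated
    Gutenberg-Richter distribution; average displacement follows a
    Wells & Coppersmith-style scaling (assumed here for illustration).
    """
    rng = random.Random(seed)
    beta = b * math.log(10.0)
    c_trunc = 1.0 - math.exp(-beta * (mmax - mmin))
    n_exceed = 0
    t = rng.expovariate(annual_rate)               # Poisson inter-event times
    while t < n_years:
        u = rng.random()                           # truncated G-R magnitude
        m = mmin - math.log(1.0 - u * c_trunc) / beta
        d = 10.0 ** (0.69 * m - 4.80)              # average displacement, m
        if d >= d_threshold_m:
            n_exceed += 1
        t += rng.expovariate(annual_rate)
    return n_exceed / n_years

rate = rupture_hazard_rate()
```

The resulting exceedance rate is necessarily below the total event rate, since only the larger magnitudes produce displacements above the threshold; the paper's model additionally distributes that displacement between primary and secondary faulting.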

  12. Intra-day response of foreign exchange markets after the Tohoku-Oki earthquake

    Science.gov (United States)

    Nakano, Shuhei; Hirata, Yoshito; Iwayama, Koji; Aihara, Kazuyuki

    2015-02-01

    Although an economy is influenced by a natural disaster, the market response to the disaster during the first 24 hours is not clearly understood. Here we show that an earthquake quickly causes temporal changes in a foreign exchange market by examining the case of the Tohoku-Oki earthquake. Recurrence plots and statistical change-point detection independently show that the United States dollar-Japanese yen market responded to the earthquake without delay and with a delay of about 2 minutes, respectively. These findings support the view that the efficient market hypothesis nearly holds on the time scale of minutes.
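A recurrence plot of the kind used here marks the time pairs (i, j) at which a trajectory returns within a tolerance eps of itself; a change in market dynamics shows up as a change in the plot's texture. A minimal sketch for a scalar series (the series and eps are illustrative, not the exchange-rate data):

```python
import numpy as np

def recurrence_matrix(series, eps):
    """Binary recurrence matrix: R[i, j] = 1 where |x_i - x_j| <= eps."""
    x = np.asarray(series, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

rp = recurrence_matrix([0.0, 0.1, 1.0, 0.12], eps=0.05)
```

For the full analysis the scalar series would be replaced by delay-embedded state vectors, but the thresholded pairwise-distance construction is the same.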

  13. A model of characteristic earthquakes and its implications for regional seismicity

    DEFF Research Database (Denmark)

    López-Ruiz, R.; Vázquez-Prada, M.; Pacheco, A.F.

    2004-01-01

    Regional seismicity (i.e. that averaged over large enough areas over long enough periods of time) has a size-frequency relationship, the Gutenberg-Richter law, which differs from that found for some seismic faults, the Characteristic Earthquake relationship. But all seismicity comes in the end from active faults, so the question arises of how one seismicity pattern could emerge from the other. The recently introduced Minimalist Model of Vázquez-Prada et al. of characteristic earthquakes provides a simple representation of the seismicity originating from a single fault. Here, we show that a Characteristic Earthquake relationship together with a fractal distribution of fault lengths can accurately describe the total seismicity produced in a region. The resulting earthquake catalogue accounts for the addition of both all the characteristic and all the non-characteristic events triggered in the faults...
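The claim that characteristic earthquakes on a fractal (power-law) population of fault lengths can add up to Gutenberg-Richter-like regional statistics can be illustrated numerically. Everything below (the fractal exponent, the magnitude-length scaling M = 5 + 2·log10 L, the bin counts) is an illustrative toy, not the Minimalist Model itself:

```python
import numpy as np

def regional_counts(n_faults=50_000, d_exp=2.0, lmin=1.0, lmax=100.0, seed=0):
    """Characteristic-event counts from a power-law fault-length population.

    Fault lengths: p(L) ~ L^(-d_exp - 1) on [lmin, lmax] (fractal population).
    Each fault's characteristic magnitude grows with log10 of its length
    (schematic scaling M = 5 + 2 * log10 L). Returns counts per 0.5-mag bin.
    """
    rng = np.random.default_rng(seed)
    u = rng.random(n_faults)
    # inverse-CDF sampling of the truncated Pareto length distribution
    lengths = lmin / (1.0 - u * (1.0 - (lmin / lmax) ** d_exp)) ** (1.0 / d_exp)
    mags = 5.0 + 2.0 * np.log10(lengths)
    bins = np.arange(5.0, 9.01, 0.5)
    counts, _ = np.histogram(mags, bins)
    return bins, counts

bins, counts = regional_counts()
```

The counts fall off roughly geometrically with magnitude, i.e. linearly in log N versus M, which is the Gutenberg-Richter form the abstract says emerges from summing characteristic events over many faults.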

  14. Seismomagnetic models for earthquakes in the eastern part of Izu Peninsula, Central Japan

    Directory of Open Access Journals (Sweden)

    Y. Ishikawa

    1997-06-01

    Seismomagnetic changes accompanying four damaging earthquakes in the eastern part of Izu Peninsula, Central Japan, are explained by the piezomagnetic effect. Most of the data were obtained by repeat surveys. Although these data suffered from electric railway noise, significant magnetic changes were detected at points close to the earthquake faults. Coseismic changes are well interpreted by piezomagnetic models in the cases of the 1978 Near Izu-Oshima (M 7.0) and the 1980 East Off Izu Peninsula (M 6.7) earthquakes. A large total-intensity change of up to 5 nT was observed at a survey point almost directly above the epicenter of the 1976 Kawazu (M 5.4) earthquake. This change is not explained by a single-fault model; a two-segment fault is suggested. Remarkable precursory and coseismic changes in the total force intensity were observed at KWZ station along with the 1978 Higashi-Izu (M 4.9) earthquake. KWZ station is located very close to a buried subsidiary fault of the M 7.0 Near Izu-Oshima earthquake, which moved aseismically at the time of the M 7.0 quake. The precursory magnetic change before the M 4.9 quake is ascribed to aseismic faulting of this buried fault, and the coseismic rebound to enlargement of the slipping surface at the time of the M 4.9 quake. This implies that we observed the formation process of the earthquake nucleation zone via the magnetic field.

  15. REGIONAL SEISMIC AMPLITUDE MODELING AND TOMOGRAPHY FOR EARTHQUAKE-EXPLOSION DISCRIMINATION

    Energy Technology Data Exchange (ETDEWEB)

    Walter, W R; Pasyanos, M E; Matzel, E; Gok, R; Sweeney, J; Ford, S R; Rodgers, A J

    2008-07-08

    We continue exploring methodologies to improve earthquake-explosion discrimination using regional amplitude ratios such as P/S in a variety of frequency bands. Empirically, we demonstrate that such ratios separate explosions from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g., Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are also examining whether there is any relationship between the observed P/S and the point-source variability revealed by longer-period full waveform modeling (e.g., Ford et al., 2008). For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction; Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model, applied to a set of events and stations in both the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East. Monitoring the world for potential nuclear explosions requires characterizing seismic
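The general idea behind MDAC-style processing, correcting observed amplitudes for geometrical spreading and 1-D anelastic attenuation before forming a P/S discriminant, can be sketched schematically. The spreading exponent, Q values, velocities, and frequency below are generic illustrative choices, and this simplified correction is not the actual MDAC formulation:

```python
import math

def corrected_amplitude(a_obs, r_km, freq_hz, q, v_kms, gamma=1.0):
    """Undo r^-gamma geometrical spreading and exp(-pi*f*r/(Q*v)) attenuation
    (a schematic 1-D path correction, not the MDAC equations)."""
    return a_obs * (r_km ** gamma) * math.exp(math.pi * freq_hz * r_km / (q * v_kms))

def log_ps_ratio(p_amp, s_amp, r_km, freq_hz=4.0):
    """log10 P/S ratio after path correction (illustrative Q and velocities)."""
    p_corr = corrected_amplitude(p_amp, r_km, freq_hz, q=500.0, v_kms=6.0)
    s_corr = corrected_amplitude(s_amp, r_km, freq_hz, q=250.0, v_kms=3.5)
    return math.log10(p_corr / s_corr)
```

After such corrections, explosions tend to plot at higher P/S than earthquakes in the same band; the 2-D tomography described above replaces the single Q values with laterally varying attenuation maps.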

  16. Dual megathrust slip behaviors of the 2014 Iquique earthquake sequence

    Science.gov (United States)

    Meng, Lingsen; Huang, Hui; Bürgmann, Roland; Ampuero, Jean Paul; Strader, Anne

    2015-02-01

    The transition between seismic rupture and aseismic creep is of central interest to better understand the mechanics of subduction processes. A Mw 8.2 earthquake occurred on April 1st, 2014 in the Iquique seismic gap of northern Chile. This event was preceded by a long foreshock sequence including a 2-week-long migration of seismicity initiated by a Mw 6.7 earthquake. Repeating earthquakes were found among the foreshock sequence that migrated towards the mainshock hypocenter, suggesting a large-scale slow-slip event on the megathrust preceding the mainshock. The variations of the recurrence times of the repeating earthquakes highlight the diverse seismic and aseismic slip behaviors on different megathrust segments. The repeaters that were active only before the mainshock recurred more often and were distributed in areas of substantial coseismic slip, while repeaters that occurred both before and after the mainshock were in the area complementary to the mainshock rupture. The spatiotemporal distribution of the repeating earthquakes illustrates the essential role of propagating aseismic slip leading up to the mainshock and illuminates the distribution of postseismic afterslip. Various finite fault models indicate that the largest coseismic slip generally occurred down-dip from the foreshock activity and the mainshock hypocenter. Source imaging by teleseismic back-projection indicates an initial down-dip propagation stage followed by a rupture-expansion stage. In the first stage, the finite fault models show an emergent onset of moment rate at low frequency (< 0.5 Hz). This indicates frequency-dependent manifestations of seismic radiation in the low-stress foreshock region. In the second stage, the rupture expands in rich bursts along the rim of a semi-elliptical region with episodes of re-ruptures, suggesting delayed failure of asperities.
The high-frequency rupture remains within an area of local high trench-parallel gravity anomaly (TPGA), suggesting the presence of

  17. The Global Earthquake Model and Disaster Risk Reduction

    Science.gov (United States)

    Smolka, A. J.

    2015-12-01

    Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, and few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples of how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito, Ecuador. In agreement with GEM's collaborative approach, all

  18. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
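
    The bottom-up/lateral scheme described above can be sketched in a few lines of NumPy. The toy below (kernel size, initialization, and unrolling depth are illustrative assumptions, not the paper's architecture) shows a "BL" unit whose hidden state combines a bottom-up convolution of the input with a lateral convolution of its own previous state:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class BLCell:
    """Recurrent-convolutional unit with bottom-up (B) and lateral (L) kernels."""
    def __init__(self, ksize=3):
        self.w_b = rng.normal(scale=0.1, size=(ksize, ksize))  # bottom-up weights
        self.w_l = rng.normal(scale=0.1, size=(ksize, ksize))  # lateral (recurrent) weights

    def step(self, x, h_prev):
        # h_t = relu(x * W_B + h_{t-1} * W_L), '*' denoting 2-D convolution
        return relu(convolve(x, self.w_b, mode="constant")
                    + convolve(h_prev, self.w_l, mode="constant"))

# Unroll the lateral recurrence for a fixed number of time steps on one input.
cell = BLCell()
x = rng.normal(size=(28, 28))        # a single-channel toy "image"
h = np.zeros_like(x)
states = []
for _ in range(4):
    h = cell.step(x, h)              # same input each step, evolving hidden state
    states.append(h)
print(len(states), states[-1].shape)
```

Adding a top-down term from a higher layer's previous state turns this into the BT/BLT variants compared in the paper.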

  19. Direct and indirect evidence for earthquakes; an example from the Lake Tahoe Basin, California-Nevada

    Science.gov (United States)

    Maloney, J. M.; Noble, P. J.; Driscoll, N. W.; Kent, G.; Schmauder, G. C.

    2012-12-01

    High-resolution seismic CHIRP data can image direct evidence of earthquakes (i.e., offset strata) beneath lakes and the ocean. Nevertheless, direct evidence often is not imaged due to conditions such as gas in the sediments or steep basement topography. In these cases, indirect evidence for earthquakes (i.e., debris flows) may provide insight into the paleoseismic record. The four sub-basins of the tectonically active Lake Tahoe Basin provide an ideal opportunity to image direct evidence for earthquake deformation and compare it to indirect earthquake proxies. We present results from high-resolution seismic CHIRP surveys in Emerald Bay, Fallen Leaf Lake, and Cascade Lake to constrain the recurrence interval on the West Tahoe-Dollar Point Fault (WTDPF), which was previously identified as potentially the most hazardous fault in the Lake Tahoe Basin. Recently collected CHIRP profiles beneath Fallen Leaf Lake image slide deposits that appear synchronous with slides in other sub-basins. The temporal correlation of slides between multiple basins suggests triggering by events on the WTDPF. If correct, we postulate a recurrence interval for the WTDPF of ~3-4 k.y., indicating that the WTDPF may be approaching the end of its seismic cycle. In addition, CHIRP data beneath Cascade Lake image strands of the WTDPF that offset the lake floor by as much as ~7 m. The Cascade Lake data combined with onshore LiDAR allowed us to map the geometry of the WTDPF continuously across the southern Lake Tahoe Basin, yielding an improved geohazard assessment.

  20. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time...... series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical...... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal...

  1. A preliminary assessment of earthquake ground shaking hazard at Yucca Mountain, Nevada and implications to the Las Vegas region

    International Nuclear Information System (INIS)

    Wong, I.G.; Green, R.K.; Sun, J.I.; Pezzopane, S.K.; Abrahamson, N.A.; Quittmeyer, R.C.

    1996-01-01

    As part of early design studies for the potential Yucca Mountain nuclear waste repository, the authors have performed a preliminary probabilistic seismic hazard analysis of ground shaking. A total of 88 Quaternary faults within 100 km of the site were considered in the hazard analysis. They were characterized in terms of their probability of being seismogenic, and their geometry, maximum earthquake magnitude, recurrence model, and slip rate. Individual faults were characterized by maximum earthquakes that ranged from moment magnitude (Mw) 5.1 to 7.6. Fault slip rates ranged from a very low 0.00001 mm/yr to as much as 4 mm/yr. An areal source zone representing background earthquakes up to Mw 6¼ was also included in the analysis. Recurrence for these background events was based on the 1904-1994 historical record, which contains events up to Mw 5.6. Based on this analysis, the peak horizontal rock accelerations are 0.16, 0.21, 0.28, and 0.50 g for return periods of 500, 1,000, 2,000, and 10,000 years, respectively. In general, the dominant contributors to the ground shaking hazard at Yucca Mountain are background earthquakes, because of the low slip rates of the Basin and Range faults. A significant effect on the probabilistic ground motions is due to the inclusion of a new attenuation relation developed specifically for earthquakes in extensional tectonic regimes. This relation gives significantly lower peak accelerations than the five other, predominantly California-based relations used in the analysis, possibly due to the lower stress drops of extensional earthquakes compared to California events. Because Las Vegas is located within the same tectonic regime as Yucca Mountain, the seismic sources and the path and site factors affecting the seismic hazard at Yucca Mountain also have implications for Las Vegas. These implications are discussed in this paper.
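
    Return periods like those quoted above convert to exceedance probabilities under the Poisson occurrence assumption that is standard in probabilistic seismic hazard analysis. A minimal sketch (the 100-yr exposure time is an illustrative choice, not a value from the paper):

```python
import math

def exceedance_prob(return_period_yr, exposure_yr):
    """P(at least one exceedance during the exposure time), Poisson occurrence."""
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

# PGA levels quoted in the abstract, re-expressed for a 100-yr exposure:
for T, pga in [(500, 0.16), (1000, 0.21), (2000, 0.28), (10000, 0.50)]:
    print(f"{T:>6}-yr return period ({pga:.2f} g): "
          f"P(exceedance in 100 yr) = {exceedance_prob(T, 100):.4f}")
```

The 10,000-yr (0.50 g) level, for example, has only about a 1% chance of being exceeded in a 100-yr exposure under this assumption.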

  2. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    Science.gov (United States)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

    We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network of Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy of the earthquake coordinates. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.

  3. Time-decreasing hazard and increasing time until the next earthquake

    International Nuclear Information System (INIS)

    Corral, Alvaro

    2005-01-01

    The existence of a slowly but monotonically decreasing probability density for the recurrence times of earthquakes in the stationary case implies that the occurrence of an event at a given instant becomes less likely as the time since the previous event increases. Consequently, the expected waiting time to the next earthquake grows with the elapsed time; that is, the expected event recedes rapidly into the future. We have found direct empirical evidence of this counterintuitive behavior in two worldwide catalogs as well as in diverse regional catalogs. Universal scaling functions describe the phenomenon well.
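
    The effect is easy to reproduce numerically for any monotonically decreasing recurrence-time density. Below, a gamma distribution with shape parameter below one stands in for the catalog distributions (an illustrative choice, not the paper's fit); the conditional expected waiting time E[T − t | T > t] indeed grows with the elapsed time t:

```python
import numpy as np
from scipy.stats import gamma

# Gamma recurrence-time distribution with shape k < 1: monotonically decreasing
# density, mean recurrence time normalized to 1 (illustrative parameters).
k = 0.7
dist = gamma(k, scale=1.0 / k)

def expected_residual(t, t_max=200.0, n=200001):
    """E[T - t | T > t] = (1/S(t)) * integral_t^inf S(s) ds, S = survival fn."""
    s = np.linspace(t, t_max, n)
    surv = dist.sf(s)
    integral = np.sum(0.5 * (surv[1:] + surv[:-1]) * np.diff(s))  # trapezoid rule
    return integral / dist.sf(t)

waits = [expected_residual(t) for t in (0.0, 1.0, 5.0, 10.0)]
print(waits)   # grows with elapsed time: the expected event recedes
```

For an exponential (memoryless) density the sequence would be flat; the growth is a direct signature of the decreasing hazard described in the abstract.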

  4. Static stress changes associated with normal faulting earthquakes in South Balkan area

    Science.gov (United States)

    Papadimitriou, E.; Karakostas, V.; Tranos, M.; Ranguelov, B.; Gospodinov, D.

    2007-10-01

    Activation of major faults in Bulgaria and northern Greece presents significant seismic hazard because of their proximity to population centers. The long recurrence intervals, of the order of several hundred years as suggested by previous investigations, imply that the twentieth-century activation along the southern boundary of the sub-Balkan graben system is probably associated with stress transfer among neighbouring faults or fault segments. Fault interaction is investigated through elastic stress transfer among strong main shocks (M ≥ 6.0), and in three cases their foreshocks, which ruptured distinct or adjacent normal fault segments. We compute stress perturbations caused by earthquake dislocations in a homogeneous half-space. The stress change calculations were performed for faults of strike, dip, and rake appropriate to the strong events. We explore the interaction between normal faults in the study area by resolving changes of the Coulomb failure function (ΔCFF) since 1904 and hence the evolution of the stress field in the area during the last 100 years. Coulomb stress changes were calculated assuming that earthquakes can be modeled as static dislocations in an elastic half-space, taking into account both the coseismic slip in strong earthquakes and the slow tectonic stress buildup associated with major fault segments. We evaluate whether these stress changes brought a given strong earthquake closer to, or sent it farther from, failure. Our modeling results show that the generation of each strong event enhanced the Coulomb stress on its along-strike neighbors and reduced the stress on parallel normal faults. We extend the stress calculations up to the present and provide an assessment of future seismic hazard by identifying possible sites of impending strong earthquakes.
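
    The Coulomb failure function change used in such studies combines the shear and normal stress changes resolved on a receiver fault, ΔCFF = Δτ + μ′Δσn. A minimal sketch with hypothetical stress values (the effective friction coefficient is an assumed, commonly used value, not one taken from this paper):

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """ΔCFF = Δτ + μ'·Δσ_n, with Δσ_n > 0 meaning unclamping of the fault.

    mu_eff is an assumed effective friction coefficient, a value commonly
    used in studies of this kind.
    """
    return d_tau + mu_eff * d_sigma_n

# Hypothetical stress changes, in bars, on two receiver faults:
loaded = coulomb_stress_change(0.5, 0.2)     # along-strike neighbor: brought closer to failure
relaxed = coulomb_stress_change(-0.3, -0.5)  # parallel normal fault: moved away from failure
print(loaded, relaxed)
```

A positive ΔCFF moves the receiver fault toward failure, matching the along-strike loading pattern reported in the abstract; a negative value corresponds to the stress shadow on parallel faults.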

  5. Fault slip and earthquake recurrence along strike-slip faults — Contributions of high-resolution geomorphic data

    KAUST Repository

    Zielke, Olaf; Klinger, Yann; Arrowsmith, J. Ramon

    2015-01-01

    to contribute to better-informed models of EQ recurrence and slip-accumulation patterns. After reviewing motivation and background, we outline requirements to successfully reconstruct a fault's offset accumulation pattern from geomorphic evidence. We address

  6. Earthquake geology of the Bulnay Fault (Mongolia)

    Science.gov (United States)

    Rizza, Magali; Ritz, Jean-Franciois; Prentice, Carol S.; Vassallo, Ricardo; Braucher, Regis; Larroque, Christophe; Arzhannikova, A.; Arzhanikov, S.; Mahan, Shannon; Massault, M.; Michelot, J-L.; Todbileg, M.

    2015-01-01

    The Bulnay earthquake of July 23, 1905 (Mw 8.3-8.5), in north-central Mongolia, is one of the world's largest recorded intracontinental earthquakes and one of four great earthquakes that occurred in the region during the 20th century. The 375-km-long surface rupture of the left-lateral, strike-slip, N095°E-trending Bulnay Fault associated with this earthquake is remarkable for its pronounced expression across the landscape and for the size of features produced by previous earthquakes. Our field observations suggest that in many areas the width and geometry of the rupture zone is the result of repeated earthquakes; however, in those areas where it is possible to determine that the geomorphic features are the result of the 1905 surface rupture alone, the features produced by this single earthquake are exceptional in size compared with most other historical strike-slip surface ruptures worldwide. Along an 80 km stretch between 97.18°E and 98.33°E, the fault zone is several meters wide, and the mean left-lateral 1905 offset is 8.9 ± 0.6 m, with two measured cumulative offsets that are twice the 1905 slip. These observations suggest that the displacement produced during the penultimate event was similar to the 1905 slip. Morphotectonic analyses carried out at three sites along the eastern part of the Bulnay fault allow us to estimate a mean horizontal slip rate of 3.1 ± 1.7 mm/yr over the Late Pleistocene-Holocene period. In parallel, paleoseismological investigations show evidence for two earthquakes prior to the 1905 event, with recurrence intervals of ~2700-4000 years.
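
    The quoted recurrence intervals are consistent with a simple characteristic-slip calculation: dividing the mean 1905 offset by the mean slip rate gives the time needed to re-accumulate one event's worth of slip (a back-of-the-envelope check; the large slip-rate uncertainty of ±1.7 mm/yr is ignored here):

```python
# Characteristic-slip estimate of the recurrence interval from the numbers
# given in the abstract.
slip_per_event_m = 8.9       # mean left-lateral 1905 offset (m)
slip_rate_mm_yr = 3.1        # mean Late Pleistocene-Holocene slip rate (mm/yr)

recurrence_yr = slip_per_event_m * 1000.0 / slip_rate_mm_yr
print(round(recurrence_yr))  # ~2900 yr, inside the ~2700-4000 yr paleoseismic range
```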

  7. Spatial Distribution of the Coefficient of Variation and Bayesian Forecast for the Paleo-Earthquakes in Japan

    Science.gov (United States)

    Nomura, Shunichi; Ogata, Yosihiko

    2016-04-01

    We propose a Bayesian method of probability forecasting for recurrent earthquakes of inland active faults in Japan. Renewal processes with the Brownian Passage Time (BPT) distribution are applied to over half of the active faults in Japan by the Headquarters for Earthquake Research Promotion (HERP) of Japan. Long-term forecasting with the BPT distribution needs two parameters: the mean and the coefficient of variation (COV) of the recurrence intervals. HERP applies a common COV parameter to all of these faults because most of them have very few documented paleoseismic events, too few to estimate reliable COV values for the individual faults. However, different COV estimates have been proposed for the same paleoseismic catalog by related works. Applying different COV estimates can make a critical difference in the forecast, so the COV should be carefully selected for individual faults. Recurrence intervals on a fault are, on average, determined by the long-term slip rate caused by tectonic motion, but are perturbed by nearby seismicity that influences the surrounding stress field. The COVs of recurrence intervals depend on such stress perturbations and so have spatial trends due to the heterogeneity of tectonic motion and seismicity. We therefore introduce a spatial structure on the COV parameter by Bayesian modeling with a Gaussian process prior. The COVs on active faults are correlated and take similar values for closely located faults. We find that the spatial trends in the estimated COV values coincide with the density of active faults in Japan. We also show Bayesian forecasts by the proposed model using a Markov chain Monte Carlo method. Our forecasts differ from HERP's, especially on active faults where HERP's forecasts are very high or very low.
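
    A minimal sketch of a BPT renewal forecast illustrates why the COV choice matters: with the density written in terms of the mean μ and COV α, the conditional probability of rupture in a coming window changes appreciably when α changes. The fault parameters below are hypothetical, not HERP values:

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density, mean mu, COV alpha."""
    return (np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3))
            * np.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t)))

def conditional_prob(elapsed, window, mu, alpha, n=200001):
    """P(event in (elapsed, elapsed+window] | no event through elapsed)."""
    t = np.linspace(1e-6, 50.0 * mu, n)              # dense grid, tail truncated
    cdf = np.cumsum(bpt_pdf(t, mu, alpha)) * (t[1] - t[0])
    F = lambda x: np.interp(x, t, cdf)
    return (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))

# Hypothetical fault: mean recurrence 1000 yr, 900 yr elapsed, 30-yr window.
p_low = conditional_prob(900.0, 30.0, mu=1000.0, alpha=0.24)
p_high = conditional_prob(900.0, 30.0, mu=1000.0, alpha=0.5)
print(p_low, p_high)   # the assumed COV noticeably changes the 30-yr forecast
```

Because the same paleoseismic data can support different α estimates, this sensitivity is exactly why the paper argues for fault-specific, spatially structured COV values.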

  8. Effects of acoustic waves on stick-slip in granular media and implications for earthquakes

    Science.gov (United States)

    Johnson, P.A.; Savage, H.; Knuth, M.; Gomberg, J.; Marone, Chris

    2008-01-01

    It remains unknown how the small strains induced by seismic waves can trigger earthquakes at large distances, in some cases thousands of kilometres from the triggering earthquake, with failure often occurring long after the waves have passed. Earthquake nucleation is usually observed to take place at depths of 10-20 km, and so static overburden should be large enough to inhibit triggering by seismic-wave stress perturbations. To understand the physics of dynamic triggering better, as well as the influence of dynamic stressing on earthquake recurrence, we have conducted laboratory studies of stick-slip in granular media with and without applied acoustic vibration. Glass beads were used to simulate granular fault zone material, sheared under constant normal stress, and subject to transient or continuous perturbation by acoustic waves. Here we show that small-magnitude failure events, corresponding to triggered aftershocks, occur when applied sound-wave amplitudes exceed several microstrain. These events are frequently delayed or occur as part of a cascade of small events. Vibrations also cause large slip events to be disrupted in time relative to those without wave perturbation. The effects are observed for many large-event cycles after vibrations cease, indicating a strain memory in the granular material. Dynamic stressing of tectonic faults may play a similar role in determining the complexity of earthquake recurrence. ©2007 Nature Publishing Group.

  9. Attention-based Memory Selection Recurrent Network for Language Modeling

    OpenAIRE

    Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi

    2016-01-01

    Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since RNNs have a fixed memory size, they cannot store all the information about the words they have seen earlier in the sentence, and thus useful long-term information may be ignored when predicting the next words. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...

  10. Tsunami Numerical Simulation for Hypothetical Giant or Great Earthquakes along the Izu-Bonin Trench

    Science.gov (United States)

    Harada, T.; Ishibashi, K.; Satake, K.

    2013-12-01

    We performed tsunami numerical simulations for various giant/great fault models along the Izu-Bonin trench in order to examine the behavior of tsunamis originating in this region and the recurrence pattern of great interplate earthquakes along the Nankai trough off southwest Japan. As a result, large tsunami heights are expected in the Ryukyu Islands and on the Pacific coasts of Kyushu, Shikoku and western Honshu. The computed large tsunami heights support the hypothesis that the 1605 Keicho Nankai earthquake was not a tsunami earthquake along the Nankai trough but a giant or great earthquake along the Izu-Bonin trench (Ishibashi and Harada, 2013, SSJ Fall Meeting abstract). The Izu-Bonin subduction zone has been regarded as a so-called 'Mariana-type' subduction zone where M>7 interplate earthquakes do not inherently occur. However, since several M>7 outer-rise earthquakes have occurred in this region, and since the largest slip of the 2011 Tohoku earthquake (M9.0) took place on the shallow plate interface where strain accumulation had been considered small, the possibility of M>8.5 earthquakes in this region may not be negligible. The latest M7.4 outer-rise earthquake off the Bonin Islands on Dec. 22, 2010 produced small tsunamis on the Pacific coast of Japan except for the Tohoku and Hokkaido districts, and a zone of abnormal seismic intensity in the Kanto and Tohoku districts. Ishibashi and Harada (2013) proposed a working hypothesis that the 1605 Keicho earthquake, which is considered a great tsunami earthquake along the Nankai trough, was a giant/great earthquake along the Izu-Bonin trench, based on the similarity of the distributions of ground shaking and tsunami of this event and the 2010 Bonin earthquake. In this study, in order to examine the behavior of tsunamis from giant/great earthquakes along the Izu-Bonin trench and to test Ishibashi and Harada's hypothesis, we performed tsunami numerical simulations from fault models along the Izu-Bonin trench

  11. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    Science.gov (United States)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute, the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. To date, more than 100 earthquake forecast models have been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering an area of Japan including sea areas, the Japanese mainland and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been completed. These results provide new knowledge concerning statistical forecasting models. We have started a study to construct a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity of the area ranges from shallow depths down to 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop models for forecasting based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0 as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HISTETAS models (Ogata, 2011) to see whether these models perform as well as they did in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation).
We use CSEP
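
    One of the standard CSEP consistency checks, the number (N-) test, can be sketched in a few lines: it compares the observed event count in a testing round with the forecast's expected count under a Poisson assumption. The counts below are hypothetical:

```python
from scipy.stats import poisson

def n_test(n_forecast, n_observed):
    """CSEP-style number test: tail probabilities of the observed count under
    a Poisson distribution whose mean is the forecast's expected count."""
    delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)  # P(X >= n_observed)
    delta2 = poisson.cdf(n_observed, n_forecast)            # P(X <= n_observed)
    return delta1, delta2

# Hypothetical round: the model forecast 20 events and 28 were observed.
d1, d2 = n_test(20.0, 28)
print(d1, d2)   # a very small delta1 would indicate the forecast underpredicts
```

A forecast is rejected in a given round when either tail probability falls below the chosen significance level; the CSEP suite combines this with likelihood- and spatial-consistency tests.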

  12. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  13. Modified two-layer social force model for emergency earthquake evacuation

    Science.gov (United States)

    Zhang, Hao; Liu, Hong; Qin, Xin; Liu, Baoxi

    2018-02-01

    Studies of crowd behavior, with related research on computer simulation, provide an effective basis for architectural design and effective crowd management. Based on low-density group organization patterns, a modified two-layer social force model is proposed in this paper to simulate and reproduce the group gathering process. First, this paper studies evacuation videos from the Luan'xian earthquake in 2012 and extends the study of group organization patterns to higher densities. Furthermore, taking advantage of the model's strength in crowd-gathering simulations, a new grouping and guidance method based on crowd dynamics is proposed. Second, a real-life grouping situation in earthquake evacuation is simulated and reproduced. Compared with the basic social force model and an existing guided-crowd model, the modified model reduces congestion time and more faithfully reflects group behaviors. The experimental results also show that a stable group pattern and a suitable leader decrease collisions and allow a safer evacuation process.
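
    The underlying single-layer social force update can be sketched as follows: each pedestrian feels a driving force relaxing its velocity toward a desired walking speed, plus exponential repulsion from neighbors. The parameter values (desired speed, relaxation time, repulsion strength and range) are generic Helbing-style choices, not the calibrated values of the modified model:

```python
import numpy as np

def social_force(pos, vel, goal, others, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """Helbing-style social force: relaxation toward the desired velocity plus
    exponential repulsion from each other pedestrian."""
    e = (goal - pos) / np.linalg.norm(goal - pos)   # unit vector toward the goal
    f = (v0 * e - vel) / tau                        # driving (desired-velocity) term
    for q in others:
        d = pos - q
        dist = np.linalg.norm(d)
        f += A * np.exp(-dist / B) * d / dist       # pushes away from neighbor q
    return f

# One pedestrian walks toward a goal past a standing neighbor (explicit Euler).
pos = np.array([0.0, 0.0])
vel = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
others = [np.array([1.0, 0.2])]
dt = 0.1
for _ in range(50):
    vel = vel + dt * social_force(pos, vel, goal, others)
    pos = pos + dt * vel
print(pos)   # advanced toward the goal, deflected slightly around the neighbor
```

The paper's two-layer extension adds a group layer (leaders and intra-group attraction) on top of this individual-level dynamics.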

  14. Modelling earth current precursors in earthquake prediction

    Directory of Open Access Journals (Sweden)

    R. Di Maio

    1997-06-01

    This paper deals with the theory of earth-current precursors of earthquakes. A dilatancy-diffusion-polarization model is proposed to explain the anomalies of the electric potential that are observed on the ground surface prior to some earthquakes. The electric polarization is believed to be an electrokinetic effect due to the invasion of fluids into new pores opened inside a stressed, dilated rock body. The time and space variation of the distribution of the electric potential in a layered earth, as well as in a faulted half-space, is studied in detail. The results show that the surface response depends on the underground conductivity distribution and on the relative disposition of the measuring dipole with respect to the buried bipole source. A field procedure based on an areal layout of the recording sites is proposed, in order to obtain the most complete information on the time and space evolution of precursory phenomena in any given seismic region.

  15. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    Science.gov (United States)

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
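
    The Gutenberg-Richter branch of such a calculation can be sketched with a Reasenberg-Jones-type clustering rate, integrating an Omori-type decay over the three-day window. With generic California parameter values (assumed here for illustration, not taken from the paper), the result lands near the 0.0009 figure quoted above:

```python
import math

def clustering_prob(m_main, m_target, days=3.0, a=-1.67, b=0.91, c=0.05, p=1.08):
    """P(>= 1 event with M >= m_target within `days` of a magnitude m_main event).

    Expected count N = 10**(a + b*(m_main - m_target)) * integral_0^T (t+c)**-p dt,
    then P = 1 - exp(-N). The (a, b, c, p) defaults are generic California values.
    """
    rate_amp = 10.0 ** (a + b * (m_main - m_target))
    time_int = ((days + c) ** (1.0 - p) - c ** (1.0 - p)) / (1.0 - p)  # closed form, p != 1
    return 1.0 - math.exp(-rate_amp * time_int)

p3day = clustering_prob(4.8, 7.0)
print(f"{p3day:.5f}")   # order 1e-3, comparable to the Gutenberg-Richter estimate above
```

Replacing the Gutenberg-Richter magnitude term with a characteristic-earthquake magnitude-frequency distribution is what raises the probability by more than an order of magnitude in the comparison discussed in the abstract.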

  16. Fault healing and earthquake spectra from stick slip sequences in the laboratory and on active faults

    Science.gov (United States)

    McLaskey, G. C.; Glaser, S. D.; Thomas, A.; Burgmann, R.

    2011-12-01

    Repeating earthquake sequences (RES) are thought to occur on isolated patches of a fault that fail in repeated stick-slip fashion. RES enable researchers to study the effect of variations in earthquake recurrence time and the relationship between fault healing and earthquake generation. Fault healing is thought to be the physical process responsible for the 'state' variable in widely used rate- and state-dependent friction equations. We analyze RES created in laboratory stick-slip experiments on a direct shear apparatus instrumented with an array of very high frequency (1 kHz - 1 MHz) displacement sensors. Tests are conducted on the model material polymethylmethacrylate (PMMA). While the frictional properties of this glassy polymer can be characterized with rate- and state-dependent friction laws, the rate of healing in PMMA is higher than that of rock at room temperature. Our experiments show that in addition to a modest increase in fault strength and stress drop with increasing healing time, there are distinct spectral changes in the recorded laboratory earthquakes. Using the impact of a tiny sphere on the surface of the test specimen as a known source calibration function, we are able to remove the instrument and apparatus response from the recorded signals so that the source spectrum of the laboratory earthquakes can be accurately estimated. The rupture of a fault that was allowed to heal produces a laboratory earthquake with increased high-frequency content compared to one produced by a fault that has had less time to heal. These laboratory results are supported by observations of RES on the Calaveras and San Andreas faults, which show similar spectral changes when recurrence time is perturbed by a nearby large earthquake. Healing is typically attributed to a creep-like relaxation of the material which causes the true area of contact of interacting asperity populations to increase with time in a quasi-logarithmic way. The increase in high frequency seismicity shown here

  17. Prospective testing of Coulomb short-term earthquake forecasts

    Science.gov (United States)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily and allows daily updates of the models; however, much can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough, and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be computed without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of

  18. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    Science.gov (United States)

    Cocco, M.

    2001-12-01

    Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability may thus be the most suitable tool for testing the interaction between large-magnitude earthquakes. Despite these important implications and stimulating perspectives, there remain problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and if, the induced stress perturbations modify the ratio of small to large magnitude earthquakes. In other words, we cannot distinguish between a change of this ratio in favor of small events or of large-magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to

  19. Modelling end-glacial earthquakes at Olkiluoto

    International Nuclear Information System (INIS)

    Faelth, B.; Hoekmark, H.

    2011-02-01

    The objective of this study is to obtain estimates of the possible effects that post-glacial seismic events in three verified deformation zones (BFZ100, BFZ021/099 and BFZ214) at the Olkiluoto site may have on nearby fractures in terms of induced fracture shear displacement. The study is carried out using large-scale models analysed dynamically with the three-dimensional distinct element code 3DEC. Earthquakes are simulated in a schematic way; large planar discontinuities representing earthquake faults are surrounded by a number of smaller discontinuities which represent rock fractures in which shear displacements could potentially be induced by the effects of the slipping fault. Initial stresses, based on best estimates of the present-day in situ stresses and on state-of-the-art calculations of glacially-induced stresses, are applied. The fault rupture is then initiated at a pre-defined hypocentre and programmed to propagate outward along the fault plane with a specified rupture velocity until it is arrested at the boundary of the prescribed rupture area. Fault geometries, fracture orientations, the in situ stress model and material property parameter values are based on data obtained from the Olkiluoto site investigations. Glacially-induced stresses are obtained from state-of-the-art ice-crust/mantle finite element analyses. The response of the surrounding smaller discontinuities, i.e. the induced fracture shear displacement, is the main output from the simulations.

  20. On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale

    Science.gov (United States)

    Hristopulos, Dionissios

    2013-04-01

    The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since interevent-time distributions other than the Weibull are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. Keywords: hypothesis testing, modified Weibull, hazard rate, finite size. References: [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] I. Eliazar and J. Klafter, 2006
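    The kind of goodness-of-fit check described above can be sketched with a hand-rolled one-sample Kolmogorov–Smirnov statistic compared against a Weibull interevent-time model; the shape and scale values below are hypothetical, and the sample is synthetic rather than from a real catalog.

```python
import math
import random

def weibull_cdf(t, shape, scale):
    """CDF of the Weibull distribution."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def ks_statistic(samples, cdf):
    """One-sample Kolmogorov-Smirnov statistic D_n against a reference CDF."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

random.seed(1)
shape, scale = 1.5, 100.0  # hypothetical interevent-time parameters (days)
# Inverse-CDF sampling: t = scale * (-ln(1 - U))**(1/shape), U uniform on [0, 1)
times = [scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)
         for _ in range(500)]

d = ks_statistic(times, lambda t: weibull_cdf(t, shape, scale))
crit = 1.36 / math.sqrt(len(times))  # asymptotic 5% critical value
print(f"D = {d:.4f}, 5% critical value = {crit:.4f}")
```

    Since the synthetic sample is drawn from the reference Weibull itself, D should fall well below the critical value; with real catalog data, a D above the critical value would reject that candidate distribution.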

  1. Comparison of test and earthquake response modeling of a nuclear power plant containment building

    Energy Technology Data Exchange (ETDEWEB)

    Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.

    1985-01-01

    The reactor building of a BWR plant was subjected to dynamic testing, a minor earthquake, and a strong earthquake at different times. Analytical models simulating each of these events were devised by previous investigators. A comparison of the characteristics of these models is made in this paper. The different modeling assumptions involved in the different simulation analyses restrict the validity of the models for general use and also narrow the comparison down to only a few modes. The dynamic tests successfully identified the first mode of the soil-structure system.

  2. Comparison of test and earthquake response modeling of a nuclear power plant containment building

    International Nuclear Information System (INIS)

    Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.

    1985-01-01

    The reactor building of a BWR plant was subjected to dynamic testing, a minor earthquake, and a strong earthquake at different times. Analytical models simulating each of these events were devised by previous investigators. A comparison of the characteristics of these models is made in this paper. The different modeling assumptions involved in the different simulation analyses restrict the validity of the models for general use and also narrow the comparison down to only a few modes. The dynamic tests successfully identified the first mode of the soil-structure system.

  3. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    Science.gov (United States)

    Thomas, M. Y.; Bhat, H. S.

    2017-12-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models, with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  4. Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model

    Science.gov (United States)

    Thomas, Marion Y.; Bhat, Harsha S.

    2018-05-01

    Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alters the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for dynamic evolution of elastic moduli at high strain rates. We consider 2D in-plane models, with a 1D right-lateral fault governed by a slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could potentially be seen as a geological signature of this transition. The scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.

  5. Numerical Modelling of the 1995 Dinar Earthquake Effects on Hydrodynamic Regime of Egirdir Lake

    Directory of Open Access Journals (Sweden)

    Murat Aksel

    2017-11-01

    Full Text Available The effect of earthquakes on closed and semi-closed water systems is a research topic that has been studied for many years. The lack of continuous measurement systems in closed and semi-closed water bodies makes it difficult to examine and investigate what happens in them during an earthquake. The morphological structures of lakes record the characteristics of the events that occurred during an earthquake. With advances in technology and growing research demand, monitoring and measurement stations have been installed in some lakes, gulfs, estuaries, and similar systems. Today, higher-quality measurements and field data can be obtained, and more complicated processes are being explored with the help of faster computers. Computational fluid dynamics is used because of the difficulty of calculating the hydrodynamic response of lakes to earthquakes. Numerical-modelling investigations of earthquake effects on closed and semi-closed water systems have been increasing in recent years. The quality of bathymetric data gathered from the field, continuous dynamic water-level measurements, and the use of new technologies in surveying bottom materials all contribute to improved knowledge of the formation, behaviour, and effects of such events. In this study, the effects of the 1995 Dinar Earthquake on the hydrodynamic regime of Egirdir Lake were investigated with a numerical modelling approach.

  6. Modeling the recurrent failure to thrive in less than two-year children: recurrent events survival analysis.

    Science.gov (United States)

    Saki Malehi, Amal; Hajizadeh, Ebrahim; Ahmadi, Kambiz; Kholdi, Nahid

    2014-01-01

    This study aims to evaluate recurrent failure to thrive (FTT) events over time. This longitudinal study was conducted from February 2007 to July 2009. The primary outcome was growth failure. The analysis was based on 1283 children who had experienced FTT several times, using recurrent events analysis. Fifty-nine percent of the children had experienced FTT at least once and 5.3% of them had experienced it up to four times. The Prentice-Williams-Peterson (PWP) model revealed significant relationships between diarrhea (HR=1.26), respiratory infections (HR=1.25), urinary tract infections (HR=1.51), discontinuation of breast-feeding (HR=1.96), teething (HR=1.18), initiation age of complementary feeding (HR=1.11) and the hazard rate of the first FTT event. The recurrent nature of FTT is a central issue; taking it into account increases the accuracy of the analysis of the FTT event process and can help identify distinct risk factors for each FTT recurrence.
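    The PWP model described above stratifies each subject's follow-up by recurrence number and measures time as the gap since the previous event. A minimal sketch of that data restructuring (the episode times below are hypothetical, and the fitting step itself would be done with a survival-analysis package):

```python
def pwp_gap_time_records(event_times, censor_time):
    """Restructure one subject's recurrent-event history into the stratified
    gap-time records used by a Prentice-Williams-Peterson model: one record
    per recurrence stratum, plus a censored record for the open interval."""
    records, start = [], 0.0
    for k, t in enumerate(event_times, start=1):
        records.append({"stratum": k, "gap": t - start, "event": 1})
        start = t
    records.append({"stratum": len(event_times) + 1,
                    "gap": censor_time - start, "event": 0})
    return records

# Hypothetical child with FTT episodes at 4, 9 and 15 months, followed to 24 months:
recs = pwp_gap_time_records([4.0, 9.0, 15.0], censor_time=24.0)
for r in recs:
    print(r)
```

    Stratifying by recurrence number lets each k-th event have its own baseline hazard, which is what allows different risk-factor effects to be estimated for each FTT recurrence.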

  7. Evaluation of earthquake-triggered landslides in El Salvador using a GIS-based Newmark model

    OpenAIRE

    García Rodríguez, María José; Havenith, Hans; Benito Oterino, Belen

    2008-01-01

    In this work, a model for evaluating earthquake-triggered landslide hazard following the Newmark methodology is developed in a Geographical Information System (GIS). It is applied to El Salvador, one of the most seismically active regions in Central America, where the last severe destructive earthquakes occurred on January 13th and February 13th, 2001. The first of these earthquakes triggered more than 500 landslides and killed at least 844 people. This study is centred on the area (10x6km) w...
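    Regional Newmark studies of this kind typically estimate sliding-block displacement from the ratio of a slope's critical acceleration to the peak ground acceleration. The regression below follows the form of published Jibson-style fits, but the coefficients should be treated as illustrative assumptions, not the ones used in this study.

```python
import math

def newmark_displacement_cm(a_crit, a_max):
    """Estimated Newmark sliding-block displacement (cm) from the critical
    acceleration of the slope and the peak ground acceleration (both in g),
    using an illustrative regression:
    log10(Dn) = 0.215 + log10[(1 - ac/amax)**2.341 * (ac/amax)**-1.438]."""
    r = a_crit / a_max
    if r >= 1.0:
        return 0.0  # shaking never exceeds the critical acceleration
    return 10.0 ** (0.215 + math.log10((1.0 - r) ** 2.341 * r ** -1.438))

# A slope with a_crit = 0.1 g shaken at 0.4 g:
print(f"{newmark_displacement_cm(0.1, 0.4):.1f} cm")
```

    In a GIS implementation this function is simply evaluated cell by cell over rasters of critical acceleration (from slope and strength maps) and predicted peak ground acceleration.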

  8. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    Science.gov (United States)

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake, making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  9. Seismic quiescence in a frictional earthquake model

    Science.gov (United States)

    Braun, Oleg M.; Peyrard, Michel

    2018-04-01

    We investigate the origin of seismic quiescence with a generalized version of the Burridge-Knopoff model for earthquakes and show that it can be generated by a multipeaked probability distribution of the thresholds at which contacts break. Such a distribution is not assumed a priori but naturally results from the aging of the contacts. We show that the model can exhibit quiescence as well as enhanced foreshock activity, depending on the value of some parameters. This provides a generic understanding for seismic quiescence, which encompasses earlier specific explanations and could provide a pathway for a classification of faults.
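    The spring-block family that the generalized Burridge–Knopoff model belongs to can be sketched with a quasi-static Olami–Feder–Christensen-type cellular version: drive all blocks uniformly until one reaches its failure threshold, then let stress redistribute to neighbours in an avalanche. This is a generic member of the family, not the authors' aging-contact model, and all parameter values are arbitrary assumptions.

```python
import random

def ofc_catalog(n=200, alpha=0.2, cycles=2000, seed=0):
    """Quasi-static 1-D Olami-Feder-Christensen-type spring-block model:
    drive uniformly until one block reaches the threshold (here 1.0), then
    let it fail, passing a fraction alpha of its stress to each neighbour
    (the rest is dissipated). Returns one avalanche size per driving cycle."""
    rng = random.Random(seed)
    stress = [rng.random() for _ in range(n)]
    sizes = []
    for _ in range(cycles):
        bump = 1.0 - max(stress)           # uniform tectonic loading
        stress = [s + bump for s in stress]
        size = 0
        active = [i for i, s in enumerate(stress) if s >= 1.0 - 1e-9]
        while active:
            touched = set()
            for i in active:               # fail every super-threshold block
                s, stress[i] = stress[i], 0.0
                size += 1
                for j in (i - 1, i + 1):
                    if 0 <= j < n:
                        stress[j] += alpha * s
                        touched.add(j)
            active = [j for j in touched if stress[j] >= 1.0]
        sizes.append(size)
    return sizes

sizes = ofc_catalog()
print(f"{len(sizes)} events, largest avalanche: {max(sizes)} blocks")
```

    In a generalized model of the kind described above, the uniform threshold of 1.0 would be replaced by per-contact thresholds drawn from an evolving (aging) distribution, which is what produces quiescence and foreshock activity.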

  10. The finite-difference and finite-element modeling of seismic wave propagation and earthquake motion

    International Nuclear Information System (INIS)

    Moczo, P.; Kristek, J.; Pazak, P.; Balazovjech, M.; Galis, M.

    2007-01-01

    Numerical modeling of seismic wave propagation and earthquake motion is an irreplaceable tool in the investigation of the Earth's structure, processes within the Earth, and particularly earthquake phenomena. Among the various numerical methods, the finite-difference method is the dominant method in the modeling of earthquake motion. Moreover, it is becoming increasingly important in seismic exploration and structural modeling, and we are convinced that the best days of the finite-difference method in seismology are still ahead. This monograph provides a tutorial and detailed introduction to the application of the finite-difference (FD), finite-element (FE), and hybrid FD-FE methods to the modeling of seismic wave propagation and earthquake motion. The text does not cover all topics and aspects of the methods; we focus on those to which we have contributed. We present alternative formulations of the equation of motion for a smooth elastic continuum. We then develop alternative formulations for a canonical problem with a welded material interface and a free surface. We continue with a model of an earthquake source. We complete the general theoretical introduction with a chapter on the constitutive laws for elastic and viscoelastic media, and a brief review of strong formulations of the equation of motion. What follows is a block of chapters on the finite-difference and finite-element methods. We develop FD targets for the free surface and welded material interface. We then present various FD schemes for a smooth continuum, free surface, and welded interface, focusing on the staggered-grid and, mainly, optimally-accurate FD schemes. We also present alternative formulations of the FE method. We include the FD and FE implementations of the traction-at-split-nodes method for simulation of dynamic rupture propagation. The FD modeling is applied to the model of the deep sedimentary Grenoble basin, France. The FD and FE methods are combined in the hybrid FD-FE method. The hybrid
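    As a toy illustration of the staggered-grid velocity-stress formulation discussed above, here is a minimal 1-D scheme, second-order in space and time. The grid size, material values, and source are arbitrary assumptions chosen only to satisfy the CFL stability condition.

```python
import math

def fd1d_sh(nx=300, nt=400, dx=10.0, dt=0.001, rho=2500.0, vs=3000.0, f0=25.0):
    """Minimal 1-D staggered-grid velocity-stress finite-difference scheme:
    velocities live at integer nodes, stresses at half nodes. Ends are fixed
    (zero velocity), so waves reflect off the boundaries if run long enough."""
    assert vs * dt / dx < 1.0, "CFL stability condition violated"
    mu = rho * vs * vs          # shear modulus
    v = [0.0] * nx              # particle velocity
    s = [0.0] * (nx - 1)        # shear stress
    src = nx // 2
    for it in range(nt):
        # Ricker-wavelet source injected into the velocity field at the centre
        t = it * dt - 0.04
        arg = (math.pi * f0 * t) ** 2
        v[src] += (1.0 - 2.0 * arg) * math.exp(-arg)
        for i in range(nx - 1):                     # update stresses
            s[i] += dt * mu * (v[i + 1] - v[i]) / dx
        for i in range(1, nx - 1):                  # update velocities
            v[i] += dt / rho * (s[i] - s[i - 1]) / dx
    return v

v = fd1d_sh()
```

    The leapfrog between the stress and velocity updates is what makes the grid "staggered" in time as well as space; higher-order and optimally-accurate schemes refine exactly these two update loops.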

  11. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    Science.gov (United States)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
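    The peak-ground-displacement scaling step mentioned above can be sketched by inverting a PGD attenuation relation for magnitude. The coefficients below are illustrative stand-ins for published real-time GPS regressions, not G-FAST's own values.

```python
import math

# PGD scaling law of the form log10(PGD) = A + B*Mw + C*Mw*log10(R), with PGD
# in cm and hypocentral distance R in km; coefficients are assumptions here.
A, B, C = -4.434, 1.047, -0.138

def mw_from_pgd(pgd_cm, r_km):
    """Invert the scaling law for moment magnitude given one observation."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

# 50 cm of peak ground displacement observed 100 km from the hypocentre:
print(f"Mw ~ {mw_from_pgd(50.0, 100.0):.1f}")
```

    In practice the estimate is formed by averaging (or least-squares fitting) over many stations at different distances, which stabilizes the magnitude within the first minutes of a large event.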

  12. Stress triggering of the Lushan M7.0 earthquake by the Wenchuan Ms8.0 earthquake

    Directory of Open Access Journals (Sweden)

    Wu Jianchao

    2013-08-01

    Full Text Available The Wenchuan Ms8.0 earthquake and the Lushan M7.0 earthquake occurred in the north and south segments of the Longmenshan nappe tectonic belt, respectively. Based on the focal mechanism and finite fault model of the Wenchuan Ms8.0 earthquake, we calculated the Coulomb failure stress change. The inverted Coulomb stress changes based on the Nishimura and Chenji models both show that the Lushan M7.0 earthquake occurred in an area of increased Coulomb failure stress induced by the Wenchuan Ms8.0 earthquake. The Coulomb failure stress increased by approximately 0.135-0.152 bar at the source of the Lushan M7.0 earthquake, which is far more than the stress triggering threshold. Therefore, the Lushan M7.0 earthquake was most likely triggered by the Coulomb failure stress change.
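    The quantity computed above is the change in Coulomb failure stress resolved on the receiver fault, ΔCFS = Δτ + μ′Δσn. A minimal sketch, with hypothetical stress changes chosen only to land in the range quoted above:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change dCFS = d_tau + mu' * d_sigma_n, where
    d_tau is the shear stress change resolved in the slip direction,
    d_sigma_n the normal stress change (positive = unclamping), and
    mu_eff the effective friction coefficient."""
    return d_tau + mu_eff * d_sigma_n

# Hypothetical stress changes (bar) on the receiver fault:
dcfs = coulomb_stress_change(0.10, 0.09)
threshold = 0.1  # commonly cited ~0.1 bar triggering threshold
print(f"dCFS = {dcfs:.3f} bar, exceeds threshold: {dcfs > threshold}")
```

    The hard part in practice is computing Δτ and Δσn from the mainshock slip model and the receiver-fault geometry (e.g. with elastic dislocation codes); the combination step itself is just this one line.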

  13. GEM1: First-year modeling and IT activities for the Global Earthquake Model

    Science.gov (United States)

    Anderson, G.; Giardini, D.; Wiemer, S.

    2009-04-01

    GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global

  14. Sedimentary evidence of historical and prehistorical earthquakes along the Venta de Bravo Fault System, Acambay Graben (Central Mexico)

    Science.gov (United States)

    Lacan, Pierre; Ortuño, María; Audin, Laurence; Perea, Hector; Baize, Stephane; Aguirre-Díaz, Gerardo; Zúñiga, F. Ramón

    2018-03-01

    The Venta de Bravo normal fault is one of the longest structures in the intra-arc fault system of the Trans-Mexican Volcanic Belt. It defines, together with the Pastores Fault, the 80 km long southern margin of the Acambay Graben. We focus on the westernmost segment of the Venta de Bravo Fault and provide new paleoseismological information, evaluate its earthquake history, and assess the related seismic hazard. We analyzed five trenches, distributed at three different sites, in which Holocene surface faulting offsets interbedded volcanoclastic, fluvio-lacustrine and colluvial deposits. Despite the lack of known historical destructive earthquakes along this fault, we found evidence of at least eight earthquakes during the late Quaternary. Our results indicate that this is one of the major seismic sources of the Acambay Graben, capable of producing by itself earthquakes with magnitudes (MW) up to 6.9, with a slip rate of 0.22-0.24 mm/yr and a recurrence interval between 1940 and 2390 years. In addition, a possible multi-fault rupture of the Venta de Bravo Fault together with other faults of the Acambay Graben could result in a MW > 7 earthquake. These new slip rates, earthquake recurrence rates, and estimates of slip per event help advance our understanding of the seismic hazard posed by the Venta de Bravo Fault and provide new parameters for further hazard assessment.
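    The slip rate and recurrence interval reported above imply a characteristic slip per event, via the simple consistency relation slip per event = slip rate × recurrence interval:

```python
def slip_per_event_m(slip_rate_mm_per_yr, recurrence_yr):
    """Characteristic slip per event implied by slip rate x recurrence interval."""
    return slip_rate_mm_per_yr * recurrence_yr / 1000.0

# Ranges quoted in the record above:
lo = slip_per_event_m(0.22, 1940)
hi = slip_per_event_m(0.24, 2390)
print(f"implied slip per event: {lo:.2f}-{hi:.2f} m")
```

    This works out to roughly half a metre of slip per event, a plausible average surface displacement for an MW ~6.9 normal-faulting earthquake.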

  15. Collective properties of injection-induced earthquake sequences: 1. Model description and directivity bias

    Science.gov (United States)

    Dempsey, David; Suckale, Jenny

    2016-05-01

    Induced seismicity is of increasing concern for oil and gas, geothermal, and carbon sequestration operations, with several M > 5 events triggered in recent years. Modeling plays an important role in understanding the causes of this seismicity and in constraining seismic hazard. Here we study the collective properties of induced earthquake sequences and the physics underpinning them. In this first paper of a two-part series, we focus on the directivity ratio, which quantifies whether fault rupture is dominated by one (unilateral) or two (bilateral) propagating fronts. In a second paper, we focus on the spatiotemporal and magnitude-frequency distributions of induced seismicity. We develop a model that couples a fracture mechanics description of 1-D fault rupture with fractal stress heterogeneity and the evolving pore pressure distribution around an injection well that triggers earthquakes. The extent of fault rupture is calculated from the equations of motion for two tips of an expanding crack centered at the earthquake hypocenter. Under tectonic loading conditions, our model exhibits a preference for unilateral rupture and a normal distribution of hypocenter locations, two features that are consistent with seismological observations. On the other hand, catalogs of induced events when injection occurs directly onto a fault exhibit a bias toward ruptures that propagate toward the injection well. This bias is due to relatively favorable conditions for rupture that exist within the high-pressure plume. The strength of the directivity bias depends on a number of factors including the style of pressure buildup, the proximity of the fault to failure and event magnitude. For injection off a fault that triggers earthquakes, the modeled directivity bias is small and may be too weak for practical detection. For two hypothetical injection scenarios, we estimate the number of earthquake observations required to detect directivity bias.
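    The directivity ratio studied above can be sketched as a function of the lengths of the two rupture fronts measured from the hypocenter. This is one common way to define such a ratio; the paper's exact convention may differ.

```python
def directivity_ratio(length_plus, length_minus):
    """|L+ - L-| / (L+ + L-) for the two rupture-front lengths measured from
    the hypocentre: 0 for a perfectly bilateral rupture, 1 for a fully
    unilateral one."""
    return abs(length_plus - length_minus) / (length_plus + length_minus)

print(directivity_ratio(5.0, 5.0))   # perfectly bilateral
print(directivity_ratio(9.0, 1.0))   # strongly unilateral
```

    A catalog-level directivity bias then shows up as an asymmetry in which front (toward or away from the injection well) tends to be the longer one, averaged over many events.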

  16. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic) and, because of this, are ideal for real-time monitoring of fault slip in a region. Real-time GPS networks are an ideal complement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks but measure only accelerations or velocities, leaving them at a significant disadvantage for ascertaining the full extent of slip during a large earthquake in real time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan's GEONET, consisting of about 1200 stations, during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore of Hokkaido Island; the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial magnitude estimate through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
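Peak-ground-displacement (PGD) scaling relationships of the kind mentioned above typically take the form log10(PGD) = A + B·Mw + C·Mw·log10(R) and are inverted for magnitude. A sketch with placeholder coefficients; the published regression values differ and must be substituted before any real use:

```python
import math

# Illustrative scaling law log10(PGD) = A + B*Mw + C*Mw*log10(R).
# A, B, C below are placeholders, NOT published regression coefficients.
A, B, C = -5.0, 1.22, -0.18

def magnitude_from_pgd(pgd_cm, dist_km):
    """Invert the PGD scaling law for moment magnitude, given peak ground
    displacement (cm) and hypocentral distance (km)."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(dist_km))
```

Because PGD grows with magnitude but is measured within minutes, such an inversion provides the rapid initial size estimate that seismic amplitudes alone cannot for the largest events.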

  17. Aftershock Duration of the 1976 Ms 7.8 Tangshan Earthquake: Implication for the Seismic Hazard Model with a Sensitivity Analysis

    Science.gov (United States)

    Zhong, Q.; Shi, B.

    2011-12-01

The disastrous Ms 7.8 earthquake that occurred in Tangshan, China, on July 28, 1976 caused at least 240,000 deaths. The mainshock was followed by its two largest aftershocks: an Ms 7.1 event 15 hr after the mainshock and an Ms 6.9 event on 15 November. The aftershock sequence continues to date, keeping the regional seismic activity rate around the Tangshan main fault much higher than before the main event. If these aftershocks are included in the local main-event catalog for PSHA calculations, the resulting seismic hazard will be overestimated in this region and underestimated elsewhere. However, it is difficult for seismologists to accurately determine the duration of aftershock sequences and to identify aftershocks within a main-event catalog. In this study, using theoretical inference and the empirical relation given by Dieterich, we derive a plausible time length for the aftershock sequence of the Ms 7.8 Tangshan earthquake. A log-log regression based on the empirical Omori relation gives an aftershock duration of about 120 years. In Dieterich's approach, the aftershock duration is a function of the remote shear stressing rate, the normal stress acting on the fault plane, and fault frictional constitutive parameters. In general, the shear stressing rate can be estimated in three ways: 1. It can be written as a function of static stress drop and a mean earthquake recurrence time; in this case the aftershock duration is about 70-100 years, although the recurrence time carries a great deal of uncertainty. 2. Ziv and Rubin derived a general relation between shear stressing rate, fault slip speed, and fault width under the assumption that aftershock duration does not scale with mainshock magnitude; from this consideration, the aftershock duration is about 80 years. 3. The shear stressing rate can also be
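The duration estimate described above, i.e. the time for the aftershock rate to decay to the background seismicity rate, follows directly from the modified Omori law n(t) = K/(t + c)^p. A minimal sketch; the parameter values in the example are arbitrary and would in practice come from regression on the catalog:

```python
def aftershock_duration(K, c, p, background_rate):
    """Time after the mainshock at which the modified-Omori aftershock rate
    n(t) = K / (t + c)**p decays to the background rate. Solving
    K / (T + c)**p = background_rate gives T = (K/r0)**(1/p) - c.
    Units are arbitrary but must be consistent (e.g. events/yr and yr)."""
    return (K / background_rate) ** (1.0 / p) - c
```

With p close to 1, the duration is roughly the ratio of the Omori productivity K to the background rate, which is why sequences above a low-background intraplate fault such as Tangshan can persist for many decades.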

  18. A spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an operational earthquake forecast

    Science.gov (United States)

    Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximillan J.

    2017-01-01

    We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as

  19. Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps

    Science.gov (United States)

    Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.

    2014-12-01

Earthquake Early Warning (EEW) is a system that can provide a few seconds to tens of seconds of warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real time and are providing feedback on their experiences of performance and potential uses within their organizations. Beta user interactions allow the ShakeAlert team to discern which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and, most importantly, what actions users plan to take for various scenarios. Actions could include personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator or fire station doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, along with the funding requirements to implement their automated controls. The use of models and mobile apps is beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.

  20. Synchronization and desynchronization in the Olami-Feder-Christensen earthquake model and potential implications for real seismicity

    Directory of Open Access Journals (Sweden)

    S. Hergarten

    2011-09-01

Full Text Available The Olami-Feder-Christensen model is probably the most studied model in the context of self-organized criticality and reproduces several statistical properties of real earthquakes. We investigate and explain synchronization and desynchronization of earthquakes in this model in the nonconservative regime, and their relevance for the power-law distribution of event sizes (the Gutenberg-Richter law) and for temporal clustering of earthquakes. The power-law distribution emerges from synchronization, and its scaling exponent can be derived as τ = 1.775 from the scaling properties of the rupture areas' perimeters. In contrast, the occurrence of foreshocks and aftershocks according to Omori's law is closely related to desynchronization. This mechanism of foreshock and aftershock generation differs strongly from the widespread idea of spontaneous triggering and suggests why even some large earthquakes are not preceded by any foreshocks in nature.
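The Olami-Feder-Christensen automaton studied above can be sketched in a few lines: a lattice of stress values is driven uniformly until one site reaches the failure threshold, and each failing site transfers a fraction alpha of its stress to its neighbors (alpha < 0.25 is the nonconservative regime; stress crossing an open boundary is lost). This is an illustrative re-implementation, not the author's code:

```python
import numpy as np

def ofc_avalanches(n=32, alpha=0.2, steps=500, seed=0):
    """Minimal Olami-Feder-Christensen cellular automaton with open
    boundaries. Returns the avalanche size (number of topplings)
    produced by each of `steps` driving events."""
    rng = np.random.default_rng(seed)
    F = rng.random((n, n))            # initial random stress; threshold = 1
    sizes = []
    for _ in range(steps):
        F += 1.0 - F.max()            # uniform drive up to the next failure
        size = 0
        unstable = np.argwhere(F >= 1.0)
        while unstable.size:
            for i, j in unstable:
                f = F[i, j]
                F[i, j] = 0.0         # relax the failing site
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < n:
                        F[ii, jj] += alpha * f  # transfer; the rest dissipates
            unstable = np.argwhere(F >= 1.0)    # rescan for cascading failures
        sizes.append(size)
    return sizes
```

Histogramming `sizes` after a long transient is the standard way to probe the Gutenberg-Richter-like power law discussed in the abstract.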

  1. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    NARCIS (Netherlands)

    Cheong, S.A.; Tan, T.L.; Chen, C.-C.; Chang, W.-L.; Liu, Z.; Chew, L.Y.; Sloot, P.M.A.; Johnson, N.F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting

  2. Aseismic blocks and destructive earthquakes in the Aegean

    Science.gov (United States)

    Stiros, Stathis

    2017-04-01

Aseismic areas are identified not only in vast, geologically stable regions, but also within regions of active, intense, distributed deformation such as the Aegean. In the latter, "aseismic blocks" about 200m wide were recognized in the 1990s on the basis of the absence of instrumentally derived earthquake foci, in contrast to surrounding areas. This pattern was supported by the available historical seismicity data, as well as by geologic evidence. Interestingly, GPS evidence indicates that such blocks are among the areas characterized by small deformation rates relative to surrounding areas of higher deformation. Still, the largest and most destructive earthquake of the 1990s, the 1995 M6.6 earthquake, occurred at the center of one of these "aseismic" zones in the northern part of Greece, which was left unprotected against seismic hazard. This was a repeat of the case of the tsunami-associated 1956 Amorgos Island M7.4 earthquake, the largest 20th-century event in the Aegean back-arc region: the 1956 earthquake occurred at the center of a geologically distinct region (the Cyclades Massif in the central Aegean) until then assumed aseismic. Interestingly, after 1956 the overall idea of aseismic regions remained accepted, though a "promontory" of earthquake-prone areas intruding into the aseismic central Aegean was assumed. Exploitation of archaeological excavation evidence and careful, combined analysis of historical and archaeological data and other palaeoseismic, mostly coastal, data indicate that destructive and major earthquakes have left their traces in previously assumed aseismic blocks. In the latter, earthquakes typically recur at relatively long intervals, >200-300 years, much longer than in adjacent active areas. Interestingly, areas assumed aseismic in antiquity are among the most active in the last centuries, while areas hit by major earthquakes in the past are usually classified as areas of low seismic risk in official maps. Some reasons

  3. Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan

    Science.gov (United States)

    Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.

    2017-12-01

An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which we hereafter call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated disproportionately large tsunami waves, with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that it can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996, and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges 100 km northeast of the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached a peak amplitude (1.5-2.0 cm) of leading upward signals followed by continuous oscillations [Fukao et al., 2016]. To model its tsunami source, or sea-surface displacement, we perform tsunami waveform simulations and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are adopted in the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we found the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, the synthetic waves show no initial downward signal such as was observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift, and examine sizes and amplitudes of the main uplift and the subsidence ring. As a result, a model with a main uplift of around 1.0 m and a radius of 4 km, surrounded by a ring of small subsidence, shows good agreement between synthetic and observed waveforms. The results yield two implications for the deformation process that help us to understand
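The source geometry described above, a Gaussian-shaped main uplift ringed by small subsidence, is straightforward to construct as an initial sea-surface condition for a tsunami solver. A sketch with illustrative amplitudes and radii; the published model's exact parameters differ:

```python
import numpy as np

def initial_displacement(n=256, extent_km=30.0, uplift_m=1.0,
                         uplift_radius_km=4.0, ring_amp_m=-0.1,
                         ring_radius_km=8.0, ring_width_km=1.5):
    """Axisymmetric initial sea-surface displacement (m) on an n x n grid:
    a Gaussian main uplift surrounded by a narrow ring of small subsidence.
    All default values are illustrative, not the fitted source parameters."""
    x = np.linspace(-extent_km / 2, extent_km / 2, n)
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)                                  # radial distance (km)
    main = uplift_m * np.exp(-(r / uplift_radius_km) ** 2)
    ring = ring_amp_m * np.exp(-((r - ring_radius_km) / ring_width_km) ** 2)
    return main + ring
```

Such a field would be handed to a Boussinesq solver as the initial condition; shifting its center and tuning the ring amplitude mimics the grid search over source location and geometry described in the abstract.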

  4. Sediment gravity flows triggered by remotely generated earthquake waves

    Science.gov (United States)

    Johnson, H. Paul; Gomberg, Joan S.; Hautala, Susan L.; Salmi, Marie S.

    2017-06-01

    Recent great earthquakes and tsunamis around the world have heightened awareness of the inevitability of similar events occurring within the Cascadia Subduction Zone of the Pacific Northwest. We analyzed seafloor temperature, pressure, and seismic signals, and video stills of sediment-enveloped instruments recorded during the 2011-2015 Cascadia Initiative experiment, and seafloor morphology. Our results led us to suggest that thick accretionary prism sediments amplified and extended seismic wave durations from the 11 April 2012 Mw8.6 Indian Ocean earthquake, located more than 13,500 km away. These waves triggered a sequence of small slope failures on the Cascadia margin that led to sediment gravity flows culminating in turbidity currents. Previous studies have related the triggering of sediment-laden gravity flows and turbidite deposition to local earthquakes, but this is the first study in which the originating seismic event is extremely distant (> 10,000 km). The possibility of remotely triggered slope failures that generate sediment-laden gravity flows should be considered in inferences of recurrence intervals of past great Cascadia earthquakes from turbidite sequences. Future similar studies may provide new understanding of submarine slope failures and turbidity currents and the hazards they pose to seafloor infrastructure and tsunami generation in regions both with and without local earthquakes.

  5. Large earthquake rates from geologic, geodetic, and seismological perspectives

    Science.gov (United States)

    Jackson, D. D.

    2017-12-01

Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology. Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include the temporal behavior of seismic and tectonic moment rates; the shape of the earthquake magnitude distribution; the upper magnitude limit; scaling between rupture length, width, and displacement; depth dependence of stress coupling; the value of crustal rigidity; and the relation between faults at depth and their surface fault traces, to name just a few. In this report I estimate the quantitative implications for estimating large earthquake rates. 
Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes
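The moment-rate equivalence assumption discussed above can be made concrete: scale a truncated Gutenberg-Richter distribution so that its total seismic moment release balances a given tectonic moment rate. A hedged sketch; the b-value, magnitude bounds, and the standard relation log10 M0 = 1.5 Mw + 9.05 (M0 in N·m) are conventional choices, not values taken from this abstract:

```python
import numpy as np

def gr_rates_from_moment_rate(moment_rate, b=1.0, m_min=5.0, m_max=8.0, dm=0.1):
    """Scale a truncated Gutenberg-Richter magnitude distribution so the
    summed seismic moment release balances a tectonic moment rate (N*m/yr).
    Returns magnitude bin centers and annual event rates per bin."""
    mags = np.arange(m_min, m_max + dm / 2, dm)
    rel = 10.0 ** (-b * mags)                 # relative G-R occurrence rates
    moments = 10.0 ** (1.5 * mags + 9.05)     # seismic moment per event (N*m)
    scale = moment_rate / np.sum(rel * moments)
    return mags, scale * rel
```

Because the moment integral is dominated by the largest magnitudes, the resulting rates are highly sensitive to the assumed upper magnitude limit, which is exactly one of the uncertainties the abstract enumerates.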

  6. A preliminary assessment of earthquake ground shaking hazard at Yucca Mountain, Nevada and implications to the Las Vegas region

    Energy Technology Data Exchange (ETDEWEB)

    Wong, I.G.; Green, R.K.; Sun, J.I. [Woodward-Clyde Federal Services, Oakland, CA (United States); Pezzopane, S.K. [Geological Survey, Denver, CO (United States); Abrahamson, N.A. [Abrahamson (Norm A.), Piedmont, CA (United States); Quittmeyer, R.C. [Woodward-Clyde Federal Services, Las Vegas, NV (United States)

    1996-12-31

As part of early design studies for the potential Yucca Mountain nuclear waste repository, the authors have performed a preliminary probabilistic seismic hazard analysis of ground shaking. A total of 88 Quaternary faults within 100 km of the site were considered in the hazard analysis. They were characterized in terms of their probability of being seismogenic, and their geometry, maximum earthquake magnitude, recurrence model, and slip rate. Individual faults were characterized by maximum earthquakes that ranged from moment magnitude (M{sub w}) 5.1 to 7.6. Fault slip rates ranged from a very low 0.00001 mm/yr to as much as 4 mm/yr. An areal source zone representing background earthquakes up to M{sub w} 6 1/4 was also included in the analysis. Recurrence for these background events was based on the 1904-1994 historical record, which contains events up to M{sub w} 5.6. Based on this analysis, the peak horizontal rock accelerations are 0.16, 0.21, 0.28, and 0.50 g for return periods of 500, 1,000, 2,000, and 10,000 years, respectively. In general, the dominant contributors to the ground shaking hazard at Yucca Mountain are background earthquakes, because of the low slip rates of the Basin and Range faults. A significant effect on the probabilistic ground motions is due to the inclusion of a new attenuation relation developed specifically for earthquakes in extensional tectonic regimes. This relation gives significantly lower peak accelerations than the five other, predominantly California-based relations used in the analysis, possibly due to the lower stress drops of extensional earthquakes compared to California events. Because Las Vegas is located within the same tectonic regime as Yucca Mountain, the seismic sources and the path and site factors affecting the seismic hazard at Yucca Mountain also have implications for Las Vegas. These implications are discussed in this paper.
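The return-period/acceleration pairs quoted in the abstract define four points on a hazard curve; intermediate return periods are conventionally read off by log-log interpolation. A sketch using exactly the values quoted above (the interpolation scheme itself is an assumption, not part of the original analysis):

```python
import numpy as np

# Return-period / peak-acceleration pairs quoted in the abstract.
return_periods = np.array([500.0, 1000.0, 2000.0, 10000.0])   # years
pga_g = np.array([0.16, 0.21, 0.28, 0.50])                    # g

def pga_at_return_period(T):
    """Log-log interpolation of the hazard curve at return period T (years)."""
    return 10.0 ** np.interp(np.log10(T), np.log10(return_periods),
                             np.log10(pga_g))
```

For example, a design return period of 5,000 years would fall between the 0.28 g and 0.50 g anchor points.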

  7. Modelling psychological responses to the Great East Japan earthquake and nuclear incident.

    Directory of Open Access Journals (Sweden)

    Robin Goodwin

Full Text Available The Great East Japan (Tōhoku/Kanto) earthquake of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few previous studies have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great Japan earthquake and nuclear incident, with data collected 11-13 weeks following these events. 844 young respondents completed a questionnaire in three regions of Japan: Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events.

  8. Modelling psychological responses to the Great East Japan earthquake and nuclear incident.

    Science.gov (United States)

    Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O

    2012-01-01

    The Great East Japan (Tōhoku/Kanto) earthquake of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few previous studies have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great Japan earthquake and nuclear incident, with data collected 11-13 weeks following these events. 844 young respondents completed a questionnaire in three regions of Japan; Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events.

  9. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  10. Self-Organized Criticality in an Anisotropic Earthquake Model

    Science.gov (United States)

    Li, Bin-Quan; Wang, Sheng-Jun

    2018-03-01

We have made an extensive numerical study of a modified version of the model proposed by Olami, Feder, and Christensen to describe earthquake behavior. Two situations were considered in this paper. In the first, the energy of the unstable site is redistributed to its nearest neighbors randomly rather than evenly, and the site itself is reset to zero. In the second, the energy of the unstable site is redistributed to its nearest neighbors randomly while the site retains some energy instead of being reset to zero. Different boundary conditions were considered as well. By analyzing the distribution of earthquake sizes, we found that self-organized criticality can be excited only in the conservative or approximately conservative cases of the above situations. Some evidence indicates that the critical exponents of both situations and of the original OFC model tend to the same value in the conservative case; the only difference is that avalanche sizes in the original model are larger. This may be closer to the real world; after all, crustal plates vary in size. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5

  11. The link between great earthquakes and the subduction of oceanic fracture zones

    Directory of Open Access Journals (Sweden)

    R. D. Müller

    2012-12-01

Full Text Available Giant subduction earthquakes are known to occur in areas not previously identified as prone to high seismic risk. This highlights the need to better identify subduction zone segments potentially dominated by relatively long (up to 1000 yr and more) recurrence times of giant earthquakes. We construct a model for the geometry of subduction coupling zones and combine it with global geophysical data sets to demonstrate that the occurrence of great (magnitude ≥ 8) subduction earthquakes is strongly biased towards regions associated with intersections of oceanic fracture zones and subduction zones. We use a computational recommendation technology, a type of information filtering system widely used for searching, sorting, classifying, and filtering very large, statistically skewed data sets on the Internet, to demonstrate a robust association and rule out a random effect. Fracture zone-subduction zone intersection regions, representing only 25% of the global subduction coupling zone, are linked with 13 of the 15 largest (magnitude Mw ≥ 8.6) and half of the 50 largest (magnitude Mw ≥ 8.4) earthquakes. In contrast, subducting volcanic ridges and chains are only biased towards smaller earthquakes (magnitude < 8). The associations captured by our statistical analysis can be conceptually related to physical differences between subducting fracture zones and volcanic chains/ridges. Fracture zones are characterised by laterally continuous, uplifted ridges that represent normal ocean crust with a high degree of structural integrity, causing strong, persistent coupling at the subduction interface. Smaller volcanic ridges and chains have a relatively fragile, heterogeneous internal structure and are separated from the underlying ocean crust by a detachment interface, resulting in weak coupling and relatively small earthquakes, providing a conceptual basis for the observed dichotomy.
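The claim that the association is not a random effect can be sanity-checked with a simple binomial tail probability: if the 15 largest events were placed at random, the chance that 13 or more would land in the 25% of the coupling zone occupied by fracture-zone intersections is tiny. This crude check ignores spatial correlation and is far simpler than the recommendation-system analysis used in the paper:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 13 of the 15 largest events in regions covering ~25% of the coupling zone:
p_value = binom_tail(15, 13, 0.25)   # well below one in a hundred thousand
```

Even this back-of-the-envelope test makes a chance coincidence implausible, consistent with the paper's conclusion.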

  12. A Summary of Fault Recurrence and Strain Rates in the Vicinity of the Hanford Site--Topical Report

    Energy Technology Data Exchange (ETDEWEB)

    Bjornstad, Bruce N.; Winsor, Kelsey; Unwin, Stephen D.

    2012-08-01

    This document is one in a series of topical reports compiled by the Pacific Northwest National Laboratory to summarize technical information on selected topics important to the performance of a probabilistic seismic hazard analysis of the Hanford Site. The purpose of this report is to summarize available data and analyses relevant to fault recurrence and strain rates within the Yakima Fold Belt. Strain rates have met with contention in the expert community and may have a significant potential for impact on the seismic hazard estimate at the Hanford Site. This report identifies the alternative conceptual models relevant to this technical issue and the arguments and data that support those models. It provides a brief description of the technical issue and principal uncertainties; a general overview on the nature of the technical issue, along with alternative conceptual models, supporting arguments and information, and uncertainties; and finally, suggests some prospective approaches to reducing uncertainties about earthquake recurrence rates for the Yakima Fold Belt.

  13. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar

    2018-04-17

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.
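The model's structure, log-PGA decomposed into event, station, and event-station terms, can be illustrated with a toy simulation. This is a method-of-moments stand-in for the authors' MCMC inference, and every numeric value below (group sizes, means, standard deviations) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_events, n_stations = 40, 10
mu, tau, phi, sigma = -2.3, 0.5, 0.2, 0.1   # mean log-PGA and effect std devs

event = rng.normal(0.0, tau, n_events)                   # event effects
station = rng.normal(0.0, phi, n_stations)               # station effects
resid = rng.normal(0.0, sigma, (n_events, n_stations))   # event-station effects
log_pga = mu + event[:, None] + station[None, :] + resid

# Recover (centered) station effects by removing each event's mean --
# the key quantity whose variance the Bayesian model shows is nonzero:
demeaned = log_pga - log_pga.mean(axis=1, keepdims=True)
station_hat = demeaned.mean(axis=0)
```

In the full model these variance components get priors and are sampled by MCMC; the point of the toy version is only that station effects are separable from event effects when every event is recorded across the whole array.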

  14. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar; Halldorsson, Benedikt; Hrafnkelsson, Birgir; Jonsson, Sigurjon

    2018-01-01

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.

  15. Children's emotional experience two years after an earthquake: An exploration of knowledge of earthquakes and associated emotions.

    Science.gov (United States)

    Raccanello, Daniela; Burro, Roberto; Hall, Rob

    2017-01-01

    We explored whether and how exposure to a natural disaster such as the 2012 Emilia Romagna earthquake affected the development of children's emotional competence in terms of understanding, regulating, and expressing emotions, two years on, compared with a control group not exposed to the earthquake. We also examined the role of class level and gender. The sample included two groups of children (n = 127) attending primary school: the experimental group (n = 65) experienced the 2012 Emilia Romagna earthquake, while the control group (n = 62) did not. Data collection took place two years after the earthquake, when the children were seven or ten years old. Beyond assessing the children's understanding of emotions and regulation abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of earthquakes and associated emotions, and a structured task on the intensity of some target emotions. We applied Generalized Linear Mixed Models. Exposure to the earthquake did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of earthquakes, emotional language, and emotions associated with earthquakes were, respectively, more complex, frequent, and intense for children who had experienced the earthquake, and at increasing ages. Our data extend the generalizability of theoretical models of children's psychological functioning following disasters, such as the dose-response model and the organizational-developmental model for child resilience, and provide further knowledge of children's emotional resources related to natural disasters, as a basis for planning educational prevention programs.

  16. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
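    The Viterbi decoding step the abstract relies on can be sketched for a generic discrete-output HMM. The two-state model below, with invented parameters, is a stand-in for the trained swallowing/breath models of the paper:

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path for a discrete-output HMM, computed in log space."""
    n_states = log_pi.size
    T = len(obs)
    delta = np.empty((T, n_states))          # best log-prob ending in each state
    psi = np.zeros((T, n_states), dtype=int) # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: transition i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):           # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy 2-state model: state 0 ("breath") mostly emits symbol 0,
# state 1 ("swallow") mostly emits symbol 1.
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
log_B = np.log([[0.9, 0.1], [0.1, 0.9]])
obs = np.array([0, 0, 1, 1, 1, 0, 0])
print(viterbi(obs, log_pi, log_A, log_B))  # [0 0 1 1 1 0 0]
```

    In the paper's setting the observations would be (quantized) recurrence plot features rather than raw symbols, with one trained HMM per sound class.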

  17. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Taken method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  18. Highly variable recurrence of tsunamis in the 7,400 years before the 2004 Indian Ocean tsunami

    Science.gov (United States)

    Horton, B.; Rubin, C. M.; Sieh, K.; Jessica, P.; Daly, P.; Ismail, N.; Parnell, A. C.

    2017-12-01

    The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard. Subsequent research in the Indian Ocean basin has identified prehistoric tsunamis, but the timing and recurrence intervals of such events are uncertain. Here, we identify coastal caves as a new depositional environment for reconstructing tsunami records and present a 5,000-year record of continuous tsunami deposits from a coastal cave in Sumatra, Indonesia, which shows the irregular recurrence of 11 tsunamis between 7,400 and 2,900 years BP. The data demonstrate that the 2004 tsunami was just the latest in a sequence of devastating tsunamis stretching back to at least the early Holocene and suggest a high likelihood of future tsunamis in the Indian Ocean. The sedimentary record in the cave shows that ruptures of the Sunda megathrust vary between large slip failures (which generated the 2004 Indian Ocean tsunami) and smaller ones. The chronology of events suggests the recurrence of multiple smaller tsunamis within relatively short time periods, interrupted by long periods of strain accumulation followed by giant tsunamis. The average time between tsunamis is about 450 years, with intervals ranging from a long, dormant period of over 2,000 years to multiple tsunamis within the span of a century. The very long dormant period suggests that the Sunda megathrust is capable of accumulating large slip deficits between earthquakes. Such a high-slip rupture would produce a substantially larger earthquake than the 2004 event. Although there is evidence that the likelihood of another tsunamigenic earthquake in Aceh province is high, these variable recurrence intervals suggest that long dormant periods may follow Sunda megathrust ruptures as large as that of the 2004 Indian Ocean tsunami. 
The remarkable variability of recurrence suggests that regional hazard mitigation plans should be based upon the high likelihood of future destructive tsunami demonstrated by
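    The quoted ~450-year average follows directly from 11 events spanning 7,400-2,900 years BP: 10 intervals over 4,500 years. A short sketch with hypothetical event ages (the actual dated ages are not given in the abstract) illustrates the arithmetic and the clustered-versus-dormant spread:

```python
import numpy as np

# Hypothetical event ages (years BP), chosen to match the abstract's totals:
# 11 tsunamis between 7,400 and 2,900 years BP.
ages_bp = np.array([7400, 7350, 7250, 5200, 5150, 5050, 4950, 3100, 3050, 2950, 2900])

intervals = -np.diff(ages_bp)            # inter-event times in years
print(intervals.mean())                  # 450.0 = (7400 - 2900) / 10
print(intervals.min(), intervals.max())  # short clustered gaps vs. multi-millennial lulls
```

    The mean is fixed by the endpoints alone, which is why a single average interval can hide the bimodal mix of clusters and long dormant periods the abstract emphasizes.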

  19. Spatial-temporal variation of low-frequency earthquake bursts near Parkfield, California

    Science.gov (United States)

    Wu, Chunquan; Guyer, Robert; Shelly, David R.; Trugman, D.; Frank, William; Gomberg, Joan S.; Johnson, P.

    2015-01-01

    Tectonic tremor (TT) and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments globally in the last decade. The spatial-temporal behaviour of LFEs provides insight into deep fault zone processes. In this study, we examine recurrence times from a 12-yr catalogue of 88 LFE families with ∼730 000 LFEs in the vicinity of the Parkfield section of the San Andreas Fault (SAF) in central California. We apply an automatic burst detection algorithm to the LFE recurrence times to identify the clustering behaviour of LFEs (LFE bursts) in each family. We find that the burst behaviours of the northern and southern LFE groups differ. Generally, the northern group has longer burst durations but fewer LFEs per burst, while the southern group has shorter burst durations but more LFEs per burst. The southern group LFE bursts are generally more correlated than those of the northern group, suggesting more coherent deep fault slip and a relatively simpler deep fault structure beneath the locked section of the SAF. We also find that the 2004 Parkfield earthquake clearly increased the number of LFEs per burst and the average burst duration for both the northern and the southern groups, with a relatively larger effect on the northern group. This could be due to the weakness of the northern part of the fault, or to the northwesterly rupture direction of the Parkfield earthquake.
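    A minimal inter-event-time burst detector of the kind the abstract describes might look as follows. This is a simple gap-threshold rule on invented occurrence times; the study's actual algorithm is not specified here:

```python
import numpy as np

def detect_bursts(times, gap):
    """Group event times into bursts: successive events closer than `gap` share a burst."""
    times = np.asarray(times)
    burst_break = np.diff(times) > gap                 # True where one burst ends
    labels = np.concatenate(([0], np.cumsum(burst_break)))
    return [times[labels == k] for k in range(labels[-1] + 1)]

# Invented LFE occurrence times (days): two bursts separated by a quiet interval
t = [0.0, 0.1, 0.3, 10.0, 10.2, 10.5, 10.6]
bursts = detect_bursts(t, gap=1.0)
print(len(bursts))                                  # 2
print([len(b) for b in bursts])                     # [3, 4]
print([round(b[-1] - b[0], 3) for b in bursts])     # [0.3, 0.6] burst durations
```

    Burst duration and events-per-burst, the two quantities contrasted between the northern and southern groups, fall straight out of such a grouping.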

  20. Earthquake Safety Tips in the Classroom

    Science.gov (United States)

    Melo, M. O.; Maciel, B. A. P. C.; Neto, R. P.; Hartmann, R. P.; Marques, G.; Gonçalves, M.; Rocha, F. L.; Silveira, G. M.

    2014-12-01

    The catastrophes induced by earthquakes are among the most devastating ones, causing an elevated number of human losses and economic damages. But we have to keep in mind that earthquakes don't kill people, buildings do. Earthquakes can't be predicted, and the only way of dealing with their effects is to teach society how to be prepared for them and how to deal with their consequences. In spite of being exposed to moderate and large earthquakes, most of the Portuguese are little aware of seismic risk, mainly due to the long recurrence intervals between strong events. The acquisition of safe and correct attitudes before, during and after an earthquake is relevant for human security. Children play a determinant role in the establishment of a real and long-lasting "culture of prevention", both through action and new attitudes. On the other hand, when children assume correct behaviors, their relatives often change their incorrect behaviors to mimic the correct behaviors of their kids. In the framework of a Parents-in-Science initiative, we started with bi-monthly sessions for children aged 5 - 6 years old and 9 - 10 years old. These sessions, in which parents, teachers and high-school students participate, became part of the school's permanent activities. We start with a short introduction to the Earth and to earthquakes through storytelling, using simple science activities to trigger children's curiosity. For safety purposes, we focus on how crucial it is for children to know basic information about themselves and to define, with their families, an emergency communications plan in case family members are separated. Using a shaking table, we teach them how to protect themselves during an earthquake. We then finish with the preparation of an individual emergency kit. This presentation will highlight the importance of encouraging preventive actions in order to reduce the impact of earthquakes on society. This project is developed by science high-school students and teachers, in

  1. Antioptimization of earthquake exitation and response

    Directory of Open Access Journals (Sweden)

    G. Zuccaro

    1998-01-01

    Full Text Available The paper presents a novel approach to predict the response of earthquake-excited structures. The earthquake excitation is expanded in terms of a series of deterministic functions. The coefficients of the series are represented as a point in N-dimensional space. Each available accelerogram at a certain site is then represented as a point in the above space, modeling the available fragmentary historical data. The minimum volume ellipsoid, containing all points, is constructed. The ellipsoidal models of uncertainty, pertinent to earthquake excitation, are developed. The maximum response of a structure, subjected to the earthquake excitation, within ellipsoidal modeling of the latter, is determined. This procedure of determining the least favorable response was termed in the literature (Elishakoff, 1991) as antioptimization. It appears that under inherent uncertainty of earthquake excitation, antioptimization analysis is a viable alternative to the stochastic approach.
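    For a response that is linear in the excitation coefficients, the antioptimization over an ellipsoidal uncertainty set has a closed form: the maximum of g·x over {x : (x - c)^T W (x - c) <= 1} is g·c + sqrt(g^T W^-1 g). A sketch with invented numbers, checked against boundary sampling (the paper's structures and excitation series are of course more involved):

```python
import numpy as np

def worst_case_response(g, c, W):
    """Max of g @ x over the ellipsoid {x : (x - c) @ W @ (x - c) <= 1}."""
    return g @ c + np.sqrt(g @ np.linalg.solve(W, g))

g = np.array([1.0, -2.0, 0.5])   # sensitivity of the response to each coefficient
c = np.array([0.2, 0.0, -0.1])   # ellipsoid center fitted to historical records
W = np.diag([4.0, 1.0, 25.0])    # shape matrix: tighter along better-known axes

bound = worst_case_response(g, c, W)
print(round(bound, 3))           # 2.214

# Sanity check: no point on the ellipsoid boundary exceeds the analytic bound
rng = np.random.default_rng(1)
u = rng.normal(size=(100000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
x = c + u / np.sqrt(np.diag(W))  # unit sphere mapped onto the ellipsoid surface
print((x @ g).max() <= bound)    # True
```

    This is why the minimum-volume ellipsoid is convenient: once it is fitted to the historical points, the least favorable response is available without searching.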

  2. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  3. Neo-deterministic definition of earthquake hazard scenarios: a multiscale application to India

    Science.gov (United States)

    Peresan, Antonella; Magrin, Andrea; Parvez, Imtiyaz A.; Rastogi, Bal K.; Vaccari, Franco; Cozzini, Stefano; Bisignano, Davide; Romanelli, Fabio; Panza, Giuliano F.; Ashish, Mr; Mir, Ramees R.

    2014-05-01

    The development of effective mitigation strategies requires scientifically consistent estimates of seismic ground motion; recent analysis, however, showed that the performance of the classical probabilistic approach to seismic hazard assessment (PSHA) is very unsatisfactory in anticipating ground shaking from future large earthquakes. Moreover, due to their basic heuristic limitations, the standard PSHA estimates are by far unsuitable when dealing with the protection of critical structures (e.g. nuclear power plants) and cultural heritage, where it is necessary to consider extremely long time intervals. Nonetheless, the persistence in resorting to PSHA is often explained by the need to deal with uncertainties related to ground shaking and earthquake recurrence. We show that current computational resources and physical knowledge of the seismic wave generation and propagation processes, along with the improving quantity and quality of geophysical data, nowadays allow for viable numerical and analytical alternatives to the use of PSHA. The advanced approach considered in this study, namely NDSHA (neo-deterministic seismic hazard assessment), is based on the physically sound definition of a wide set of credible scenario events and accounts for uncertainties and earthquake recurrence in a substantially different way. The expected ground shaking due to a wide set of potential earthquakes is defined by means of full waveform modelling, based on the possibility to efficiently compute synthetic seismograms in complex laterally heterogeneous anelastic media. In this way a set of ground motion scenarios can be defined at both the national and the local scale, the latter considering the 2D and 3D heterogeneities of the medium travelled by the seismic waves. The efficiency of the NDSHA computational codes allows for the fast generation of hazard maps at the regional scale even on a modern laptop computer. 
At the scenario scale, quick parametric studies can be easily

  4. A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva

    2018-03-01

    The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. The occurrence of large earthquakes then gives way to quiescence, during which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption of constant hazard of random earthquake occurrence in conventional seismic hazard analysis. In the present study, a time-dependent recurrence model is proposed to represent series of large earthquakes that occur in clusters. The model is flexible enough to reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard is estimated by a hazard function comprising three parts: a decreasing hazard associated with the last large-earthquake cluster, an increasing hazard for the next large-earthquake cluster, and a constant hazard for the random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals is calculated for illustrative purposes.
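    The three-part hazard function can be illustrated with toy shapes; the decaying, increasing, and constant terms below are invented functional forms with made-up parameters, not the paper's:

```python
import numpy as np

def hazard(t, c=5.0, p=1.1, k=0.02, t_mean=200.0, h0=1e-3):
    """Toy three-part hazard rate (1/yr) as a function of years since the last cluster."""
    h_prev = k / (t + c) ** p                  # decaying hazard of the last cluster
    h_next = 10.0 * h0 * (t / t_mean) ** 2     # slowly increasing hazard of the next cluster
    return h_prev + h_next + h0                # plus constant background hazard

t = np.linspace(0.0, 400.0, 401)
h = hazard(t)
print(h[0] > h[50])     # True: early on, the decaying term dominates
print(h[400] > h[200])  # True: later, the increasing term dominates
```

    The resulting bathtub-shaped curve captures the qualitative behavior the abstract describes: elevated hazard just after a cluster, a quiet middle period, and rising hazard as the next cluster approaches.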

  5. Thermomechanical earthquake cycle simulations with rate-and-state friction and nonlinear viscoelasticity

    Science.gov (United States)

    Allison, K. L.; Dunham, E. M.

    2017-12-01

    We simulate earthquake cycles on a 2D strike-slip fault, modeling both rate-and-state fault friction and an off-fault nonlinear power-law rheology. The power-law rheology involves an effective viscosity that is a function of temperature and stress, and therefore varies both spatially and temporally. All phases of the earthquake cycle are simulated, allowing the model to spontaneously generate earthquakes, and to capture frictional afterslip and postseismic and interseismic viscous flow. We investigate the interaction between fault slip and bulk viscous flow, using experimentally-based flow laws for quartz-diorite in the crust and olivine in the mantle, representative of the Mojave Desert region in Southern California. We first consider a suite of three linear geotherms which are constant in time, with dT/dz = 20, 25, and 30 K/km. Though the simulations produce very different deformation styles in the lower crust, ranging from significant interseismic fault creep to purely bulk viscous flow, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. This indicates that bulk viscous flow and interseismic fault creep load the brittle crust similarly. The simulations also predict unrealistically high stresses in the upper crust, resulting from the fact that the lower crust and upper mantle are relatively weak far from the fault, and from the relatively small role that basal tractions on the base of the crust play in the force balance of the lithosphere. We also find that for the warmest model, the effective viscosity varies by an order of magnitude in the interseismic period, whereas for the cooler models it remains roughly constant. Because the rheology is highly sensitive to changes in temperature, in addition to the simulations with constant temperature we also consider the effect of heat generation. 
We capture both frictional heat generation and off-fault viscous shear heating, allowing these in turn to alter the
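    The stress- and temperature-dependent effective viscosity of a power-law rheology can be written eta_eff = sigma / (2 * strain_rate) with strain_rate = A * sigma^n * exp(-Q/(R*T)). A sketch with illustrative parameters (not the paper's calibrated flow laws) shows the strong temperature sensitivity the abstract mentions:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def eta_eff(sigma, T, A=1e-20, n=3.0, Q=2.6e5):
    """Effective viscosity (Pa s) for power-law creep with strain rate A*sigma**n*exp(-Q/(R*T)).

    A, n, Q are illustrative quartz-like values, not the paper's flow-law parameters.
    """
    strain_rate = A * sigma**n * np.exp(-Q / (R * T))
    return sigma / (2.0 * strain_rate)

sigma = 50e6  # 50 MPa differential stress
print(eta_eff(sigma, 800.0) / eta_eff(sigma, 900.0) > 10.0)          # True: hotter is much weaker
print(round(eta_eff(2 * sigma, 800.0) / eta_eff(sigma, 800.0), 12))  # 0.25: eta ~ sigma**(1 - n)
```

    Because the exponential term in temperature dominates, modest heating from frictional or viscous shear can shift the effective viscosity by an order of magnitude, which is why the heat-generation runs matter.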

  6. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  7. Moment Magnitudes and Local Magnitudes for Small Earthquakes: Implications for Ground-Motion Prediction and b-values

    Science.gov (United States)

    Baltay, A.; Hanks, T. C.; Vernon, F.

    2016-12-01

    We illustrate two essential consequences of the systematic difference between moment magnitude and local magnitude for small earthquakes, illuminating the underlying earthquake physics. Moment magnitude, M = (2/3) log M0 − 10.7, is uniformly valid for all earthquake sizes [Hanks and Kanamori, 1979]. However, the relationship between local magnitude ML and moment is itself magnitude dependent: for moderate events ML tracks M, but for small earthquakes ML scales directly with log M0, because their corner frequencies exceed fmax. Just as importantly, if this relation is overlooked, prediction of large-magnitude ground motion from small earthquakes will be misguided. We also consider the effect of this magnitude-scale difference on the b-value. The oft-cited b-value of 1 should hold for small magnitudes, given M; use of ML necessitates b = 2/3 for the same data set, and use of mixed, or unknown, magnitudes complicates the matter further. This is of particular import when estimating the rate of large earthquakes when one has limited data on their recurrence, as is the case for induced earthquakes in the central US.
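    The b-value consequence can be checked numerically: if a catalog follows Gutenberg-Richter with b = 1 in moment magnitude, and for small events ML grows 1.5 times as fast as M, then the same events yield b = 2/3 when measured in ML. A sketch with a synthetic catalog and Aki's maximum-likelihood estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic complete catalog: Gutenberg-Richter with b = 1 in moment magnitude M
b = 1.0
M = rng.exponential(1.0 / (b * np.log(10.0)), 200_000)

# For small events ML scales with log M0 itself, i.e. ML grows ~1.5x faster than M
ML = 1.5 * M

def b_value(mags):
    """Aki's maximum-likelihood b-value estimator (completeness magnitude 0 here)."""
    return 1.0 / (np.log(10.0) * mags.mean())

print(b_value(M))   # ~ 1.0
print(b_value(ML))  # ~ 0.67 = 2/3: same events, measured in ML units
```

    Mixing the two scales in one catalog would therefore bend the frequency-magnitude curve even if the underlying seismicity is perfectly Gutenberg-Richter.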

  8. Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions

    Science.gov (United States)

    Kim, A.; Dreger, D.; Larsen, S.

    2008-12-01

    We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects the finite fault inverse solutions. Various studies (e.g. Michaels and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. The fault zone at Parkfield is also wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and 2004 Parkfield earthquakes. For high resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions of the 1989 Loma Prieta earthquake using both 1D and 3D Green's functions with the same source parameterization and data, and found that the models were quite different. This indicates that the choice of the velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions. 
In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0
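    Once the Green's functions are precomputed, the finite-fault problem is linear in slip: recorded motions are the Green's function matrix times the slip vector. A schematic least-squares sketch, with a random stand-in matrix and invented slip (real Green's functions come from the 3D simulations described above):

```python
import numpy as np

rng = np.random.default_rng(3)
n_stations, n_subfaults = 30, 8

# Stand-in for the precomputed Green's functions: column j holds the response at
# all stations to unit slip on subfault j.
G = rng.normal(size=(n_stations, n_subfaults))
slip_true = np.array([0.0, 0.1, 0.6, 1.2, 0.9, 0.3, 0.1, 0.0])   # invented slip (m)

u = G @ slip_true + rng.normal(0.0, 0.01, n_stations)  # noisy "recorded" motions

# Least-squares slip inversion
slip_est, *_ = np.linalg.lstsq(G, u, rcond=None)
print(np.allclose(slip_est, slip_true, atol=0.05))  # True
```

    Errors in G (i.e., in the assumed velocity model) map directly into the recovered slip, which is the point of comparing 1D and 3D Green's functions.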

  9. The use of the Finite Element method for the earthquakes modelling in different geodynamic environments

    Science.gov (United States)

    Castaldo, Raffaele; Tizzani, Pietro

    2016-04-01

    Many numerical models have been developed to simulate the deformation and stress changes associated with the faulting process, an important topic in fracture mechanics. In the proposed study, we investigate the impact of the deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated with different earthquake phenomena. We exploit structural-geological data in a Finite Element environment through an optimization procedure. In this framework, we model the failure processes in a physical mechanical scenario to evaluate the kinematics associated with the Mw 6.1 L'Aquila 2009 earthquake (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 earthquakes (Italy) and the Mw 8.3 Gorkha 2015 earthquake (Nepal). These seismic events are representative of different tectonic scenarios: normal, reverse and thrust faulting processes, respectively. In order to simulate the kinematics of the analyzed natural phenomena, we assume, under the plane stress approximation (a state of stress in which the normal stress σz and the shear stresses σxz and σyz, directed perpendicular to the x-y plane, are assumed to be zero), linear elastic behavior of the involved media. The finite element procedure consists of two stages: (i) compacting under the weight of the rock successions (gravity loading), the deformation model reaches a stable equilibrium; (ii) the co-seismic stage simulates, through a distributed slip along the active fault, the released stresses. To constrain the model solutions, we exploit the DInSAR deformation velocity maps retrieved from satellite data acquired by old and new generation sensors, such as ENVISAT, RADARSAT-2 and SENTINEL 1A, encompassing the studied earthquakes. More specifically, we first generate several 2D forward mechanical models, then compare these with the recorded ground deformation fields in order to select the best boundary settings and parameters. 
Finally

  10. Neural Machine Translation with Recurrent Attention Modeling

    OpenAIRE

    Yang, Zichao; Hu, Zhiting; Deng, Yuntian; Dyer, Chris; Smola, Alex

    2016-01-01

    Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relat...

  11. Pre-Clinical Model to Study Recurrent Venous Thrombosis in the Inferior Vena Cava.

    Science.gov (United States)

    Andraska, Elizabeth A; Luke, Catherine E; Elfline, Megan A; Henke, Samuel P; Madapoosi, Siddharth S; Metz, Allan K; Hoinville, Megan E; Wakefield, Thomas W; Henke, Peter K; Diaz, Jose A

    2018-06-01

     Patients who experience deep vein thrombosis (VT) have over 30% recurrence, directly increasing their risk of post-thrombotic syndrome. Current murine models of inferior vena cava (IVC) VT host a single thrombosis event.  We aimed to develop a murine model to study recurrent IVC VT in mice.  An initial VT was induced using the electrolytic IVC model (EIM) with constant blood flow. This approach takes advantage of the restored vein lumen 21 days after a single VT event in the EIM, as demonstrated by ultrasound. We then induced a second VT 21 days later, using either EIM or an IVC ligation model for comparison. The control groups were a sham surgery and, 21 days later, either EIM or IVC ligation. The IVC wall and thrombus were harvested 2 days after the second insult and analysed for IVC and thrombus size, gene expression of fibrotic markers, histology for collagen and Western blot for citrullinated histone 3 (Cit-H3) and fibrin.  Ultrasound confirmed the first VT and its progressive resolution, with an anatomical channel allowing room for the second thrombus by day 21. As compared with a primary VT, recurrent VT has heavier walls, with significant up-regulation of transforming growth factor-β (TGF-β), elastin, interleukin (IL)-6, matrix metallopeptidase 9 (MMP9) and MMP2, and a thrombus with high Cit-H3 and fibrin content.  Experimental recurrent thrombi are structurally and compositionally different from primary VT, with a more pro-fibrotic vein wall remodelling profile. This work provides a recurrent-VT IVC model that will help to improve the current understanding of the biological mechanisms, and inform directed treatment, of recurrent VT. Schattauer GmbH Stuttgart.

  12. Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model

    Science.gov (United States)

    Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza

    2017-08-01

    Producing accurate seismic hazard maps and predicting hazardous areas are necessary for risk mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data even when the number of data points is limited. Earthquake patterns in the Zagros have been assessed for intervals of 10 and 50 years using the fuzzy rule-based model, and the Molchan statistical procedure has been used to show that the forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. Notably, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan) and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
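    A fuzzy rule-based predictor of this general kind can be sketched with a toy rule base: triangular memberships and Sugeno-style weighted rule outputs, all invented, since the paper's actual inputs and rules are not given here:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def hazard_score(event_rate):
    """Toy Sugeno-style rules mapping a seismicity rate to a hazard score in [0, 1]."""
    low = tri(event_rate, -1.0, 0.0, 5.0)
    med = tri(event_rate, 2.0, 5.0, 8.0)
    high = tri(event_rate, 5.0, 10.0, 21.0)
    rules = [(low, 0.1), (med, 0.5), (high, 0.9)]   # IF rate is X THEN hazard is y
    num = sum(w * y for w, y in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0               # weighted-average defuzzification

print(hazard_score(1.0) < hazard_score(9.0))  # True: more activity, higher hazard
```

    A real system would combine several inputs (rates, magnitudes, tectonic indicators) and many overlapping rules, but the firing-strength-weighted defuzzification step is the same.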

  13. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  14. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hemphill, Geralyn M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.

  15. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    Science.gov (United States)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
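
    The sequence statistics described above can be sketched on a synthetic catalog. The snippet below draws a Poisson stand-in for a long simulated catalog of large events (the rate and span are arbitrary choices, not RSQSim output) and estimates the conditional probability that one large event is followed by another within a fixed window.

```python
import random

random.seed(42)

# Poisson stand-in for a long simulated catalog of large events:
# one M >= 7 event per ~100 yr over a million-year span (arbitrary
# illustration values, not RSQSim output).
rate, span = 0.01, 1_000_000
t, times = 0.0, []
while t < span:
    t += random.expovariate(rate)
    times.append(t)

def prob_follow_on(times, dt):
    """Fraction of events followed by another event within dt years."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(g <= dt for g in gaps) / len(gaps)

p = prob_follow_on(times, 10.0)
print(round(p, 3))  # near 1 - exp(-rate*dt) ≈ 0.095 for a memoryless process
```

    A physics-based catalog with fault interaction would show clustering, i.e. a follow-on probability well above this memoryless baseline; that excess is exactly what forecast comparisons of the kind described here quantify.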

  16. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    Science.gov (United States)

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip.

  17. Discrete element modeling of triggered slip in faults with granular gouge: application to dynamic earthquake triggering

    International Nuclear Information System (INIS)

    Ferdowsi, B.

    2014-01-01

    Recent seismological observations based on new, more sensitive instrumentation show that seismic waves radiated from large earthquakes can trigger other earthquakes globally. This phenomenon is called dynamic earthquake triggering and is well documented for over 30 of the largest earthquakes worldwide. Granular materials are at the core of mature earthquake faults and play a key role in fault triggering by exhibiting a rich nonlinear response to external perturbations. The stick-slip dynamics of sheared granular layers is analogous to the seismic cycle of earthquake fault systems. In this research effort, we characterize the macroscopic-scale statistics and the grain-scale mechanisms of triggered slip in sheared granular layers. We model the granular fault gouge using three-dimensional discrete element method simulations. The modeled granular system is put into stick-slip dynamics by applying a confining pressure and a shear load. Dynamic triggering is simulated by perturbing the spontaneous stick-slip dynamics with an external vibration applied to the boundary of the layer. The influence of the triggering consists of frictional weakening during the vibration interval, a clock advance of the next expected large slip event, and long-term effects in the form of suppression and recovery of the energy released from the granular layer. Our study suggests that above a critical amplitude, vibration causes a significant clock advance of large slip events. We link this clock advance to a major decline in the slipping contact ratio as well as a decrease in shear modulus and weakening of the granular gouge layer. We also observe that shear vibration is less effective in perturbing the stick-slip dynamics of the granular layer. Our study suggests that in order to be effective, the input vibration must also explore the granular layer at length scales comparable to or smaller than the average grain size.
The energy suppression and the subsequent recovery and increased

  18. Extreme value distribution of earthquake magnitude

    Science.gov (United States)

    Zi, Jun Gan; Tung, C. C.

    1983-07-01

    Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Adler-Lomnitz and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
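
    The core relation in derivations of this kind, that the maximum of n independent magnitudes has distribution F(m)^n, can be checked numerically. The snippet below assumes a Gutenberg-Richter (exponential) magnitude distribution with illustrative parameters; it is not the paper's specific energy-release model.

```python
import math
import random

random.seed(0)
b, m0, n = 1.0, 5.0, 100       # illustrative G-R b-value, cutoff, events/yr
beta = b * math.log(10)

def F_max(m):
    """P(annual maximum <= m) when magnitudes are iid exponential above m0."""
    return (1 - math.exp(-beta * (m - m0))) ** n

# Monte Carlo check against simulated annual maxima.
maxima = [max(m0 + random.expovariate(beta) for _ in range(n))
          for _ in range(5000)]
theory = F_max(7.0)
empirical = sum(m <= 7.0 for m in maxima) / len(maxima)
print(round(theory, 3), round(empirical, 3))
```

    With these parameters the closed form gives about 0.366, and the simulated maxima agree to within Monte Carlo error; fitting the parameters of F_max to observed maxima is the least-squares step the abstract describes.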

  19. Forecasting Induced Seismicity Using Saltwater Disposal Data and a Hydromechanical Earthquake Nucleation Model

    Science.gov (United States)

    Norbeck, J. H.; Rubinstein, J. L.

    2017-12-01

    The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 21.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.
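
    The seismicity-rate evolution invoked above can be sketched with Dieterich's (1994) rate-and-state model, in which the rate relaxes toward the ratio of current to background stressing rate over a characteristic time A*sigma/Sdot. The parameter values below are illustrative, not the study's calibration.

```python
# Euler integration of Dieterich's (1994) seismicity-rate model:
#   dgamma/dt = (1 - gamma*Sdot) / (A*sigma),   R = r0 / (gamma * Sdot0)
# All parameter values are illustrative, not the study's calibration.
A_sigma = 0.1        # MPa, constitutive parameter x normal stress
Sdot0 = 0.001        # background stressing rate, MPa/yr
Sdot = 0.01          # elevated stressing rate during disposal, MPa/yr
r0 = 1.0             # background seismicity rate, events/yr

gamma = 1.0 / Sdot0  # steady state under the background stressing rate
dt = 0.01
for _ in range(10_000):          # 10,000 steps of 0.01 yr = 100 yr
    gamma += dt * (1 - gamma * Sdot) / A_sigma
R = r0 / (gamma * Sdot0)
print(round(R, 2))  # → 10.0 (approaches Sdot/Sdot0, i.e. 10x background)
```

    The transient timescale here is A_sigma/Sdot = 10 yr, which is why the rate needs several decades to reach the new steady value; the inverse dependence of that timescale on stressing rate is the behavior the abstract highlights.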

  20. Rapid post-earthquake modelling of coseismic landslide intensity and distribution for emergency response decision support

    Directory of Open Access Journals (Sweden)

    T. R. Robinson

    2017-09-01

    Current methods to identify coseismic landslides immediately after an earthquake using optical imagery are too slow to effectively inform emergency response activities. Issues with cloud cover, data collection and processing, and manual landslide identification mean even the most rapid mapping exercises are often incomplete when the emergency response ends. In this study, we demonstrate how traditional empirical methods for modelling the total distribution and relative intensity (in terms of point density) of coseismic landsliding can be successfully undertaken in the hours and days immediately after an earthquake, allowing the results to effectively inform stakeholders during the response. The method uses fuzzy logic in a geographic information system (GIS) to quickly assess and identify the location-specific relationships between predisposing factors and landslide occurrence during the earthquake, based on small initial samples of identified landslides. We show that this approach can accurately model both the spatial pattern and the number density of landsliding from the event based on just several hundred mapped landslides, provided they have sufficiently wide spatial coverage, improving upon previous methods. This suggests that systematic high-fidelity mapping of landslides following an earthquake is not necessary for informing rapid modelling attempts. Instead, mapping should focus on rapid sampling from the entire affected area to generate results that can inform the modelling. This method is therefore suited to conditions in which imagery is affected by partial cloud cover or in which the total number of landslides is so large that mapping requires significant time to complete. The method thus has the potential to provide a quick assessment of landslide hazard after an earthquake and may inform emergency operations more effectively than current practice.
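
    The fuzzy-logic overlay at the heart of this kind of susceptibility modelling can be sketched per map cell: convert each predisposing factor to a membership value in [0, 1] and combine them with a fuzzy gamma operator. Every threshold below is invented for illustration; the study derives its memberships from the initial landslide sample.

```python
def linear_membership(x, lo, hi):
    """0 below lo, 1 above hi, linear in between."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def fuzzy_gamma(memberships, gamma=0.9):
    """Blend of fuzzy algebraic product and fuzzy algebraic sum,
    a standard GIS overlay operator."""
    prod, inv = 1.0, 1.0
    for m in memberships:
        prod *= m
        inv *= 1.0 - m
    fuzzy_sum = 1.0 - inv
    return (fuzzy_sum ** gamma) * (prod ** (1.0 - gamma))

# Hypothetical cell: 35-degree slope, 2 km from the rupture, 0.4 g shaking.
# All membership thresholds are invented for illustration.
m_slope = linear_membership(35, 10, 45)     # steeper -> more susceptible
m_dist = 1 - linear_membership(2, 0, 20)    # nearer -> more susceptible
m_pga = linear_membership(0.4, 0.05, 0.6)   # stronger shaking -> more susceptible
score = fuzzy_gamma([m_slope, m_dist, m_pga])
print(round(score, 3))
```

    Mapping this score over every cell in the affected area yields the relative-intensity surface described above; only the membership curves need re-tuning as new landslides are sampled.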

  1. 1/f and the Earthquake Problem: Scaling constraints that facilitate operational earthquake forecasting

    Science.gov (United States)

    Yoder, M. R.; Rundle, J. B.; Turcotte, D. L.

    2012-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or "1/f", nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this "1/f problem," it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitude of earthquake the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type problems can be constrained from scaling relations and finite extents. [Figure: record-breaking hazard map of southern California, 2012-08-06; "warm" colors indicate local acceleration (elevated hazard).]
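
    One of the scaling constraints described above, relating catalog size to local magnitude potential, follows directly from Gutenberg-Richter statistics. The one-liner below illustrates it; the completeness magnitude and b-value are assumptions for illustration, not values from the paper.

```python
import math

# With log10(N) = a - b*m, the expected largest of N events above the
# completeness magnitude m_c is roughly m_c + log10(N)/b.
# m_c and b below are illustrative assumptions.
def expected_max_magnitude(n_events, m_c=3.0, b=1.0):
    return m_c + math.log10(n_events) / b

print(expected_max_magnitude(1000))  # → 6.0
```

    Inverting the same relation, fixing the target magnitude, determines how large (in events, hence in space and time) a catalog partition must be, which is the catalog-constraint idea the abstract develops.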

  2. Radon anomalies prior to earthquakes (2). Atmospheric radon anomaly observed before the Hyogoken-Nanbu earthquake

    International Nuclear Information System (INIS)

    Ishikawa, Tetsuo; Tokonami, Shinji; Yasuoka, Yumi; Shinogi, Masaki; Nagahama, Hiroyuki; Omori, Yasutaka; Kawada, Yusuke

    2008-01-01

    Before the 1995 Hyogoken-Nanbu earthquake, various geochemical precursors were observed in the aftershock area: chloride ion concentration, groundwater discharge rate, groundwater radon concentration and so on. Kobe Pharmaceutical University (KPU) is located about 25 km northeast from the epicenter and within the aftershock area. Atmospheric radon concentration had been continuously measured from 1984 at KPU, using a flow-type ionization chamber. The radon concentration data were analyzed using the smoothed residual values which represent the daily minimum of radon concentration with the exclusion of normalized seasonal variation. The radon concentration (smoothed residual values) demonstrated an upward trend about two months before the Hyogoken-Nanbu earthquake. The trend can be well fitted to a log-periodic model related to earthquake fault dynamics. As a result of model fitting, a critical point was calculated to be between 13 and 27 January 1995, which was in good agreement with the occurrence date of earthquake (17 January 1995). The mechanism of radon anomaly before earthquakes is not fully understood. However, it might be possible to detect atmospheric radon anomaly as a precursor before a large earthquake, if (1) the measurement is conducted near the earthquake fault, (2) the monitoring station is located on granite (radon-rich) areas, and (3) the measurement is conducted for more than several years before the earthquake to obtain background data. (author)
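
    The log-periodic fit used to locate the critical point can be sketched as follows. The functional form is the standard one for log-periodic analyses, but every parameter value here is invented, and a coarse grid search over the critical time stands in for a proper least-squares fit.

```python
import math

# Log-periodic form often fitted to precursory signals:
#   y(t) = A + B*(tc - t)**m * (1 + C*cos(omega*ln(tc - t) + phi))
# All parameter values below are invented for illustration.
def log_periodic(t, tc, A=1.0, B=-0.5, m=0.4, C=0.2, omega=6.0, phi=0.0):
    dt = tc - t
    return A + B * dt ** m * (1 + C * math.cos(omega * math.log(dt) + phi))

# Synthesize a record whose critical time is tc = 100, then recover tc
# by grid-search least squares.
data = [(t, log_periodic(t, 100.0)) for t in range(0, 95)]
candidates = [96 + 0.5 * k for k in range(40)]          # 96.0 .. 115.5
best_tc = min((sum((y - log_periodic(t, tc)) ** 2 for t, y in data), tc)
              for tc in candidates)[1]
print(best_tc)  # → 100.0
```

    In the study, the analogous fit to the smoothed radon residuals placed the critical point within days of the actual earthquake date.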

  3. Sensing the earthquake

    Science.gov (United States)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the earthquakes. We try to implement these results into a choreographic model with the aim to convert earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the earthquake phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  4. Modelling end-glacial earthquakes at Olkiluoto. Expansion of the 2010 study

    Energy Technology Data Exchange (ETDEWEB)

    Faelth, B.; Hoekmark, H. [Clay Technology AB, Lund (Sweden)]

    2012-02-15

    The present report is an extension of Posiva working report 2011-13: 'Modelling end-glacial earthquakes at Olkiluoto'. The modelling methodology and most parameter values are identical to those used in that report. The main objective is the same: to obtain conservative estimates of fracture shear displacements induced by end-glacial earthquakes occurring on verified deformation zones at the Olkiluoto site. The remotely activated rock fractures (with their fracture centres positioned at different distances around the potential earthquake fault being considered) are called 'target fractures'. As in the previous report, all target fractures were assumed to be perfectly planar and circular with a radius of 75 m. Compared to the previous study, the result catalogue is more complete. One additional deformation zone (i.e. potential earthquake fault) has been included (BFZ039), whereas one deformation zone that appeared to produce only insignificant target fracture disturbances (BFZ214) is omitted. For each of the three zones considered here (BFZ021, BFZ039, and BFZ100), four models, each with a different orientation of the target fractures surrounding the fault, are analysed. Three of these four sets were included in the previous report, however not as systematically as here where each of the four fracture orientations is tried in all fracture positions. As in the previous study, seismic moments and moment magnitudes are as high as reasonably possible, given the sizes and orientations of the zones, i.e., the earthquakes release the largest possible amount of strain energy. The strain energy release is restricted only by a low residual fault shear strength applied to suppress post-rupture fault oscillations. Moment magnitudes are: 5.8 (BFZ021), 3.9 (BFZ039) and 4.3 (BFZ100). For the BFZ100 model, the sensitivity of the results to variations in fracture shear strength is checked. The BFZ021 and BFZ100 models are analyzed for two additional in situ stress

  5. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    Science.gov (United States)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic M 7.0 earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.

  6. Historic Eastern Canadian earthquakes

    International Nuclear Information System (INIS)

    Asmis, G.J.K.; Atchinson, R.J.

    1981-01-01

    Nuclear power plants licensed in Canada have been designed to resist earthquakes; not all plants, however, have been explicitly designed to the same level of earthquake-induced forces. Understanding the nature of strong ground motion near the source of an earthquake is still very tentative. This paper reviews historical and scientific accounts of the three strongest earthquakes - St. Lawrence (1925), Temiskaming (1935), Cornwall (1944) - that have occurred in Canada in 'modern' times, field studies of near-field strong ground motion records and their resultant damage or non-damage to industrial facilities, and numerical modelling of earthquake sources and resultant wave propagation to produce accelerograms consistent with the above historical record and field studies. It is concluded that for future construction of NPPs, near-field strong motion must be explicitly considered in design.

  7. Empirical model development and validation with dynamic learning in the recurrent multilayer perception

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.F.

    1994-01-01

    A nonlinear multivariable empirical model is developed for a U-tube steam generator using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network, very effective in the input-output modeling of complex process systems. A dynamic gradient descent learning algorithm is used to train the recurrent multilayer perceptron, resulting in an order of magnitude improvement in convergence speed over static learning algorithms. In developing the U-tube steam generator empirical model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training and testing set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving-average response. Extensive model validation studies indicate that the empirical model can substantially generalize (extrapolate), though online learning becomes necessary for tracking transients significantly different from the ones included in the training set and for slowly varying U-tube steam generator dynamics. In view of the satisfactory modeling accuracy and the associated short development time, neural-network-based empirical models in some cases appear to provide a serious alternative to first-principles models. Caution, however, must be exercised because extensive on-line validation of these models is still warranted.
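
    A stripped-down version of the input-output identification task can be written with a single linear recurrent unit. The paper trains a recurrent multilayer perceptron with a dynamic gradient algorithm; here a coarse grid search substitutes for the optimizer purely to keep the sketch self-contained, and the "plant" is an invented first-order system.

```python
import random

# Identify an invented first-order plant y[t] = a*y[t-1] + b*u[t]
# with a one-unit linear recurrent model run in free simulation.
random.seed(1)
u = [random.uniform(-1, 1) for _ in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.9 * y[-1] + 0.5 * u[t])      # plant output to be matched

def sse(a, b):
    """Sum-of-squares error of the recurrent model against the plant."""
    yhat, err = 0.0, 0.0
    for t in range(1, 200):
        yhat = a * yhat + b * u[t]          # model's own state is fed back
        err += (y[t] - yhat) ** 2
    return err

grid = [i / 20 for i in range(21)]          # 0.00, 0.05, ..., 1.00
_, a_hat, b_hat = min((sse(a, b), a, b) for a in grid for b in grid)
print(a_hat, b_hat)  # → 0.9 0.5
```

    Because the model's own output is fed back, the error gradient depends on the whole trajectory; this is exactly what makes dynamic (rather than static) learning algorithms necessary for recurrent networks.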

  8. Holocene earthquakes of magnitude 7 during westward escape of the Olympic Mountains, Washington

    Science.gov (United States)

    Nelson, Alan R.; Personius, Stephen; Wells, Ray; Schermer, Elizabeth R.; Bradley, Lee-Ann; Buck, Jason; Reitman, Nadine G.

    2017-01-01

    The Lake Creek–Boundary Creek fault, previously mapped in Miocene bedrock as an oblique thrust on the north flank of the Olympic Mountains, poses a significant earthquake hazard. Mapping using 2015 light detection and ranging (lidar) confirms 2004 lidar mapping of postglacial scarps extending ≥14 km along a splay fault, the Sadie Creek fault, west of Lake Crescent. Scarp morphology suggests repeated earthquake ruptures along the eastern section of the Lake Creek–Boundary Creek fault and the Sadie Creek fault since ∼13 ka. Right‐lateral (∼11–28 m) and vertical (1–2 m) cumulative fault offsets suggest slip rates of ∼1–2 mm/yr. Stratigraphic and age‐model data from five trenches perpendicular to scarps at four sites on the eastern section of the fault show evidence of 3–5 surface‐rupturing earthquakes. Near‐vertical fault dips and upward‐branching fault patterns in trenches, abrupt changes in the thickness of stratigraphic units across faults, and variations in vertical displacement of successive stratigraphic units along fault traces also suggest a large lateral component of slip. Age models suggest two earthquakes date from 1.3±0.8 and 2.9±0.6 ka; evidence and ages for 2–3 earlier earthquakes are less certain. Assuming 3–5 postglacial earthquakes, lateral and vertical cumulative fault offsets yield average slip per earthquake of ∼4.6 m, a lateral‐to‐vertical slip ratio of ∼10:1, and a recurrence interval of 3.5±1.0 ka. Empirical relations yield moment magnitude estimates of M 7.2–7.5 (slip per earthquake) and 7.1–7.3 (56 km maximum rupture length). An apparent left‐lateral Miocene to right‐lateral Holocene slip reversal on the faults is probably related to overprinting of east‐directed, accretion‐dominated deformation in the eastern core of the Olympic
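
    The recurrence estimate in this abstract can be cross-checked directly: average slip per event divided by slip rate gives the interval. The slip rate of 1.3 mm/yr below is a value chosen from within the reported ~1–2 mm/yr range, not a number stated in the abstract.

```python
# Recurrence interval = slip per event / slip rate.
slip_per_event_m = 4.6        # average slip per earthquake from the abstract
slip_rate_mm_per_yr = 1.3     # chosen from within the reported ~1-2 mm/yr range
recurrence_yr = slip_per_event_m * 1000 / slip_rate_mm_per_yr
print(round(recurrence_yr))   # → 3538, i.e. ~3.5 ka, matching 3.5±1.0 ka
```

    The consistency of the slip-budget arithmetic with the trench-derived 3.5±1.0 ka interval is what makes the 3–5-earthquake interpretation internally coherent.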

  9. Risk of Recurrence in Operated Parasagittal Meningiomas: A Logistic Binary Regression Model.

    Science.gov (United States)

    Escribano Mesa, José Alberto; Alonso Morillejo, Enrique; Parrón Carreño, Tesifón; Huete Allut, Antonio; Narro Donate, José María; Méndez Román, Paddy; Contreras Jiménez, Ascensión; Pedrero García, Francisco; Masegosa González, José

    2018-02-01

    Parasagittal meningiomas arise from the arachnoid cells of the angle formed between the superior sagittal sinus (SSS) and the brain convexity. In this retrospective study, we focused on factors that predict early recurrence and recurrence times. We reviewed 125 patients with parasagittal meningiomas operated on from 1985 to 2014. We studied the following variables: age, sex, location, laterality, histology, surgeons, invasion of the SSS, Simpson removal grade, follow-up time, angiography, embolization, radiotherapy, recurrence and recurrence time, reoperation, neurologic deficit, degree of dependency, and patient status at the end of follow-up. Patients ranged in age from 26 to 81 years (mean 57.86 years; median 60 years). There were 44 men (35.2%) and 81 women (64.8%). There were 57 patients with neurologic deficits (45.2%). The most common presenting symptom was motor deficit. World Health Organization grade I tumors were identified in 104 patients (84.6%), and the majority were the meningothelial type. Recurrence was detected in 34 cases. Time to recurrence was 9 to 336 months (mean: 84.4 months; median: 79.5 months). Male sex was identified as an independent risk factor for recurrence, with relative risk 2.7 (95% confidence interval 1.21-6.15), P = 0.014. Kaplan-Meier curves for recurrence had statistically significant differences depending on sex, age, histologic type, and World Health Organization histologic grade. A binary logistic regression model was built using sex, tumor size, and histologic type; goodness of fit was confirmed with the Hosmer-Lemeshow test (P > 0.05). Male sex is an independent risk factor for recurrence that, together with other factors such as tumor size and histologic type, explains 74.5% of all cases in the binary regression model. Copyright © 2017 Elsevier Inc. All rights reserved.
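
    The binary logistic model used in studies of this kind can be illustrated on invented data. The cohort, learning rate, and resulting coefficients below are all toy choices, not the paper's fitted model.

```python
import math

# Toy logistic model of recurrence risk from sex (1 = male) and tumor size (cm).
# The cohort and every fitted coefficient here are invented for illustration.
data = [  # (male, size_cm, recurred)
    (1, 5.0, 1), (1, 4.0, 1), (1, 2.0, 0), (0, 4.5, 1),
    (0, 2.5, 0), (0, 1.5, 0), (0, 3.0, 0), (1, 3.5, 1),
]

w0 = w_male = w_size = 0.0
for _ in range(5000):                       # plain stochastic gradient descent
    for male, size, y in data:
        p = 1 / (1 + math.exp(-(w0 + w_male * male + w_size * size)))
        g = p - y                           # gradient of the log-loss
        w0 -= 0.05 * g
        w_male -= 0.05 * g * male
        w_size -= 0.05 * g * size

def risk(male, size):
    return 1 / (1 + math.exp(-(w0 + w_male * male + w_size * size)))

higher = risk(1, 4.5) > risk(0, 2.0)
print(higher)  # → True: male sex and larger tumors raise the predicted risk
```

    A real analysis would add the remaining covariates, report odds ratios with confidence intervals, and check calibration, e.g. with the Hosmer-Lemeshow test used in the study.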

  10. Source mechanism inversion and ground motion modeling of induced earthquakes in Kuwait - A Bayesian approach

    Science.gov (United States)

    Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.

    2016-12-01

    The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs, and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 small local earthquakes, some also recorded by the Incorporated Research Institutions for Seismology (IRIS) and widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations, close to the oil/gas fields in Kuwait. The triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that these local earthquakes most likely occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to damage local structures built without seismic design criteria.


  11. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes

    Science.gov (United States)

    Jibson, Randall W.; Jibson, Matthew W.

    2003-01-01

    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
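
    The rigid-block calculation at the heart of Newmark's method is compact enough to sketch directly: integrate relative velocity whenever ground acceleration exceeds the critical (yield) acceleration, and accumulate displacement until sliding stops. The half-sine pulse and yield acceleration below are invented inputs, not records from the report's strong-motion database.

```python
import math

dt = 0.01                 # time step, s
ac = 0.1 * 9.81           # critical (yield) acceleration, m/s^2 (ky = 0.1)
# Invented ground motion: a 1-s half-sine pulse with 0.3 g peak, then quiet.
accel = [0.3 * 9.81 * math.sin(math.pi * t) if t < 1.0 else 0.0
         for t in (i * dt for i in range(300))]

v = d = 0.0               # relative velocity and displacement of the block
for a in accel:
    if a > ac or v > 0:                   # sliding starts or continues
        v = max(0.0, v + (a - ac) * dt)   # block decelerates at ac once a < ac
        d += v * dt                       # accumulate permanent displacement
print(round(d, 3), "m")
```

    A rigorous analysis integrates a recorded acceleration-time history instead of a pulse, and decoupled analysis replaces the rigid block with a deformable one, but the exceed-integrate-accumulate structure is the same.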

  12. Guidelines for earthquake ground motion definition for the Eastern United States

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Aramayo, G.A.; Williams, R.T.

    1985-01-01

    Guidelines for the determination of earthquake ground-motion definition for the eastern United States are established in this paper. Both far-field and near-field guidelines are given. The guidelines were based on an extensive review of the current procedures for specifying ground motion in the United States. Both empirical and theoretical procedures were used in establishing the guidelines because of the low seismicity in the eastern United States. Only a few large-to-great (M > 7.5) earthquakes have occurred in this region, no evidence of tectonic surface rupture related to historic or Holocene earthquakes has been found, and no currently active plate boundaries of any kind are known in the region. Very little instrumented data have been gathered in the East. Theoretical procedures are proposed so that, in regions with almost no data, a reasonable level of seismic ground motion activity can be assumed. The guidelines are to be used to develop the Safe Shutdown Earthquake (SSE). A new procedure for establishing the Operating Basis Earthquake (OBE) is proposed, in particular for the eastern United States. The OBE would be developed using a probabilistic assessment of the geological conditions and the recurrence of seismic events at a site. These guidelines should be useful in the development of seismic design requirements for future reactors.
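
    A probabilistic recurrence assessment of the kind proposed for the OBE reduces, in its simplest Poisson form, to an exceedance probability. The plant life and recurrence interval below are example numbers only, not values from the guidelines.

```python
import math

def exceedance_probability(life_yr, recurrence_yr):
    """P(at least one event above the design level) for a Poisson process
    with the given mean recurrence interval."""
    return 1 - math.exp(-life_yr / recurrence_yr)

p = exceedance_probability(40, 1000)   # 40-yr plant life, 1000-yr recurrence
print(round(p, 3))  # → 0.039
```

    Choosing a target exceedance probability and inverting this relation yields the design recurrence interval, which is the probabilistic step the proposed OBE procedure formalizes.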

  13. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    Science.gov (United States)

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid post-event casualty estimation for humanitarian response. Both of these events resulted in surprisingly high death tolls, numbers of casualties and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, a further 11,000 people with serious or moderate injuries, and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the empirical damage and casualty data assembled in the Cambridge Earthquake Impacts Database (CEQID) and explores the data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
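The semi-empirical chain the abstract describes (class-specific damage rates, then class-specific fatality rates applied to those damage rates) can be illustrated with a toy calculation. The building classes and all rates below are invented for illustration and are not CEQID-fitted values:

```python
# Hypothetical building stock at one intensity level: occupants per class,
# assumed collapse (damage) rate, and assumed lethality given collapse.
building_stock = {
    "unreinforced_masonry": {"occupants": 50_000, "collapse_rate": 0.20, "lethality": 0.15},
    "reinforced_concrete":  {"occupants": 80_000, "collapse_rate": 0.05, "lethality": 0.10},
    "timber_frame":         {"occupants": 30_000, "collapse_rate": 0.02, "lethality": 0.02},
}

def estimate_fatalities(stock):
    """Fatalities = occupants x damage rate x lethality, summed over classes."""
    return sum(c["occupants"] * c["collapse_rate"] * c["lethality"]
               for c in stock.values())

total = estimate_fatalities(building_stock)   # 1500 + 400 + 12 = 1912
```

The point of the class breakdown is visible even in this toy: the masonry class, with fewer occupants than the concrete class, dominates the fatality estimate because both its damage rate and its lethality are higher.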

  14. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving

  15. Earthquake shaking hazard estimates and exposure changes in the conterminous United States

    Science.gov (United States)

    Jaiswal, Kishor S.; Petersen, Mark D.; Rukstales, Kenneth S.; Leith, William S.

    2015-01-01

    A large portion of the population of the United States lives in areas vulnerable to earthquake hazards. This investigation aims to quantify population and infrastructure exposure within the conterminous U.S. that are subjected to varying levels of earthquake ground motions by systematically analyzing the last four cycles of the U.S. Geological Survey's (USGS) National Seismic Hazard Models (published in 1996, 2002, 2008 and 2014). Using the 2013 LandScan data, we estimate the numbers of people who are exposed to potentially damaging ground motions (peak ground accelerations at or above 0.1g). At least 28 million (~9% of the total population) may experience 0.1g level of shaking at relatively frequent intervals (annual rate of 1 in 72 years or 50% probability of exceedance (PE) in 50 years), 57 million (~18% of the total population) may experience this level of shaking at moderately frequent intervals (annual rate of 1 in 475 years or 10% PE in 50 years), and 143 million (~46% of the total population) may experience such shaking at relatively infrequent intervals (annual rate of 1 in 2,475 years or 2% PE in 50 years). We also show that there is a significant number of critical infrastructure facilities located in high earthquake-hazard areas (Modified Mercalli Intensity ≥ VII with moderately frequent recurrence interval).
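The three hazard levels quoted above are linked by the standard Poisson relation between a probability of exceedance (PE) over an exposure window and an annual rate, rate = -ln(1 - PE)/T. A quick check reproduces the return periods in the abstract:

```python
import math

def return_period(pe, years):
    """Return period (years) for a given probability of exceedance over a
    time window, assuming Poisson (stationary) occurrence:
    annual rate = -ln(1 - PE) / T, return period = 1 / rate."""
    return years / -math.log(1.0 - pe)

# The three hazard levels quoted in the abstract:
rp_50pct = return_period(0.50, 50)   # ~72 years
rp_10pct = return_period(0.10, 50)   # ~475 years
rp_2pct  = return_period(0.02, 50)   # ~2475 years
```

Note that 10% PE in 50 years corresponds to ~475 years, not 500: the small difference comes from the logarithm rather than the naive T/PE approximation.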

  16. Application of the recurrent multilayer perceptron in modeling complex process dynamics.

    Science.gov (United States)

    Parlos, A G; Chong, K T; Atiya, A F

    1994-01-01

    A nonlinear dynamic model is developed for a process system, namely a heat exchanger, using the recurrent multilayer perceptron network as the underlying model structure. The perceptron is a dynamic neural network, which appears effective in the input-output modeling of complex process systems. Dynamic gradient descent learning is used to train the recurrent multilayer perceptron, resulting in an order-of-magnitude improvement in convergence speed over a static learning algorithm used to train the same network. In developing the empirical process model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training and testing set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving-average response of various testing sets. Extensive model validation studies with signals that are encountered in the operation of the process system modeled, that is, steps and ramps, indicate that the empirical model can substantially generalize to operational transients, including accurate prediction of instabilities not in the training set. However, the accuracy of the model beyond these operational transients has not been investigated. Furthermore, online learning is necessary during some transients and for tracking slowly varying process dynamics. Neural-network-based empirical models in some cases appear to provide a serious alternative to first-principles models.
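The structural idea of a recurrent perceptron, feeding the hidden state back as an extra input at the next time step so the network carries dynamic memory, can be sketched in a few lines. This is a generic Elman-style network with random, untrained weights, not the paper's trained heat-exchanger model:

```python
import numpy as np

rng = np.random.default_rng(0)

class RecurrentMLP:
    """Minimal Elman-style recurrent perceptron: the hidden state at each
    step depends on the current input and the previous hidden state."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W_in  = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.5, (n_out, n_hidden))

    def run(self, inputs):
        h = np.zeros(self.W_rec.shape[0])   # hidden state starts at rest
        outputs = []
        for x in inputs:
            h = np.tanh(self.W_in @ x + self.W_rec @ h)  # recurrence
            outputs.append(self.W_out @ h)
        return np.array(outputs)

net = RecurrentMLP(n_in=1, n_hidden=8, n_out=1)
ys = net.run(np.ones((20, 1)))   # constant input; output evolves dynamically
```

Training such a network requires gradients propagated through the recurrence (the "dynamic gradient descent" of the abstract, i.e. backpropagation through time), which is omitted here for brevity.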

  17. The bayesian probabilistic prediction of the next earthquake in the ometepec segment of the mexican subduction zone

    Science.gov (United States)

    Ferraes, Sergio G.

    1992-06-01

    A predictive equation to estimate the interoccurrence time (τ) of the next earthquake (M ≥ 6) in the Ometepec segment is presented, based on Bayes' theorem and the Gaussian process. Bayes' theorem is used to relate the Gaussian process to both a log-normal distribution of recurrence times (τ) and a log-normal distribution of magnitudes (M) (Nishenko and Buland, 1987; Lomnitz, 1964). We constructed two new random variables X = ln M and Y = ln τ with normal marginal densities, and based on the Gaussian process model we assume that their joint density is normal. Using this information, we determine the Bayesian conditional probability. Finally, a predictive equation is derived, based on the criterion of maximizing the Bayesian conditional probability. The model forecasts the next interoccurrence time, conditional on the magnitude of the last event. Realistic estimates of future damaging earthquakes are based on relocated historical earthquakes. However, at the present time there is a controversy between Nishenko-Singh and González-Ruíz-McNally concerning the rupturing process of the 1907 earthquake. We use our Bayesian analysis to examine and discuss this very important controversy. To clarify the full significance of the analysis, we present the results using two catalogues: (1) the Ometepec catalogue without the 1907 earthquake (González-Ruíz-McNally), and (2) the Ometepec catalogue including the 1907 earthquake (Nishenko-Singh). The comparison of the prediction errors reveals that for the Nishenko-Singh catalogue the errors are considerably smaller than the average error for the González-Ruíz-McNally catalogue of relocated events. Finally, using the Nishenko-Singh catalogue, which locates the 1907 event inside the Ometepec segment, we conclude that the next expected damaging earthquake (M ≥ 6.0) will occur approximately within the next time interval τ = 11.82 years from the last event (which occurred on July 2, 1984), or equivalently will
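Because X = ln M and Y = ln τ are assumed jointly normal, maximizing the conditional density of Y given X reduces to the usual bivariate-normal regression line, E[Y | X = x] = μ_Y + ρ (σ_Y/σ_X)(x − μ_X), with the predicted recurrence time recovered by exponentiating. The parameter values below are hypothetical catalogue statistics chosen for illustration, not the paper's fitted Ometepec values:

```python
import math

def predict_interevent_time(m_last, mu_lnM, sd_lnM, mu_lnT, sd_lnT, rho):
    """Mode/mean of the conditional normal density of Y = ln(tau) given
    X = ln(M): E[Y | X] = mu_Y + rho * (sd_Y / sd_X) * (x - mu_X).
    Returns the predicted interoccurrence time tau (years)."""
    x = math.log(m_last)
    ln_tau = mu_lnT + rho * (sd_lnT / sd_lnM) * (x - mu_lnM)
    return math.exp(ln_tau)

# Hypothetical statistics (NOT the paper's values): mean magnitude 6.5,
# mean recurrence 12 years, positive magnitude-time correlation.
tau = predict_interevent_time(m_last=7.0,
                              mu_lnM=math.log(6.5), sd_lnM=0.08,
                              mu_lnT=math.log(12.0), sd_lnT=0.40, rho=0.6)
```

With ρ > 0 the prediction encodes the physical intuition that a larger last event releases more strain and is followed by a longer quiet interval.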

  18. Implications of fault constitutive properties for earthquake prediction.

    Science.gov (United States)

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
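The rate- and state-dependent formulation referred to above is commonly written with the Dieterich "aging" evolution law for the state variable θ. The sketch below (illustrative laboratory-scale parameter values, not tied to any specific experiment in the paper) steps that law through a tenfold velocity jump and checks that friction relaxes, over a slip distance of order Dc, to the analytic steady state:

```python
import math

def simulate_velocity_step(mu0=0.6, a=0.010, b=0.015, Dc=1e-5,
                           v0=1e-6, v1=1e-5, dt=1e-3, steps=200_000):
    """Rate- and state-dependent friction with the Dieterich 'aging' law:
        mu        = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
        dtheta/dt = 1 - V*theta/Dc
    Start at steady state for v0, step the velocity to v1, and integrate
    theta forward (explicit Euler) until it re-equilibrates."""
    theta = Dc / v0                       # steady-state value at v0
    for _ in range(steps):
        theta += (1.0 - v1 * theta / Dc) * dt
    return mu0 + a * math.log(v1 / v0) + b * math.log(v0 * theta / Dc)

mu_final = simulate_velocity_step()
# Analytic steady state after the step: mu_ss = mu0 + (a - b) * ln(v1/v0)
mu_ss = 0.6 + (0.010 - 0.015) * math.log(10.0)
```

Because a < b here, the steady-state friction decreases with sliding velocity (velocity weakening), which is the regime in which nucleation and unstable slip are possible.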

  19. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis

  20. Synthetic seismicity for the San Andreas fault

    Directory of Open Access Journals (Sweden)

    S. N. Ward

    1994-06-01

    Because historical catalogs generally span only a few repetition intervals of major earthquakes, they do not provide much constraint on how regularly earthquakes recur. In order to obtain better recurrence statistics and long-term probability estimates for events M ≥ 6 on the San Andreas fault, we apply a seismicity model to this fault. The model is based on the concept of fault segmentation and the physics of static dislocations which allow for stress transfer between segments. Constraints are provided by geological and seismological observations of segment lengths, characteristic magnitudes and long-term slip rates. Segment parameters slightly modified from the Working Group on California Earthquake Probabilities allow us to reproduce observed seismicity over four orders of magnitude. The model yields quite irregular earthquake recurrence patterns. Only the largest events (M ≥ 7.5) are quasi-periodic; small events cluster. Both the average recurrence time and the aperiodicity are also a function of position along the fault. The model results are consistent with paleoseismic data for the San Andreas fault as well as a global set of historical and paleoseismic recurrence data. Thus irregular earthquake recurrence resulting from segment interaction is consistent with a large range of observations.

  1. Butterfly, Recurrence, and Predictability in Lorenz Models

    Science.gov (United States)

    Shen, B. W.

    2017-12-01

    Over the span of 50 years, the original three-dimensional Lorenz model (3DLM; Lorenz, 1963) and its high-dimensional versions (e.g., Shen 2014a and references therein) have been used for improving our understanding of the predictability of weather and climate with a focus on chaotic responses. Although the Lorenz studies focus on nonlinear processes and chaotic dynamics, people often apply a "linear" conceptual model to understand the nonlinear processes in the 3DLM. In this talk, we present examples to illustrate common misunderstandings regarding the butterfly effect and discuss the importance of the recurrence and boundedness of solutions in the 3DLM and high-dimensional LMs. The first example concerns the following folklore that has been widely used as an analogy of the butterfly effect: "For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail." However, in 2008, Prof. Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability, and that the verse implicitly suggests that subsequent small events will not reverse the outcome (Lorenz, 2008). Lorenz's comments suggest that the verse neither describes negative (nonlinear) feedback nor indicates recurrence, the latter of which is required for the appearance of a butterfly pattern. The second example illustrates that the divergence of two nearby trajectories should be bounded and recurrent, as shown in Figure 1. Furthermore, we will discuss how high-dimensional LMs were derived to illustrate (1) negative nonlinear feedback that stabilizes the system within the five- and seven-dimensional LMs (5D and 7D LMs; Shen 2014a; 2015a; 2016); (2) positive nonlinear feedback that destabilizes the system within the 6D and 8D LMs (Shen 2015b; 2017); and (3
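The boundedness point in the second example is easy to reproduce numerically: two 3DLM trajectories started 1e-8 apart separate rapidly (sensitivity to initial conditions), yet their separation saturates at roughly the attractor's diameter instead of growing without bound. A simple forward-Euler sketch with the classical parameter values (not the talk's figure):

```python
import math

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the 1963 three-variable Lorenz model."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Two trajectories differing by 1e-8 in x: they diverge, but the
# separation stays bounded because the attractor itself is bounded.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)
max_sep = 0.0
for _ in range(50_000):          # 50 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, math.dist(a, b))
```

If the separation grew exponentially forever, the 1e-8 perturbation would reach astronomically large values over 50 time units; instead it saturates at tens of units, the scale of the attractor, which is the bounded, recurrent behavior the abstract emphasizes.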

  2. Future Earth: Reducing Loss By Automating Response to Earthquake Shaking

    Science.gov (United States)

    Allen, R. M.

    2014-12-01

    Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet the same interconnected infrastructure also allows us to rapidly detect earthquakes as they begin and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Los Angelinos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts, but could also be used to collect data and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake detectors. As our development of the technology continues, we can anticipate ever-more automated response to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US, illustrate how smartphones can contribute to the system, and review applications of the information to reduce future losses.

  3. Modeling Belt-Servomechanism by Chebyshev Functional Recurrent Neuro-Fuzzy Network

    Science.gov (United States)

    Huang, Yuan-Ruey; Kang, Yuan; Chu, Ming-Hui; Chang, Yeon-Pun

    A novel Chebyshev functional recurrent neuro-fuzzy (CFRNF) network is developed from a combination of the Takagi-Sugeno-Kang (TSK) fuzzy model and the Chebyshev recurrent neural network (CRNN). The CFRNF network can emulate the nonlinear dynamics of a servomechanism system. The system nonlinearity is addressed by enhancing the input dimensions of the consequent parts in the fuzzy rules due to functional expansion of a Chebyshev polynomial. The back propagation algorithm is used to adjust the parameters of the antecedent membership functions as well as those of consequent functions. To verify the performance of the proposed CFRNF, the experiment of the belt servomechanism is presented in this paper. Both of identification methods of adaptive neural fuzzy inference system (ANFIS) and recurrent neural network (RNN) are also studied for modeling of the belt servomechanism. The analysis and comparison results indicate that CFRNF makes identification of complex nonlinear dynamic systems easier. It is verified that the accuracy and convergence of the CFRNF are superior to those of ANFIS and RNN by the identification results of a belt servomechanism.

  4. Field Investigations and a Tsunami Modeling for the 1766 Marmara Sea Earthquake, Turkey

    Science.gov (United States)

    Aykurt Vardar, H.; Altinok, Y.; Alpar, B.; Unlu, S.; Yalciner, A. C.

    2016-12-01

    Turkey is located in one of the world's most hazardous earthquake zones. The northern branch of the North Anatolian fault beneath the Sea of Marmara, where the population is most concentrated, has been the most active fault branch at least since the late Pliocene. The Sea of Marmara region has been affected by many large tsunamigenic earthquakes; the most destructive were the 549, 553, 557, 740, 989, 1332, 1343, 1509, 1766, 1894, 1912 and 1999 events. In order to understand and determine the tsunami potential and possible tsunami effects along the coasts of this inland sea, detailed documentary, geophysical and numerical modelling studies are needed on past earthquakes and their associated tsunamis, whose effects are presently unknown. On the northern coast of the Sea of Marmara region, the Kucukcekmece Lagoon has a high potential to trap and preserve tsunami deposits. Within the scope of this study, the lithological content, composition and sources of organic matter in the lagoon's bottom sediments were studied along a 4.63 m-long piston core recovered from the SE margin of the lagoon. The sedimentary composition and possible sources of the organic matter along the core were analysed, and the results were correlated with historical events on the basis of dating results. Finally, a tsunami scenario for the May 22nd, 1766 Marmara Sea earthquake was tested using NAMIDANCE, a widely used tsunami simulation model. The results show that candidate tsunami deposits at depths of 180-200 cm below the lagoon's bottom are related to the 1766 (May) earthquake. This work was supported by the Scientific Research Projects Coordination Unit of Istanbul University (Project 6384) and by the EU project TRANSFER for coring.

  5. Earthquake resistant design of structures

    International Nuclear Information System (INIS)

    Choi, Chang Geun; Kim, Gyu Seok; Lee, Dong Geun

    1990-02-01

    This book covers the occurrence of earthquakes and the analysis of earthquake damage; the equivalent static analysis method and its application; dynamic analysis methods, such as time-history analysis by mode superposition and by direct integration; and design-spectrum analysis for earthquake-resistant design in Korea, including the analysis model and vibration modes, calculation of base shear, calculation of story seismic loads, and combination of analysis results.

  6. Risk estimation of multiple recurrence and progression of non muscle invasive bladder carcinoma using new mathematical models.

    Science.gov (United States)

    Luján, S; Santamaría, C; Pontones, J L; Ruiz-Cerdá, J L; Trassierra, M; Vera-Donoso, C D; Solsona, E; Jiménez-Cruz, F

    2014-12-01

    To apply new mathematical models suited to the biological characteristics of Non Muscle Invasive Bladder Carcinoma (NMIBC) and enabling an accurate risk estimation of multiple recurrences and tumor progression. The classical Cox model is not valid for the assessment of this kind of event because the times between recurrences in the same patient may be strongly correlated. These new models for risk estimation of recurrence/progression lead to an individualized monitoring and treatment plan. 960 patients with primary NMIBC were enrolled. The median follow-up was 48.1 (3-160) months. The results obtained were validated in 240 patients from another center. Transurethral resection of the bladder (TURB) and random bladder biopsy were performed. Subsequently, adjuvant localized chemotherapy was performed. The variables analyzed were: number and tumor size, age, chemotherapy and histopathology. The endpoints were time to recurrence and time to progression. The Cox model and its extensions were used as a joint frailty model for multiple recurrence and progression. Model accuracy was calculated using Harrell's concordance index (c-index). 468 (48.8%) patients developed at least one tumor recurrence, and tumor progression was reported in 52 (5.4%) patients. The variables for multiple-recurrence risk are age, grade, number, size, treatment and the number of prior recurrences. All of these, together with age, stage and grade, are the variables for progression risk. The concordance index was 0.64 and 0.85 for multiple recurrence and progression, respectively. The high concordance reported, together with the validation in an external cohort, allows accurate multi-recurrence/progression risk estimation. As a consequence, it is possible to schedule an individualized follow-up and treatment plan in new and recurrent NMIBC cases. Copyright © 2014 AEU. Published by Elsevier Espana. All rights reserved.

  7. The costs and benefits of reconstruction options in Nepal using the CEDIM FDA modelled and empirical analysis following the 2015 earthquake

    Science.gov (United States)

    Daniell, James; Schaefer, Andreas; Wenzel, Friedemann; Khazai, Bijan; Girard, Trevor; Kunz-Plapp, Tina; Kunz, Michael; Muehr, Bernhard

    2016-04-01

    Over the days following the 2015 Nepal earthquake, rapid estimates of deaths, economic loss and reconstruction cost were undertaken by our research group in conjunction with the World Bank. This modelling relied on historic losses from other Nepal earthquakes as well as detailed socioeconomic data and earthquake loss information via CATDAT. The modelled results were very close to the final death toll and reconstruction cost for the 2015 earthquake of around 9000 deaths and a direct building loss of ca. 3 billion (a). The process undertaken to produce these loss estimates is described, along with the potential for analysing reconstruction costs from future Nepal earthquakes in rapid time post-event. The reconstruction cost and death toll model is then used as the base model for examining the effect of spending money on earthquake retrofitting of buildings versus complete reconstruction of buildings. This is undertaken for future events using empirical statistics from past events along with further analytical modelling. The effect of investment versus the timing of a future event is also explored. Preliminary low-cost options (b), along the lines of other country studies for retrofitting (ca. 100), are examined versus the option of different building typologies in Nepal, as well as investment in various sectors of construction. The effect of public versus private capital expenditure post-earthquake is also explored as part of this analysis, as well as spending on components other than earthquakes. a) http://www.scientificamerican.com/article/experts-calculate-new-loss-predictions-for-nepal-quake/ b) http://www.aees.org.au/wp-content/uploads/2015/06/23-Daniell.pdf

  8. Heterogeneous slip and rupture models of the San Andreas fault zone based upon three-dimensional earthquake tomography

    Energy Technology Data Exchange (ETDEWEB)

    Foxall, William [Univ. of California, Berkeley, CA (United States)

    1992-11-01

    Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the Central Creeping Section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models and by surface creep data, to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.

  9. A recurrent dynamic model for correspondence-based face recognition.

    Science.gov (United States)

    Wolfrum, Philipp; Wolff, Christian; Lücke, Jörg; von der Malsburg, Christoph

    2008-12-29

    Our aim here is to create a fully neural, functionally competitive, and correspondence-based model for invariant face recognition. By recurrently integrating information about feature similarities, spatial feature relations, and facial structure stored in memory, the system evaluates face identity ("what"-information) and face position ("where"-information) using explicit representations for both. The network consists of three functional layers of processing, (1) an input layer for image representation, (2) a middle layer for recurrent information integration, and (3) a gallery layer for memory storage. Each layer consists of cortical columns as functional building blocks that are modeled in accordance with recent experimental findings. In numerical simulations we apply the system to standard benchmark databases for face recognition. We find that recognition rates of our biologically inspired approach lie in the same range as recognition rates of recent and purely functionally motivated systems.

  10. Swedish earthquakes and acceleration probabilities

    International Nuclear Information System (INIS)

    Slunga, R.

    1979-03-01

    A method to assign probabilities to ground accelerations for Swedish sites is described. As hardly any near-field instrumental data are available, we are left with the problem of interpreting macroseismic data in terms of acceleration. By theoretical wave propagation computations, the relations between the seismic strength of the earthquake, focal depth, distance and ground acceleration are calculated. We found that most Swedish earthquakes are shallow; the 1904 earthquake 100 km south of Oslo is an exception and probably had a focal depth exceeding 25 km. For the nuclear power plant sites an annual probability of 10^-5 has been proposed as interesting. This probability gives ground accelerations in the range 5-20 % g for the sites. This acceleration is for a free bedrock site; for consistency, all acceleration results in this study are given for bedrock sites. When applying our model to the 1904 earthquake and assuming the focal zone to be in the lower crust, we get the epicentral acceleration of this earthquake to be 5-15 % g. The results above are based on an analysis of macroseismic data, as relevant instrumental data are lacking. However, the macroseismic acceleration model deduced in this study gives epicentral ground accelerations of small Swedish earthquakes in agreement with existing distant instrumental data. (author)

  11. Temporal stress changes caused by earthquakes: A review

    Science.gov (United States)

    Hardebeck, Jeanne L.; Okada, Tomomi

    2018-01-01

    Earthquakes can change the stress field in the Earth’s lithosphere as they relieve and redistribute stress. Earthquake-induced stress changes have been observed as temporal rotations of the principal stress axes following major earthquakes in a variety of tectonic settings. The stress changes due to the 2011 Mw9.0 Tohoku-Oki, Japan, earthquake were particularly well documented. Earthquake stress rotations can inform our understanding of earthquake physics, most notably addressing the long-standing problem of whether the Earth’s crust at plate boundaries is “strong” or “weak.” Many of the observed stress rotations, including that due to the Tohoku-Oki earthquake, indicate near-complete stress drop in the mainshock. This implies low background differential stress, on the order of earthquake stress drop, supporting the weak crust model. Earthquake stress rotations can also be used to address other important geophysical questions, such as the level of crustal stress heterogeneity and the mechanisms of postseismic stress reloading. The quantitative interpretation of stress rotations is evolving from those based on simple analytical methods to those based on more sophisticated numerical modeling that can capture the spatial-temporal complexity of the earthquake stress changes.

  12. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    Science.gov (United States)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests.
We show how the Reliability and
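
The counting idea behind such seismicity-based forecasts can be sketched in a few lines. This is an illustrative Weibull-in-natural-time calculation, not the operational NTW model; the shape parameter `beta` and mean count `n_mean` below are hypothetical.

```python
import math

def large_eq_probability(n_small, n_mean, beta=1.4):
    """Probability that the next large earthquake arrives before the
    natural-time count reaches n_small, via a Weibull CDF in natural
    time (the count of small events since the last large one).
    n_mean and beta are hypothetical, illustrative parameters."""
    if n_small <= 0:
        return 0.0
    return 1.0 - math.exp(-((n_small / n_mean) ** beta))
```

The probability rises monotonically with the accumulated small-event count, mirroring the "number of small earthquakes since the last large earthquake" idea described in the abstract.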

  13. Defeating Earthquakes

    Science.gov (United States)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth, and this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question: how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake.
GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  14. Physics of Earthquake Rupture Propagation

    Science.gov (United States)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  15. Hysteretic recurrent neural networks: a tool for modeling hysteretic materials and systems

    International Nuclear Information System (INIS)

    Veeramani, Arun S; Crews, John H; Buckner, Gregory D

    2009-01-01

    This paper introduces a novel recurrent neural network, the hysteretic recurrent neural network (HRNN), that is ideally suited to modeling hysteretic materials and systems. This network incorporates a hysteretic neuron consisting of conjoined sigmoid activation functions. Although similar hysteretic neurons have been explored previously, the HRNN is unique in its utilization of simple recurrence to 'self-select' relevant activation functions. Furthermore, training is facilitated by placing the network weights on the output side, allowing standard backpropagation-of-error training algorithms to be used. We present two- and three-phase versions of the HRNN for modeling hysteretic materials with distinct phases. These models are experimentally validated using data collected from shape memory alloys and ferromagnetic materials. The results demonstrate the HRNN's ability to accurately generalize hysteretic behavior with a relatively small number of neurons. Additional benefits lie in the network's ability to identify statistical information concerning the macroscopic material by analyzing the weights of the individual neurons.
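
A minimal sketch of such a hysteretic neuron, assuming two conjoined sigmoid branches selected by the direction of input change; the branch centers and gain below are illustrative values, not taken from the paper:

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class HystereticNeuron:
    """Toy hysteretic neuron: simple recurrence (memory of the previous
    input) selects between an ascending and a descending sigmoid branch."""

    def __init__(self, c_up=1.0, c_down=-1.0, gain=4.0):
        self.c_up, self.c_down, self.gain = c_up, c_down, gain
        self.prev_x = None
        self.center = c_up  # start on the ascending branch

    def step(self, x):
        # The direction of input change picks the active branch.
        if self.prev_x is not None:
            if x > self.prev_x:
                self.center = self.c_up
            elif x < self.prev_x:
                self.center = self.c_down
        self.prev_x = x
        return _sigmoid(self.gain * (x - self.center))
```

Sweeping the input up and then back down traces two different output branches at the same input value, i.e. a hysteresis loop, which is the behavior the HRNN exploits.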

  16. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    Science.gov (United States)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the 1st earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently being evaluated under the official CSEP suite of tests for forecast performance. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. In the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance for magnitude forecasting. On the other hand, the observed spatial distribution is hardly consistent with most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing center is also improving the evaluation system for the 1-day class so that forecasting and testing can be completed within one day. The special issue of 1st part titled Earthquake Forecast

  17. A prospective earthquake forecast experiment in the western Pacific

    Science.gov (United States)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
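
The likelihood-based comparison of gridded rate forecasts can be illustrated with a joint Poisson log-likelihood over cells. This is a sketch of the general idea only, not the specific CSEP test statistics used in the paper:

```python
import math

def poisson_log_likelihood(rates, counts):
    """Joint log-likelihood of observed per-cell earthquake counts under
    independent Poisson distributions with the given forecast rates.
    Rates must be strictly positive."""
    ll = 0.0
    for lam, n in zip(rates, counts):
        # log P(n | lam) = -lam + n*log(lam) - log(n!)
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll
```

A forecast whose rates track the observed counts scores a higher log-likelihood than one that concentrates rate in the wrong cells, which is the basis for comparative tests of competing models.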

  18. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    Science.gov (United States)

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  19. Introduction to thematic collection "Historical and geological studies of earthquakes"

    Science.gov (United States)

    Satake, Kenji; Wang, Jian; Hammerl, Christa; Malik, Javed N.

    2017-12-01

    This thematic collection contains eight papers mostly presented at the 2016 AOGS meeting in Beijing. Four papers describe historical earthquake studies in Europe, Japan, and China; one paper uses modern instrumental data to examine the effect of giant earthquakes on the seismicity rate; and three papers describe paleoseismological studies using tsunami deposits in Japan, marine terraces in the Philippines, and active faults in the Himalayas. Hammerl (Geosci Lett 4:7, 2017) introduced historical seismological studies in Austria, starting from the methodology, which is state of the art in most European countries, followed by a case study of the earthquake of July 17, 1670 in Tyrol. Albini and Rovida (Geosci Lett 3:30, 2016) examined 114 historical records for the earthquake of April 6, 1667 on the east coast of the Adriatic Sea, compiled 37 Macroseismic Data Points, and estimated the epicenter and size of the earthquake. Matsu'ura (Geosci Lett 4:3, 2017) summarized historical earthquake studies in Japan, which have resulted in about 8700 Intensity Data Points, assigned epicenters for 214 earthquakes between AD 599 and 1872, and estimated focal depths and magnitudes for 134 events. Wang et al. (Geosci Lett 4:4, 2017) introduced historical seismology in China, where historical earthquake archives include about 15,000 sources and parametric catalogs include about 1000 historical earthquakes between 2300 BC and AD 1911. Ishibe et al. (Geosci Lett 4:5, 2017) tested the Coulomb stress triggering hypothesis for three giant (M 9) earthquakes that occurred in recent years, and found that at least the 2004 Sumatra-Andaman and 2011 Tohoku earthquakes caused seismicity rate changes. Ishimura (2017) re-estimated the ages of 11 tsunami deposits of the last 4000 years along the Sanriku coast of northern Japan and found that the average recurrence interval of those tsunamis is 350-390 years. Ramos et al.
(2017) studied 1000-year-old marine terraces on the west coast of Luzon Island, Philippines

  20. Designing an Earthquake-Resistant Building

    Science.gov (United States)

    English, Lyn D.; King, Donna T.

    2016-01-01

    How do cross-bracing, geometry, and base isolation help buildings withstand earthquakes? These important structural design features involve fundamental geometry that elementary school students can readily model and understand. The problem activity, Designing an Earthquake-Resistant Building, was undertaken by several classes of sixth-grade…

  1. Mechanisms driving local breast cancer recurrence in a model of breast-conserving surgery.

    LENUS (Irish Health Repository)

    Smith, Myles J

    2012-02-03

    OBJECTIVE: We aimed to identify mechanisms driving local recurrence in a model of breast-conserving surgery (BCS) for breast cancer. BACKGROUND: Breast cancer recurrence after BCS remains a clinically significant, but poorly understood problem. We have previously reported that recurrent colorectal tumours demonstrate altered growth dynamics, increased metastatic burden and resistance to apoptosis, mediated by upregulation of phosphoinositide-3-kinase/Akt (PI3K/Akt). We investigated whether similar characteristics were evident in a model of locally recurrent breast cancer. METHODS: Tumours were generated by orthotopic inoculation of 4T1 cells in two groups of female Balb/c mice and cytoreductive surgery performed when mean tumour size was above 150 mm(3). Local recurrence was observed and gene expression was examined using Affymetrix GeneChips in primary and recurrent tumours. Differential expression was confirmed with quantitative real-time polymerase chain reaction (qRT-PCR). Phosphorylation of Akt was assessed using Western immunoblotting. An ex vivo heat shock protein (HSP)-loaded dendritic cell vaccine was administered in the perioperative period. RESULTS: We observed a significant difference in the recurrent 4T1 tumour volume and growth rate (p < 0.05). Gene expression studies suggested roles for the PI3K/Akt system and local immunosuppression driving the altered growth kinetics. We demonstrated that perioperative vaccination with an ex vivo HSP-loaded dendritic cell vaccine abrogated recurrent tumour growth in vivo (p = 0.003 at day 15). CONCLUSION: Investigating therapies which target tumour survival pathways such as PI3K/Akt and boost immune surveillance in the perioperative period may be useful adjuncts to contemporary breast cancer treatment.

  2. Guidelines for earthquake ground motion definition for the eastern United States

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Aramayo, G.A.; Williams, R.T.

    1985-01-01

    Guidelines for the determination of earthquake ground-motion definition for the eastern United States are established in this paper. Both far-field and near-field guidelines are given. The guidelines were based on an extensive review of the current procedures for specifying ground motion in the United States. Both empirical and theoretical procedures were used in establishing the guidelines because of the low seismicity in the eastern United States. Only a few large-to-great (M > 7.5) earthquakes have occurred in this region, no evidence of tectonic surface rupture related to historic or Holocene earthquakes has been found, and no currently active plate boundaries of any kind are known in this region. Very little instrumented data has been gathered in the East. Theoretical procedures are proposed so that in regions with almost no data a reasonable level of seismic ground motion activity can be assumed. The guidelines are to be used to develop the Safe Shutdown Earthquake (SSE). A new procedure for establishing the Operating Basis Earthquake (OBE) is proposed, in particular for the eastern United States. The OBE would be developed using a probabilistic assessment of the geological conditions and the recurrence of seismic events at a site. These guidelines should be useful in the development of seismic design requirements for future reactors. 17 refs., 2 figs., 1 tab

  3. Building the Southern California Earthquake Center

    Science.gov (United States)

    Jordan, T. H.; Henyey, T.; McRaney, J. K.

    2004-12-01

    Kei Aki was the founding director of the Southern California Earthquake Center (SCEC), a multi-institutional collaboration formed in 1991 as a Science and Technology Center (STC) under the National Science Foundation (NSF) and the U. S. Geological Survey (USGS). Aki and his colleagues articulated a system-level vision for the Center: investigations by disciplinary working groups would be woven together into a "Master Model" for Southern California. In this presentation, we will outline how the Master-Model concept has evolved and how SCEC's structure has adapted to meet scientific challenges of system-level earthquake science. In its first decade, SCEC conducted two regional imaging experiments (LARSE I & II); published the "Phase-N" reports on (1) the Landers earthquake, (2) a new earthquake rupture forecast for Southern California, and (3) new models for seismic attenuation and site effects; it developed two prototype "Community Models" (the Crustal Motion Map and Community Velocity Model) and, perhaps most important, sustained a long-term, multi-institutional, interdisciplinary collaboration. The latter fostered pioneering numerical simulations of earthquake ruptures, fault interactions, and wave propagation. These accomplishments provided the impetus for a successful proposal in 2000 to reestablish SCEC as a "stand alone" center under NSF/USGS auspices. SCEC remains consistent with the founders' vision: it continues to advance seismic hazard analysis through a system-level synthesis that is based on community models and an ever expanding array of information technology. SCEC now represents a fully articulated "collaboratory" for earthquake science, and many of its features are extensible to other active-fault systems and other system-level collaborations. We will discuss the implications of the SCEC experience for EarthScope, the USGS's program in seismic hazard analysis, NSF's nascent Cyberinfrastructure Initiative, and other large collaboratory programs.

  4. Mothers Coping With Bereavement in the 2008 China Earthquake: A Dual Process Model Analysis.

    Science.gov (United States)

    Chen, Lin; Fu, Fang; Sha, Wei; Chan, Cecilia L W; Chow, Amy Y M

    2017-01-01

    The purpose of this study is to explore the grief experiences of mothers after they lost their children in the 2008 China earthquake. Informed by the Dual Process Model, this study conducted in-depth interviews to explore how six bereaved mothers coped with such grief over a 2-year period. Right after the earthquake, these mothers suffered from intense grief. They primarily coped with loss-oriented stressors. As time passed, these mothers began to focus on restoration-oriented stressors to face changes in life. This coping trajectory was a dynamic and integral process, in which bereaved mothers oscillated between loss- and restoration-oriented stressors. This study offers insight into extending the existing empirical evidence of the Dual Process Model.

  5. Nowcasting Earthquakes and Tsunamis

    Science.gov (United States)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk
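
The nowcast described above reduces, in essence, to an empirical-CDF ranking: count the small earthquakes since the last large one in the local region, then ask where that count falls in the distribution of inter-large-event counts gathered from the surrounding large region. A minimal sketch of that ranking step (the data used in any example would be made up):

```python
def nowcast_score(current_count, interevent_counts):
    """Earthquake potential score: the fraction of historical
    inter-large-event small-earthquake counts (from the large region)
    that do not exceed the current count observed in the small region."""
    if not interevent_counts:
        raise ValueError("need at least one historical interval")
    below = sum(1 for c in interevent_counts if c <= current_count)
    return below / len(interevent_counts)
```

A score near 1 means the local region has accumulated more small-event activity than almost all historical intervals between large events, i.e. it is late in the cycle; cities can then be ranked by this score, as the abstract describes.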

  6. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    Science.gov (United States)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    Usable and realistic ground motion maps for urban areas are generated either from the assumption of a "reference earthquake", or directly, showing values of macroseismic intensity generated by a damaging, real earthquake. In this study, applying a deterministic approach, an earthquake scenario in macroseismic intensity (a "model" earthquake scenario) is generated for the city of Sofia. The deterministic "model" intensity scenario, based on the assumption of a "reference earthquake", is compared with a scenario based on observed macroseismic effects caused by the damaging 2012 earthquake (MW5.6). The difference between observed (Io) and predicted (Ip) intensity values is analyzed.

  7. Probabilistic approach to earthquake prediction.

    Directory of Open Access Journals (Sweden)

    G. D'Addezio

    2002-06-01

    The evaluation of any earthquake forecast hypothesis requires the application of rigorous statistical methods. It implies a univocal definition of the model characterising the anomaly or precursor concerned, so that it can be objectively recognised in any circumstance and by any observer. A valid forecast hypothesis is expected to maximise successes and minimise false alarms. The probability gain associated with a precursor is also a popular way to estimate the quality of predictions based on that precursor. Some scientists make use of a statistical approach based on the computation of the likelihood of an observed realisation of seismic events, and on the comparison of the likelihoods obtained under different hypotheses. This method can be extended to algorithms that allow the computation of the density distribution of the conditional probability of earthquake occurrence in space, time and magnitude. Whatever method is chosen for building up a new hypothesis, the final assessment of its validity should be carried out by a test on a new and independent set of observations. The implementation of this test could, however, be problematic for seismicity characterised by long-term recurrence intervals. Even using the historical record, which may span time windows varying from a few centuries to a few millennia, we have a low probability of catching more than one or two events on the same fault. By extending the record of past earthquakes back in time up to several millennia, paleoseismology represents a great opportunity to study how earthquakes recur through time and thus provides innovative contributions to time-dependent seismic hazard assessment. Sets of paleoseismologically dated earthquakes have been established for some faults in the Mediterranean area: the Irpinia fault in Southern Italy, the Fucino fault in Central Italy, the El Asnam fault in Algeria and the Skinos fault in Central Greece. By using the age of the
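
The probability gain mentioned above is commonly quantified as the event hit rate divided by the fraction of the time (or space-time) volume occupied by alarms; a sketch of that ratio, with any numbers in the usage being made up:

```python
def probability_gain(hit_rate, alarm_fraction):
    """Probability gain of a precursor-based alarm strategy: the fraction
    of target events that fall inside alarms, divided by the fraction of
    time (or space-time) covered by the alarms. G = 1 means no skill."""
    if not (0.0 < alarm_fraction <= 1.0) or not (0.0 <= hit_rate <= 1.0):
        raise ValueError("rates must be fractions in [0, 1]")
    return hit_rate / alarm_fraction
```

For example, predicting 80% of events while keeping alarms active only 20% of the time yields a gain of 4: the conditional probability of an event inside an alarm is four times the unconditional rate.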

  8. Does paleoseismology forecast the historic rates of large earthquakes on the San Andreas fault system?

    Science.gov (United States)

    Biasi, Glenn; Scharer, Katherine M.; Weldon, Ray; Dawson, Timothy E.

    2016-01-01

    The 98-year open interval since the most recent ground-rupturing earthquake in the greater San Andreas boundary fault system would not be predicted by the quasi-periodic recurrence statistics from paleoseismic data. We examine whether the current hiatus could be explained by uncertainties in earthquake dating. Using seven independent paleoseismic records, 100-year intervals may have occurred circa 1150, 1400, and 1700 AD, but they occur in a third or less of sample records drawn at random. A second method, sampling from dates conditioned on the existence of a gap of varying length, suggests century-long gaps occur 3-10% of the time. A combined record with more sites would lead to lower probabilities. Systematic data over-interpretation is considered an unlikely explanation. Instead, some form of non-stationary behaviour seems required, perhaps through long-range fault interaction. Earthquake occurrence since 1000 AD is not inconsistent with the long-term cyclicity suggested by long runs of earthquake simulators.
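
The chance of a century-long open interval under quasi-periodic recurrence can be estimated with a simple Monte Carlo: simulate a long sequence of recurrence intervals and measure the fraction of time during which the elapsed interval since the last event exceeds the gap. This is a sketch using a truncated-normal interval model with illustrative parameters, not the paper's conditioned-sampling method:

```python
import random

def open_gap_time_fraction(mean_ri, cov, gap_years, n_events=100000, seed=7):
    """Fraction of time during which the open interval since the last
    event is at least gap_years, for quasi-periodic recurrence with
    normally distributed intervals (truncated at 1 year).
    mean_ri, cov and gap_years are illustrative parameters."""
    rng = random.Random(seed)
    total_time = 0.0
    gap_time = 0.0
    for _ in range(n_events):
        ri = max(1.0, rng.gauss(mean_ri, cov * mean_ri))
        total_time += ri
        # Within an interval of length ri, the elapsed time exceeds
        # gap_years only during its final max(0, ri - gap_years) years.
        gap_time += max(0.0, ri - gap_years)
    return gap_time / total_time
```

For several independent fault records, the probability that all are simultaneously in such a gap is roughly the product of the per-record fractions, which is why a combined record with more sites gives lower probabilities.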

  9. Numerical experiment on tsunami deposit distribution process by using tsunami sediment transport model in historical tsunami event of megathrust Nankai trough earthquake

    Science.gov (United States)

    Imai, K.; Sugawara, D.; Takahashi, T.

    2017-12-01

    The large flows caused by tsunamis transport sediment from beaches and form tsunami deposits on land and in coastal lakes. Tsunami deposits are found especially well preserved, in an undisturbed state, in coastal lakes. In a field survey of coastal lakes facing the Nankai Trough, Okamura & Matsuoka (2012) identified tsunami deposits attributable to the past eight Nankai Trough megathrust earthquakes. The environment of coastal lakes is stably calm and suitable for the preservation of tsunami deposits compared to other topographical settings such as plains. Therefore, the recurrence interval of megathrust earthquakes and tsunamis may be resolved at high resolution. In addition, it has been pointed out that small events that cannot be detected on plains could be finely separated (Sawai, 2012). Various aspects of past tsunamis are expected to be elucidated by considering the topographical conditions of coastal lakes and the relationship between the erosion-and-sedimentation process of the lake bottom and the external force of the tsunami. In this research, a numerical examination based on a tsunami sediment transport model (Takahashi et al., 1999) was carried out for the Ryujin-ike pond of Ohita, Japan, where tsunami deposits have been identified, and a deposit migration analysis was conducted for the tsunami deposit distribution process of historical Nankai Trough earthquakes. Furthermore, tsunami source conditions can be investigated by comparing the observed data with the computed tsunami deposit distribution. It is difficult to clarify details of the tsunami source from indistinct information on paleogeographical conditions. However, this result shows that the scale of the tsunami source can be constrained by combining the tsunami deposit distribution in lakes with computational data.

  10. Earthquake occurrence as stochastic event: (1) theoretical models

    Energy Technology Data Exchange (ETDEWEB)

    Basili, A.; Basili, M.; Cagnetti, V.; Colombino, A.; Jorio, V.M.; Mosiello, R.; Norelli, F.; Pacilio, N.; Polinari, D.

    1977-01-01

    The present article aims to link the stochastic approach to the description of earthquake processes suggested by Lomnitz with the experimental evidence obtained by Schenkova that the time distribution of some earthquake occurrences is better described by a Negative Binomial distribution than by a Poisson distribution. The final purpose of the stochastic approach might be a new way of labeling a given area in terms of seismic risk.
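
A quick way to see why a Negative Binomial description can beat a Poisson one is the variance-to-mean ratio of per-window counts: for a Poisson process it is near 1, while overdispersed (clustered) seismicity gives values above 1. A minimal sketch of that diagnostic:

```python
def dispersion_index(counts):
    """Variance-to-mean ratio of earthquake counts in equal time windows.
    Values near 1 are consistent with a Poisson process; values well
    above 1 indicate overdispersion, which a Negative Binomial
    distribution can accommodate."""
    n = len(counts)
    mean = sum(counts) / n
    if mean == 0:
        raise ValueError("mean count is zero")
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean
```

Clustered counts such as a few windows containing most of the events produce a dispersion index well above 1, favoring the Negative Binomial description cited in the abstract.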

  11. The numerical simulation study of the dynamic evolutionary processes in an earthquake cycle on the Longmen Shan Fault

    Science.gov (United States)

    Tao, Wei; Shen, Zheng-Kang; Zhang, Yong

    2016-04-01

    The Longmen Shan, located at the junction of the eastern margin of the Tibet plateau and the Sichuan basin, is a typical area for studying the deformation pattern of the Tibet plateau. Following the 2008 Mw 7.9 Wenchuan earthquake (WE) rupturing the Longmen Shan Fault (LSF), a great deal of observations and studies on geology, geophysics, and geodesy have been carried out for this region, with results published successively in recent years. Using a 2D viscoelastic finite element model and introducing the rate-state friction law on the fault, this work models the earthquake recurrence process and the dynamic evolutionary processes of an earthquake cycle over 10 thousand years. By analyzing the displacement, velocity, stress, strain energy and strain energy increment fields, this work reaches the following conclusions: (1) The maximum coseismic displacement on the fault is at the surface, and the damage on the hanging wall is much more serious than that on the foot wall of the fault. If the detachment layer is absent, the coseismic displacement would be smaller and the relative displacement between the hanging wall and foot wall would also be smaller. (2) In every stage of the earthquake cycle, the velocities (especially the vertical velocities) on the hanging wall of the fault are larger than those on the foot wall, and the values and the distribution patterns of the velocity fields are similar. In the locking stage prior to the earthquake, the velocities in the crust and the relative velocities between the hanging wall and foot wall decrease. For the model without the detachment layer, the velocities in the crust in the post-seismic stage are much larger than those in other stages. (3) The maximum principal stress and the maximum shear stress concentrate around the joint of the fault and the detachment layer; therefore, an earthquake would nucleate and start there. (4) The strain density distribution patterns in the stages of the earthquake cycle are similar.
There are two
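The rate-state friction law introduced on the fault in this thesis has a standard form that can be illustrated with a minimal sketch; the parameter values below are illustrative, not taken from the thesis:

```python
import math

def rsf_mu(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-4):
    """Dieterich-style rate-and-state friction coefficient.

    v     : slip rate (m/s)
    theta : state variable (s)
    """
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def theta_rate(v, theta, dc=1e-4):
    """Aging-law state evolution: dtheta/dt = 1 - v*theta/dc."""
    return 1.0 - v * theta / dc

# At steady state theta_ss = dc/v, so mu_ss = mu0 + (a - b)*ln(v/v0);
# with a < b the fault is velocity weakening and can host stick-slip cycles.
```

With a < b, steady-state friction decreases with sliding velocity, which is the ingredient that allows spontaneous earthquake recurrence in such models.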

  12. Observations and models of Co- and Post-Seismic Deformation Due to the 2015 Mw 7.8 Gorkha (Nepal) Earthquake

    Science.gov (United States)

    Wang, K.; Fialko, Y. A.

    2016-12-01

    The 2015 Mw 7.8 Gorkha (Nepal) earthquake occurred along the central Himalayan arc, a convergent boundary between the India and Eurasia plates. We use space geodetic data to investigate co- and post-seismic deformation due to the Gorkha earthquake. Because the epicentral area of the earthquake is characterized by strong variations in surface relief and material properties, we developed finite element models that explicitly account for topography and 3-D elastic structure. Compared with slip models obtained using homogeneous elastic half-space models, the model including elastic heterogeneity and topography exhibits greater (up to 10%) slip amplitude. GPS observations spanning more than 1 year following the earthquake show overall southward movement and uplift after the Gorkha earthquake, qualitatively similar to the coseismic deformation pattern. Kinematic inversions of GPS data and forward modeling of stress-driven creep indicate that the observed post-seismic transient is consistent with afterslip on a down-dip extension of the seismic rupture. The Main Himalayan Thrust (MHT) has negligible creep updip of the 2015 rupture, underscoring a continuing seismic hazard. A poro-elastic rebound may contribute to the observed uplift and southward motion, but the predicted surface displacements are small (on the order of 1 cm or less). We also tested a wide range of visco-elastic relaxation models, including 1-D and 3-D variations in the viscosity structure. All tested visco-elastic models predict the opposite signs of horizontal and vertical displacements compared to those observed. Available surface deformation data allow one to rule out a model of a low-viscosity channel beneath the Tibetan Plateau invoked to explain variations in surface relief at the plateau margins.

  13. Phase response curves for models of earthquake fault dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Franović, Igor, E-mail: franovic@ipb.ac.rs [Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Kostić, Srdjan [Institute for the Development of Water Resources “Jaroslav Černi,” Jaroslava Černog 80, 11226 Belgrade (Serbia); Perc, Matjaž [Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, SI-2000 Maribor (Slovenia); CAMTP—Center for Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor (Slovenia); Klinshov, Vladimir [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); Nekorkin, Vladimir [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); University of Nizhny Novgorod, 23 Prospekt Gagarina, 603950 Nizhny Novgorod (Russian Federation); Kurths, Jürgen [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)

    2016-06-15

    We systematically study the effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the case of a simple mono-block fault as well as paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative of the stick-slip behavior typical of earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single pulse and of two successive pulse perturbations, further demonstrating how the profile of the phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.
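A minimal single-block stick-slip slider in the spirit of the Burridge-Knopoff framework can be sketched as follows. This simplified version uses static/dynamic friction rather than the full rate-dependent law of the paper, and all parameters are illustrative:

```python
def bk_single_block(t_end=200.0, dt=1e-3, k=1.0, v_plate=0.01, f_s=1.0, f_d=0.6):
    """One spring-block slider driven by a slowly moving plate (unit mass).

    Returns the times at which the block re-sticks (the "events").
    Slip per event is 2*(f_s - f_d)/k, so the recurrence interval is
    roughly 2*(f_s - f_d)/(k*v_plate): quasi-periodic relaxation oscillations.
    """
    x = v = 0.0          # block position and velocity
    load = 0.0           # loader-plate displacement
    sticking = True
    events = []
    t = 0.0
    while t < t_end:
        load += v_plate * dt             # steady plate loading
        force = k * (load - x)           # spring force on the block
        if sticking:
            if force > f_s:              # static friction exceeded: slip starts
                sticking = False
        else:
            v += (force - f_d) * dt      # dynamic friction resists motion
            x += v * dt
            if v <= 0.0:                 # block arrests and re-sticks
                v = 0.0
                sticking = True
                events.append(t)
        t += dt
    return events
```

The long loading phase punctuated by brief slip episodes is the relaxation-oscillation behavior whose phase response the paper characterizes.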

  14. Phase response curves for models of earthquake fault dynamics

    International Nuclear Information System (INIS)

    Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen

    2016-01-01

    We systematically study the effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the case of a simple mono-block fault as well as paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative of the stick-slip behavior typical of earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single pulse and of two successive pulse perturbations, further demonstrating how the profile of the phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.

  15. Earthquake simulations with time-dependent nucleation and long-range interactions

    Directory of Open Access Journals (Sweden)

    J. H. Dieterich

    1995-01-01

    A model for rapid simulation of earthquake sequences is introduced which incorporates long-range elastic interactions among fault elements and time-dependent earthquake nucleation inferred from experimentally derived rate- and state-dependent fault constitutive properties. The model consists of a planar two-dimensional fault surface which is periodic in both the x- and y-directions. Elastic interactions among fault elements are represented by an array of elastic dislocations. Approximate solutions for earthquake nucleation and dynamics of earthquake slip are introduced which permit computations to proceed in steps that are determined by the transitions from one sliding state to the next. The transition-driven time stepping and avoidance of systems of simultaneous equations permit rapid simulation of large sequences of earthquake events on computers of modest capacity, while preserving characteristics of the nucleation and rupture propagation processes evident in more detailed models. Earthquakes simulated with this model reproduce many of the observed spatial and temporal characteristics of clustering phenomena, including foreshock and aftershock sequences. Clustering arises because the time dependence of the nucleation process is highly sensitive to stress perturbations caused by nearby earthquakes. The rate of earthquake activity following a prior earthquake decays according to Omori's aftershock decay law and falls off with distance.
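The Omori aftershock decay law invoked above has a simple closed form that can be sketched directly; K, c, and p below are illustrative values, not fitted to any catalog:

```python
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate n(t) = K / (t + c)**p (t in days)."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=100.0, c=0.1, p=1.1, n=10000):
    """Expected number of aftershocks in [t1, t2] (midpoint-rule integral)."""
    dt = (t2 - t1) / n
    return sum(omori_rate(t1 + (i + 0.5) * dt, K, c, p) for i in range(n)) * dt
```

For p > 1 the total aftershock count converges as t2 grows, which is why most aftershock productivity is concentrated in the first days after the main shock.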

  16. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro; Mai, Paul Martin; Yasuda, Tomohiro; Mori, Nobuhito

    2014-01-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.
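A 1-D spectral sketch of stochastic slip synthesis, loosely in the spirit of random-field slip models: a correlated power spectrum is given random phases and inverted to real space. This is not the modified Mai-Beroza implementation used in the study; the Hurst exponent, correlation length, and positivity clipping are illustrative assumptions:

```python
import numpy as np

def synth_slip_1d(n=256, dx=2.0, corr_len=20.0, mean_slip=5.0, hurst=0.75, seed=0):
    """Draw one realization of a spatially correlated 1-D slip profile.

    A von Karman-like power spectrum with random phases is transformed to
    real space, normalized, and clipped so that slip stays non-negative.
    """
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)            # wavenumbers
    psd = (1.0 + (k * corr_len) ** 2) ** -(hurst + 0.5)   # correlated spectrum
    phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
    spec = np.sqrt(psd) * np.exp(1j * phase)
    spec[0] = 0.0                                         # zero-mean heterogeneity
    field = np.fft.irfft(spec, n=n)
    field /= field.std()                                  # unit-variance fluctuation
    return mean_slip * np.maximum(0.0, 1.0 + 0.5 * field)
```

Drawing many such realizations (and perturbing geometry parameters such as strike and dip) is the essence of propagating source uncertainty into tsunami simulations.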

  17. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro

    2014-09-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.

  18. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.

    2013-12-24

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving the elastodynamic equations while imposing the slip velocity of a kinematic source model as a boundary condition on the fault plane. This is achieved using a 3-D finite difference method in which the rupture kinematics are modelled with the staggered-grid-split-node fault representation method of Dalguer & Day. Dynamic parameters are then estimated from the calculated stress-slip curves and averaged over the fault plane. Our results indicate that fracture energy, static, dynamic and apparent stress drops tend to increase with magnitude. The epistemic uncertainty due to uncertainties in kinematic inversions remains small (ϕ ∼ 0.1 in log10 units), showing that kinematic source models provide robust information to analyse the distribution of average dynamic source parameters. The proposed scaling relations may be useful to constrain friction law parameters in spontaneous dynamic rupture calculations for earthquake source studies, and physics-based near-source ground-motion prediction for seismic hazard and risk mitigation.
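Average fracture energy can be estimated from a computed stress-slip curve as the breakdown work above the residual stress level. The sketch below uses one common definition (integrating above the minimum stress), which is not necessarily the exact estimator of the paper:

```python
import numpy as np

def fracture_energy(slip, stress):
    """Breakdown work per unit area: G = integral of (tau(d) - tau_min) d(d).

    slip   : monotonically increasing slip samples (m)
    stress : shear stress at each slip sample (Pa)
    """
    return np.trapz(stress - stress.min(), slip)
```

As a consistency check, for a linear slip-weakening curve dropping from tau_p to tau_r over D_c, this integral equals 0.5*(tau_p - tau_r)*D_c.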

  19. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    Science.gov (United States)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A², between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models, and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
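For the small, self-similar regime, the moment-area relation follows from the Eshelby circular-crack solution at constant stress drop; a sketch (the 3 MPa default stress drop is illustrative):

```python
import math

def moment_circular_crack(area_km2, stress_drop_mpa=3.0):
    """M0 (N m) and Mw for a circular crack of area A with constant stress drop.

    Eshelby: delta_sigma = (7/16) * M0 / r**3, hence
    M0 = (16/7) * delta_sigma * r**3 with r = sqrt(A / pi),
    giving the self-similar scaling M0 proportional to A**1.5.
    """
    r = math.sqrt(area_km2 * 1e6 / math.pi)           # radius in metres
    m0 = (16.0 / 7.0) * stress_drop_mpa * 1e6 * r ** 3
    mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)         # moment magnitude
    return m0, mw
```

The transition regime discussed in the paper departs from this A^1.5 trend toward M0 ∝ A² before flattening for very large, width-limited ruptures.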

  20. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    Science.gov (United States)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and most of the seismic gap was left unbroken, leaving open the possibility of a future large earthquake in the region.

  1. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios

    Directory of Open Access Journals (Sweden)

    A. Muhammad

    2017-12-01

    This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis, with maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan – including horizontal evacuation area maps, assessment of temporary shelters considering the impact of ground shaking and tsunami, and integrated horizontal–vertical evacuation time maps – has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from stochastic tsunami simulation for future tsunamigenic events.

  2. Earthquake location in island arcs

    Science.gov (United States)

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  3. Numerical modeling of block structure dynamics: Application to the Vrancea region and study of earthquakes sequences in the synthetic catalogs

    International Nuclear Information System (INIS)

    Soloviev, A.A.; Vorobieva, I.A.

    1995-08-01

    A seismically active region is represented as a system of absolutely rigid blocks divided by infinitely thin plane faults. The interaction of the blocks along the fault planes and with the underlying medium is viscous-elastic. The system of blocks moves as a consequence of prescribed motion of the boundary blocks and the underlying medium. When for some part of a fault plane the stress surpasses a certain strength level, a stress drop ("a failure") occurs, which can in turn cause failures on other parts of the fault planes. The failures are considered as earthquakes. As a result of the numerical simulation, a synthetic earthquake catalog is produced. This procedure is applied to numerical modeling of the dynamics of the block structure approximating the tectonic structure of the Vrancea region. By numerical experiments, values of the model parameters were obtained for which the synthetic earthquake catalog has a spatial distribution of epicenters close to the real distribution of earthquake epicenters in the Vrancea region. The frequency-magnitude relations (Gutenberg-Richter curves) obtained for the synthetic and real catalogs share some common features. The sequences of earthquakes arising in the model are studied for some artificial structures. It is found that "foreshocks", "main shocks", and "aftershocks" can be detected among the earthquakes forming the sequences. The features of aftershocks, foreshocks, and catalogs of main shocks are analysed. (author). 5 refs, 12 figs, 16 tabs
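Comparing synthetic and real Gutenberg-Richter curves rests on frequency-magnitude statistics; the b-value of a catalog can be estimated with the Aki maximum-likelihood formula. A sketch for continuous magnitudes, without the binning correction used for discretized catalogs:

```python
import math
import random

def gr_b_value(mags, m_c):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c:
    b = log10(e) / (mean(m) - m_c)."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# synthetic catalog drawn from a Gutenberg-Richter law with b = 1
# (exceedances above m_c = 2.0 are exponential with rate b*ln(10))
random.seed(42)
catalog = [2.0 + random.expovariate(1.0 * math.log(10.0)) for _ in range(20000)]
```

Recovering the generating b-value from the synthetic catalog is the same kind of consistency check applied when matching a model's frequency-magnitude curve to the observed one.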

  4. Radon, gas geochemistry, groundwater, and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    King, Chi-Yu [Power Reactor and Nuclear Fuel Development Corp., Tono Geoscience Center, Toki, Gifu (Japan)

    1998-12-31

    Radon monitoring in groundwater, soil air, and the atmosphere has continued in many seismic areas of the world for earthquake-prediction and active-fault studies. Some recent measurements of radon and other geochemical and hydrological parameters have been made for sufficiently long periods, with reliable instruments, and together with measurements of meteorological variables and solid-earth tides. The resultant data are useful in better distinguishing earthquake-related changes from various background noises. Some measurements have been carried out in areas where other geophysical measurements are also being made. Comparative studies of various kinds of geophysical data are helpful in ascertaining the reality of earthquake-related and fault-related anomalies and in understanding the underlying mechanisms. Spatial anomalies of radon and other terrestrial gases have been observed for many active faults. Such observations indicate that gas concentrations are very much site dependent, particularly on fault zones where terrestrial fluids may move vertically. Temporal anomalies have been reliably observed before and after some recent earthquakes, including the 1995 Kobe earthquake, and the general pattern of anomaly occurrence remains the same as observed before: they are recorded at only relatively few sensitive sites, which can be at much larger distances than expected from existing earthquake-source models. The sensitivity of a sensitive site is also found to change with time. These results clearly show the inadequacy of the existing dilatancy-fluid diffusion and elastic-dislocation models for earthquake sources to explain earthquake-related geochemical and geophysical changes recorded at large distances. (J.P.N.)

  5. Statistical aspects and risks of human-caused earthquakes

    Science.gov (United States)

    Klose, C. D.

    2013-12-01

    The seismological community invests ample human capital and financial resources to research and predict risks associated with earthquakes. Industries such as the insurance and re-insurance sector are equally interested in using probabilistic risk models developed by the scientific community to transfer risks. These models are used to predict expected losses due to naturally occurring earthquakes. But what about the risks associated with human-caused earthquakes? Such risk models are largely absent from both industry and academic discourse. In countries around the world, informed citizens are becoming increasingly aware and concerned that this economic bias is not sustainable for long-term economic growth and environmental and human security. Ultimately, citizens look to their government officials to hold industry accountable. In the Netherlands, for example, the hydrocarbon industry is held accountable for causing earthquakes near Groningen. In Switzerland, geothermal power plants were shut down or suspended because they caused earthquakes in the cantons of Basel and St. Gallen. The public and the private non-extractive industry need access to information about earthquake risks in connection with sub/urban geoengineering activities, including natural gas production through fracking, geothermal energy production, carbon sequestration, mining, and water irrigation. This presentation illuminates statistical aspects of human-caused earthquakes with respect to different geologic environments. Statistical findings are based on the first catalog of human-caused earthquakes (Klose 2013). Findings discussed include the odds of dying during a medium-size earthquake that is set off by geomechanical pollution. Any kind of geoengineering activity causes this type of pollution and increases the likelihood of triggering nearby faults to rupture.

  6. The Differences in Source Dynamics Between Intermediate-Depth and Deep EARTHQUAKES:A Comparative Study Between the 2014 Rat Islands Intermediate-Depth Earthquake and the 2015 Bonin Islands Deep Earthquake

    Science.gov (United States)

    Twardzik, C.; Ji, C.

    2015-12-01

    It has been proposed that the mechanisms for intermediate-depth and deep earthquakes might be different. While previous extensive seismological studies suggested that such potential differences do not significantly affect the scaling relationships of earthquake parameters, there have been only a few investigations of their dynamic characteristics, especially fracture energy. In this work, the 2014 Mw 7.9 Rat Islands intermediate-depth (105 km) earthquake and the 2015 Mw 7.8 Bonin Islands deep (680 km) earthquake are studied from two different perspectives. First, their kinematic rupture models are constrained using teleseismic body waves. Our analysis reveals that the Rat Islands earthquake broke the entire cold core of the subducting slab, defined as the depth of the 650 °C isotherm. The inverted stress drop is 4 MPa, comparable to that of intra-plate earthquakes at shallow depths. On the other hand, the kinematic rupture model of the Bonin Islands earthquake, which occurred in a region lacking seismicity for the past forty years according to the GCMT catalog, exhibits an energetic rupture within a 35 km by 30 km slip patch and a high stress drop of 24 MPa. It is of interest to note that although complex rupture patterns are allowed to match the observations, the inverted slip distributions of these two earthquakes are simple enough to be approximated as the summation of a few circular/elliptical slip patches. Thus, we subsequently investigate their dynamic rupture models. We use a simple modelling approach in which we assume that the dynamic rupture propagation obeys a slip-weakening friction law, and we describe the distribution of stress and friction on the fault as a set of elliptical patches. We constrain the three dynamic parameters, namely the yield stress, the background stress prior to the rupture, and the slip-weakening distance, as well as the shape of the elliptical patches, directly from teleseismic body-wave observations. 
The study would help us
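The linear slip-weakening friction law assumed in such dynamic rupture models has a simple piecewise form; a sketch (the stress and distance values in the check below are illustrative):

```python
def slip_weakening_stress(delta, tau_p, tau_r, d_c):
    """Linear slip-weakening friction.

    Shear strength drops linearly from the yield stress tau_p to the
    residual stress tau_r over the slip-weakening distance d_c, then
    stays at tau_r for any further slip delta.
    """
    if delta >= d_c:
        return tau_r
    return tau_p - (tau_p - tau_r) * delta / d_c
```

The three quantities tau_p, the background stress, and d_c are exactly the dynamic parameters the study constrains from teleseismic body waves.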

  7. Earthquakes

    Science.gov (United States)

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  8. Flood Simulation Using WMS Model in Small Watershed after Strong Earthquake -A Case Study of Longxihe Watershed, Sichuan province, China

    Science.gov (United States)

    Guo, B.

    2017-12-01

    Mountain watersheds in Western China are prone to flash floods. The Wenchuan earthquake of May 12, 2008 destroyed land surfaces and triggered frequent landslides and debris flows, which further exacerbated flash flood hazards. Two giant torrent and debris flows occurred due to heavy rainfall after the earthquake, one on August 13, 2010, and the other on August 18, 2010. Flash flood reduction and risk assessment are key issues in post-disaster reconstruction. Hydrological prediction models are important and cost-efficient mitigation tools that are widely applied. In this paper, hydrological observations and simulation using remote sensing data and the WMS model are carried out in a typical flood-hit area, the Longxihe watershed, Dujiangyan City, Sichuan Province, China. The hydrological response of rainfall runoff is discussed. The results show that the WMS HEC-1 model can simulate well the runoff process of a small watershed in a mountainous area. This methodology can be used in other earthquake-affected areas for risk assessment and to predict the magnitude of flash floods. Key Words: Rainfall-runoff modeling. Remote Sensing. Earthquake. WMS.

  9. Sensitivity of Coulomb stress changes to slip models of source faults: A case study for the 2011 Mw 9.0 Tohoku-oki earthquake

    Science.gov (United States)

    Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.

    2017-12-01

    Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfers and to retrospectively explain stress triggering among earthquake sequences, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient, and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models, on the basis of which we compute the coseismic Coulomb stress changes induced by the mainshock. Our results indicate that nearby Coulomb stress changes for each slip model can be quite different, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nanki, Tonankai and Tokai are insignificant, approximately only 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state model (CRS) decay much faster with elapsed time in stress-triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress-triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS in the intermediate to long term.
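The Coulomb failure stress change at the heart of this analysis has a compact form. The sketch below uses the common effective-friction simplification for Skempton's coefficient; the default coefficient values and the sign conventions noted in the docstring are illustrative assumptions:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu=0.4, skempton=0.5):
    """Coulomb failure stress change on a receiver fault.

    dCFS = d_tau + mu' * d_sigma_n, with the effective friction
    mu' = mu * (1 - B) absorbing the undrained pore-pressure response
    through Skempton's coefficient B. Sign conventions: d_tau is
    positive in the rake (slip) direction of the receiver fault, and
    d_sigma_n is positive for unclamping (reduced compression).
    """
    return d_tau + mu * (1.0 - skempton) * d_sigma_n
```

A positive dCFS brings the receiver fault closer to failure (stress triggering); a negative value places it in a stress shadow, which is the distinction driving the aftershock-rate comparisons in the paper.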

  10. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    Science.gov (United States)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism, within 2 min after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved the SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  11. The Challenge of Centennial Earthquakes to Improve Modern Earthquake Engineering

    International Nuclear Information System (INIS)

    Saragoni, G. Rodolfo

    2008-01-01

    The recent commemoration of the centennial of the San Francisco and Valparaiso 1906 earthquakes has given the opportunity to reanalyze their damage from a modern earthquake engineering perspective. These two earthquakes, plus Messina-Reggio Calabria 1908, had a strong impact on the birth and development of earthquake engineering. The study of the seismic performance of some still-existing buildings that survived centennial earthquakes represents a challenge to better understand the limitations of the earthquake design methods in use today. Of the three centennial earthquakes considered, only the Valparaiso 1906 earthquake has been repeated, as the Central Chile 1985, Ms = 7.8 earthquake. In this paper, a comparative study of the damage produced by the 1906 and 1985 Valparaiso earthquakes is done in the neighborhood of Valparaiso harbor. In this study, the only three centennial buildings of 3 stories that survived both earthquakes almost undamaged were identified. Since accelerograms for the 1985 earthquake were recorded both at El Almendral soil conditions and on rock, the vulnerability analysis of these buildings is done considering instrumental measurements of the demand. The study concludes that the good performance of these buildings in the epicentral zone of large earthquakes cannot be well explained by modern earthquake engineering methods. It is therefore recommended to use more suitable instrumental parameters in the future, such as the destructiveness potential factor, to describe earthquake demand

  12. Anatomical Cystocele Recurrence: Development and Internal Validation of a Prediction Model.

    Science.gov (United States)

    Vergeldt, Tineke F M; van Kuijk, Sander M J; Notten, Kim J B; Kluivers, Kirsten B; Weemhoff, Mirjam

    2016-02-01

To develop a prediction model that estimates the risk of anatomical cystocele recurrence after surgery. The databases of two multicenter prospective cohort studies were combined, and we performed a retrospective secondary analysis of these data. Women undergoing an anterior colporrhaphy without mesh materials and without previous pelvic organ prolapse (POP) surgery filled in a questionnaire, underwent translabial three-dimensional ultrasonography, and underwent staging of POP preoperatively and postoperatively. We developed a prediction model using multivariable logistic regression and internally validated it using standard bootstrapping techniques. The performance of the prediction model was assessed by computing indices of overall performance, discriminative ability, calibration, and its clinical utility by computing test characteristics. Of 287 included women, 149 (51.9%) had anatomical cystocele recurrence. Factors included in the prediction model were assisted delivery, preoperative cystocele stage, number of compartments involved, major levator ani muscle defects, and levator hiatal area during Valsalva. Potential predictors that were excluded after backward elimination because of high P values were age, body mass index, number of vaginal deliveries, and family history of POP. The shrinkage factor resulting from the bootstrap procedure was 0.91. After correction for optimism, Nagelkerke's R² and the Brier score were 0.15 and 0.22, respectively. This indicates satisfactory model fit. The area under the receiver operating characteristic curve of the prediction model was 71.6% (95% confidence interval 65.7-77.5). After correction for optimism, the area under the receiver operating characteristic curve was 69.7%. This prediction model, including history of assisted delivery, preoperative stage, number of compartments, levator defects, and levator hiatus, estimates the risk of anatomical cystocele recurrence.
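As a sketch of the internal-validation procedure this abstract describes (multivariable logistic regression followed by bootstrap-based optimism correction, Harrell-style), the fragment below runs on synthetic data; the predictor count, effect sizes, and bootstrap count are illustrative assumptions, not the study's values:

```python
# Bootstrap optimism correction for a logistic prediction model (illustrative
# sketch with synthetic data, not the study's dataset or predictors).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 287                        # cohort size taken from the abstract
X = rng.normal(size=(n, 5))    # 5 hypothetical predictors
logit = 0.8 * X[:, 0] + 0.5 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # ~50% recurrence, as reported

model = LogisticRegression().fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Refit on bootstrap resamples; the mean gap between the AUC on the resample
# and the AUC on the original data estimates the optimism of the apparent AUC.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

corrected_auc = apparent_auc - np.mean(optimism)
```

On the study's real data, this is the kind of procedure that shrinks the apparent AUC of 71.6% toward the reported optimism-corrected 69.7%.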

  13. About Block Dynamic Model of Earthquake Source.

    Science.gov (United States)

    Gusev, G. A.; Gufeld, I. L.

One may note the absence of progress in earthquake prediction. Short-term prediction (on a diurnal time scale, with localisation also predicted) would have practical meaning. The failure is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in faults. Geological and geophysical monitoring supports the notion of the geological medium as an open, dissipative block system with limit energy saturation. Variations of the volume stressed state close to critical states are associated with the interaction of an inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, which is more pronounced in faults. In the background state, small blocks of the fault medium allow the sliding of great blocks along the faults; but under considerable variations of the ascending gas streams, bound chains of small blocks can form, so that a bound state of great blocks may result (an earthquake source). Using these notions, we recently proposed a dynamical earthquake source model based on a generalized chain of nonlinearly coupled oscillators of the Fermi-Pasta-Ulam (FPU) type. The generalization concerns the chain's inhomogeneity and different external actions, imitating physical processes in the real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered, which permitted study of the FPU recurrence (return to the initial state). Probabilistic properties of the quasi-periodic motion were found, and the problem of chain decay due to nonlinearity and external perturbations was posed. The thresholds and the dependence of the lifetime of the chain were studied, and large fluctuations of the lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strong-dissipation case, when oscillatory motion is suppressed, specific effects are discovered. For noise action and constantly arising
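The chain the abstract builds on can be sketched numerically. The toy fragment below (an illustrative stand-in, not the authors' model) integrates a fixed-end FPU-β chain with linear damping, the dissipation ingredient discussed above; the parameter values are arbitrary assumptions:

```python
# Toy FPU-beta chain with linear damping, fixed ends (illustrative only).
import numpy as np

def fpu_step(q, v, dt, beta=1.0, gamma=0.1):
    """One semi-implicit velocity-Verlet-style step (approximate because
    the damping force depends on velocity)."""
    def accel(q, v):
        d = np.diff(np.concatenate(([0.0], q, [0.0])))  # bond extensions
        # Linear coupling plus FPU-beta cubic terms, minus viscous damping.
        return d[1:] - d[:-1] + beta * (d[1:]**3 - d[:-1]**3) - gamma * v
    a = accel(q, v)
    q_new = q + dt * v + 0.5 * dt**2 * a
    v_half = v + 0.5 * dt * a
    v_new = v_half + 0.5 * dt * accel(q_new, v_half)
    return q_new, v_new

def energy(q, v, beta=1.0):
    d = np.diff(np.concatenate(([0.0], q, [0.0])))
    return 0.5 * np.sum(v**2) + np.sum(0.5 * d**2 + 0.25 * beta * d**4)

N = 32
q = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))  # excite the lowest mode
v = np.zeros(N)
e0 = energy(q, v)
for _ in range(2000):
    q, v = fpu_step(q, v, dt=0.05)
e1 = energy(q, v)   # damping drains the chain's energy
```

Removing the damping term (`gamma=0`) turns this into the classic energy-conserving FPU setting in which the recurrence phenomenon mentioned above is observed.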

  14. How fault geometry controls earthquake magnitude

    Science.gov (United States)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.

  15. Tectonic feedback and the earthquake cycle

    Science.gov (United States)

    Lomnitz, Cinna

    1985-09-01

    The occurrence of cyclical instabilities along plate boundaries at regular intervals suggests that the process of earthquake causation differs in some respects from the model of elastic rebound in its simplest forms. The model of tectonic feedback modifies the concept of this original model in that it provides a physical interaction between the loading rate and the state of strain on the fault. Two examples are developed: (a) Central Chile, and (b) Mexico. The predictions of earthquake hazards for both types of models are compared.

  16. Selection of earthquake resistant design criteria for nuclear power plants: Methodology and technical cases: Dislocation models of near-source earthquake ground motion: A review

    International Nuclear Information System (INIS)

    Luco, J.E.

    1987-05-01

The solutions available for a number of dynamic dislocation fault models are examined in an attempt to establish some of the expected characteristics of earthquake ground motion in the near-source region. In particular, solutions for two-dimensional anti-plane shear and plane-strain models as well as for three-dimensional fault models in full-space, uniform half-space and layered half-space media are reviewed.

  17. UCERF3: A new earthquake forecast for California's complex fault system

    Science.gov (United States)

    Field, Edward H.; ,

    2015-01-01

    With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.

  18. The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake

    Science.gov (United States)

    Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena

    2017-04-01

Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low-frequency earthquakes, slow slip events, and other modes of fault slip. These phenomena represent modes of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes, their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. Whereas the regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, slow earthquakes release energy in ways that do little damage. This study focuses on the characteristics of the Mw5.6 earthquake that occurred in the Sofia seismic zone on May 22nd, 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria and faces considerable earthquake risk. The Sofia seismic zone is located in south-western Bulgaria, an area with pronounced tectonic activity and proven crustal movement. In the 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) experienced two strong earthquakes with epicentral intensity of 10 MSK. The strongest event to occur in the vicinity of the city of Sofia during the 20th century is the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The 2012 quake occurred in an area characterized by a long quiescence (of 95 years) for moderate events; moreover, a reduced number of small earthquakes have also been registered in the recent past. The Mw5.6 earthquake was widely felt on the territory of Bulgaria and neighbouring countries. No casualties or severe injuries have been reported. Mostly moderate damage was observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative for a

  19. Biopsychosocial model of chronic recurrent pain

    Directory of Open Access Journals (Sweden)

    Zlatka Rakovec-Felser

    2009-07-01

Pain is not merely a symptom of disease but a complex independent phenomenon in which psychological factors are always present (Sternberg, 1973). Especially with chronic, recurrent pain, it is more constructive to think of chronic pain as a syndrome that evolves over time, involving a complex interaction of physiological/organic, psychological, and behavioural processes. This study of chronic recurrent functional pain covers the tension form of headache. Fifty sufferers were randomly chosen among those who had been seeking medical help for more than a year. We tested their pain intensity and duration, the extent of their subjective experience of accommodation efforts, temperament characteristics, coping strategies, personal traits, and the role of pain in intra- and interpersonal communication. Finally, we compared this group with a control group (without any manifest physical disorders) using analysis of variance (MANOVA). The typical person who suffers and seeks medical help is mostly a woman, married, with elementary or secondary education, about 40 years old. Pain seems to appear in a phase of stress-induced psychophysical fatigue, in persons with lower constitutional resistance to different influences, greater irritability, and a greater number of physiologic correlates of emotional tension. Because of their ineffective style of coping, they appear to quickly exhaust their adaptation potential. Through their higher level of social-field dependence, the reactions of other persons (doctor, spouse) could be important factors in reinforcement and social learning processes. In managing chronic pain, especially tension headache, it is very important to adopt the biopsychosocial model of pain and an integrative model of treatment. The intra- and inter-subjective psychological functions of pain must be recognised as soon as possible.

  20. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    International Nuclear Information System (INIS)

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study
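Harrell's concordance index (C-index), the survival metric reported above, can be computed in a few lines; this is a generic sketch, not the authors' implementation:

```python
# Generic Harrell's C-index for right-censored survival data (sketch).
def cindex(times, events, risk):
    """Fraction of comparable pairs whose predicted risk orders their
    observed disease-free survival times correctly (ties count 1/2)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                  # censored subjects cannot anchor a pair
        for j in range(n):
            if times[i] < times[j]:   # subject i recurred before j was observed
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ordered toy example: higher predicted risk -> earlier recurrence.
c = cindex([1, 2, 3, 4], [1, 1, 0, 1], [4, 3, 2, 1])
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ordering, which puts the reported 0.81 ± 0.02 in context.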

  1. Long-period effects of the Denali earthquake on water bodies in the Puget Lowland: Observations and modeling

    Science.gov (United States)

    Barberopoulou, A.; Qamar, A.; Pratt, T.L.; Steele, W.P.

    2006-01-01

    Analysis of strong-motion instrument recordings in Seattle, Washington, resulting from the 2002 Mw 7.9 Denali, Alaska, earthquake reveals that amplification in the 0.2-to 1.0-Hz frequency band is largely governed by the shallow sediments both inside and outside the sedimentary basins beneath the Puget Lowland. Sites above the deep sedimentary strata show additional seismic-wave amplification in the 0.04- to 0.2-Hz frequency range. Surface waves generated by the Mw 7.9 Denali, Alaska, earthquake of 3 November 2002 produced pronounced water waves across Washington state. The largest water waves coincided with the area of largest seismic-wave amplification underlain by the Seattle basin. In the current work, we present reports that show Lakes Union and Washington, both located on the Seattle basin, are susceptible to large water waves generated by large local earthquakes and teleseisms. A simple model of a water body is adopted to explain the generation of waves in water basins. This model provides reasonable estimates for the water-wave amplitudes in swimming pools during the Denali earthquake but appears to underestimate the waves observed in Lake Union.

  2. Calculation of displacements on fractures intersecting canisters induced by earthquakes: Aberg, Beberg and Ceberg examples

    Energy Technology Data Exchange (ETDEWEB)

    LaPointe, P.R.; Cladouhos, T. [Golder Associates Inc. (Sweden); Follin, S. [Golder Grundteknik KB (Sweden)

    1999-01-01

This study shows how the method developed in La Pointe and others can be applied to assess the safety of canisters due to secondary slippage of fractures intersecting those canisters in the event of an earthquake. The method is applied to the three generic sites Aberg, Beberg and Ceberg. Estimation of secondary slippage or displacement is a four-stage process. The first stage is the analysis of lineament trace data in order to quantify the scaling properties of the fractures. This is necessary to ensure that all scales of fracturing are properly represented in the numerical simulations. The second stage consists of creating stochastic discrete fracture network (DFN) models for jointing and small faulting at each of the generic sites. The third stage is to combine the stochastic DFN model with mapped lineament data at larger scales into data sets for the displacement calculations. The final stage is to carry out the displacement calculations for all of the earthquakes that might occur during the next 100,000 years. Large earthquakes are located along any lineaments in the vicinity of the site that are of sufficient size to accommodate an earthquake of the specified magnitude. These lineaments are assumed to represent vertical faults. Smaller earthquakes are located at random. The magnitude of the earthquake that any fault could generate is based upon the mapped surface trace length of the lineaments, and is calculated from regression relations. Recurrence rates for a given magnitude of earthquake are based upon published studies for Sweden. A major assumption in this study is that future earthquakes will be similar in magnitude, location and orientation to earthquakes in the geological and historical records of Sweden. Another important assumption is that the displacement calculations based upon linear elasticity and linear elastic fracture mechanics provide a conservative (over-)estimate of possible displacements. A third assumption is that the world

  3. Adaptively smoothed seismicity earthquake forecasts for Italy

    Directory of Open Access Journals (Sweden)

    Yan Y. Kagan

    2010-11-01

We present a model for estimation of the probabilities of future earthquakes of magnitudes m ≥ 4.95 in Italy. This model is a modified version of that proposed for California, USA, by Helmstetter et al. [2007] and Werner et al. [2010a], and it approximates seismicity using a spatially heterogeneous, temporally homogeneous Poisson point process. The temporal, spatial and magnitude dimensions are entirely decoupled. Magnitudes are independently and identically distributed according to a tapered Gutenberg-Richter magnitude distribution. We have estimated the spatial distribution of future seismicity by smoothing the locations of past earthquakes listed in two Italian catalogs: a short instrumental catalog, and a longer instrumental and historic catalog. The bandwidth of the adaptive spatial kernel is estimated by optimizing the predictive power of the kernel estimate of the spatial earthquake density in retrospective forecasts. When available and reliable, we used small earthquakes of m ≥ 2.95 to reveal active fault structures and probable future epicenters. By calibrating the model with these two catalogs of different durations to create two forecasts, we intend to quantify the loss (or gain) of predictability incurred when only a short, but recent, data record is available. Both forecasts were scaled to five and ten years, and have been submitted to the Italian prospective forecasting experiment of the global Collaboratory for the Study of Earthquake Predictability (CSEP). An earlier forecast from the model was submitted by Helmstetter et al. [2007] to the Regional Earthquake Likelihood Model (RELM) experiment in California, and with more than half of the five-year experimental period over, the forecast has performed better than the others.
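The adaptive spatial smoothing described above can be sketched as follows; this illustrative fragment (not the paper's code) gives each epicenter a Gaussian kernel whose bandwidth is its distance to the k-th nearest neighbouring epicenter, so dense clusters are smoothed less than isolated events. The toy catalog and choice of k are assumptions:

```python
# Adaptively smoothed 2-D seismicity density (illustrative sketch).
import numpy as np

def adaptive_density(events, grid, k=2):
    """Sum of 2-D Gaussian kernels with per-event adaptive bandwidths."""
    # Pairwise distances between epicenters; column 0 after sorting is the
    # zero self-distance, so index k is the k-th nearest-neighbour distance.
    d = np.linalg.norm(events[:, None, :] - events[None, :, :], axis=-1)
    bw = np.sort(d, axis=1)[:, k]
    dens = np.zeros(len(grid))
    for ev, h in zip(events, bw):
        r2 = np.sum((grid - ev) ** 2, axis=1)
        dens += np.exp(-r2 / (2 * h**2)) / (2 * np.pi * h**2)
    return dens

# Toy catalog: a tight cluster near the origin plus one isolated event.
events = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
pts = np.array([[0.05, 0.05], [2.5, 2.5]])
dens = adaptive_density(events, pts)   # high inside the cluster, low between
```

In the paper's setting the bandwidth parameter is not fixed a priori but tuned by maximizing the retrospective predictive power of the resulting density.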

  4. Demonstration of the Cascadia G‐FAST geodetic earthquake early warning system for the Nisqually, Washington, earthquake

    Science.gov (United States)

    Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan

    2016-01-01

A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two-stage approach to EEW: (1) detection and initial characterization using strong-motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large-magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G-FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G-FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor-driven finite-fault-slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G-FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey's ShakeAlert EEW system.
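The peak-ground-displacement (PGD) scaling module can be illustrated with the standard log-linear form log10(PGD) = A + B·Mw + C·Mw·log10(R), inverted for magnitude from GNSS displacements; the coefficients below are placeholder assumptions for the sketch, not G-FAST's calibrated values:

```python
# PGD magnitude-scaling sketch: forward model and inversion for Mw.
# Coefficients are illustrative placeholders, not G-FAST's regression values.
import math

A, B, C = -4.434, 1.047, -0.138

def pgd_forward(mw, r_km):
    """Predicted PGD (cm) at hypocentral distance r_km for magnitude mw."""
    return 10 ** (A + B * mw + C * mw * math.log10(r_km))

def mw_from_pgd(pgd_cm, r_km):
    """Invert the scaling law for magnitude from one station's PGD."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

# Round trip at a Nisqually-like magnitude and 100 km distance.
mw = mw_from_pgd(pgd_forward(6.8, 100.0), 100.0)
```

In an operational system, magnitude estimates from many stations would be averaged, which is part of what makes the geodetic estimate robust for large events where seismic magnitudes saturate.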

  5. Encoding of phonology in a recurrent neural model of grounded speech

    NARCIS (Netherlands)

    Alishahi, Afra; Barking, Marie; Chrupala, Grzegorz; Levy, Roger; Specia, Lucia

    2017-01-01

    We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how

  6. Development and validation of a prognostic model for recurrent glioblastoma patients treated with bevacizumab and irinotecan

    DEFF Research Database (Denmark)

    Urup, Thomas; Dahlrot, Rikke Hedegaard; Grunnet, Kirsten

    2016-01-01

Background: Predictive markers and prognostic models are required in order to individualize treatment of recurrent glioblastoma (GBM) patients. Here, we sought to identify clinical factors able to predict response and survival in recurrent GBM patients treated with bevacizumab (BEV) and irinotecan. Material and methods: A total of 219 recurrent GBM patients treated with BEV plus irinotecan according to a previously published treatment protocol were included in the initial population. Prognostic models were generated by means of multivariate logistic and Cox regression analysis. Results: In multivariate

  7. Updating the USGS seismic hazard maps for Alaska

    Science.gov (United States)

    Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.

    2015-01-01

    The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
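The hazard bookkeeping described above (fault and areal sources with magnitude-recurrence information combined into shaking probabilities) reduces, for time-independent Poisson sources, to summing annual rates; the source names and recurrence intervals in this sketch are invented for illustration, not values from the Alaska model:

```python
# Combining independent Poisson earthquake sources into an exceedance
# probability (illustrative sketch; rates are invented, not USGS values).
import math

sources = {
    "megathrust segment": 1 / 500.0,   # hypothetical mean recurrence, 1/yr
    "crustal fault":      1 / 2000.0,
    "areal seismicity":   1 / 800.0,
}

t = 50.0                                    # exposure window, years
total_rate = sum(sources.values())          # independent Poisson rates add
p_any = 1.0 - math.exp(-total_rate * t)     # P(at least one event in t years)
```

A full probabilistic seismic hazard analysis does this rate bookkeeping per ground-motion level, convolving each source's rate with a ground-motion model rather than with a single yes/no event count.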

  8. Of overlapping Cantor sets and earthquakes: analysis of the discrete Chakrabarti-Stinchcombe model

    Science.gov (United States)

    Bhattacharyya, Pratip

    2005-03-01

We report an exact analysis of a discrete form of the Chakrabarti-Stinchcombe model for earthquakes (Physica A 270 (1999) 27), which considers a pair of dynamically overlapping finite generations of the Cantor set as a prototype of geological faults. In this model the nth generation of the Cantor set shifts on its replica in discrete steps of the length of a line segment in that generation and periodic boundary conditions are assumed. We determine the general form of time sequences for the constant magnitude overlaps and, hence, obtain the complete time-series of overlaps by the superposition of these sequences for all overlap magnitudes. From the time-series we derive the exact frequency distribution of the overlap magnitudes. The corresponding probability distribution of the logarithm of overlap magnitudes for the nth generation is found to assume the form of the binomial distribution for n Bernoulli trials with probability 1/3 for the success of each trial. For an arbitrary pair of consecutive overlaps in the time-series where the magnitude of the earlier overlap is known, we find that the magnitude of the later overlap can be determined with a definite probability; the conditional probability for each possible magnitude of the later overlap follows the binomial distribution for k Bernoulli trials with probability 1/2 for the success of each trial and the number k is determined by the magnitude of the earlier overlap. Although this model does not produce the Gutenberg-Richter law for earthquakes, our results indicate that the fractal structure of faults admits a probabilistic prediction of earthquake magnitudes.
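The binomial result quoted above is easy to verify numerically: enumerate all 3^n periodic shifts of the n-th Cantor generation over its replica and tally the overlap magnitudes. This is an independent check written for this listing, not the paper's code:

```python
# Overlap statistics of the n-th Cantor generation sliding over its replica
# with periodic boundaries: log2(overlap) should be Binomial(n, 1/3).
from collections import Counter
from math import comb, log2

def cantor_indicator(n):
    """0/1 occupancy of the n-th Cantor generation on 3**n equal cells."""
    s = [1]
    for _ in range(n):
        s = s + [0] * len(s) + s    # keep the outer thirds, empty the middle
    return s

n = 3
c = cantor_indicator(n)
N = len(c)                          # 3**n cells
overlaps = [sum(c[i] & c[(i - shift) % N] for i in range(N))
            for shift in range(N)]
counts = Counter(int(log2(o)) for o in overlaps)

# Binomial prediction: comb(n, k) * 2**(n-k) of the 3**n shifts
# (i.e. 3**n * C(n,k) (1/3)**k (2/3)**(n-k)) give overlap magnitude 2**k.
predicted = {k: comb(n, k) * 2 ** (n - k) for k in range(n + 1)}
```

For n = 3 the tally reproduces the predicted counts exactly: one shift with overlap 8, six with overlap 4, twelve with overlap 2, and eight with overlap 1.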

  9. Local and regional minimum 1D models for earthquake location and data quality assessment in complex tectonic regions: application to Switzerland

    International Nuclear Information System (INIS)

    Husen, S.; Clinton, J. F.; Kissling, E.

    2011-01-01

    One-dimensional (1D) velocity models are still widely used for computing earthquake locations at seismological centers or in regions where three-dimensional (3D) velocity models are not available due to the lack of data of sufficiently high quality. The concept of the minimum 1D model with appropriate station corrections provides a framework to compute initial hypocenter locations and seismic velocities for local earthquake tomography. Since a minimum 1D model represents a solution to the coupled hypocenter-velocity problem it also represents a suitable velocity model for earthquake location and data quality assessment, such as evaluating the consistency in assigning pre-defined weighting classes and average picking error. Nevertheless, the use of a simple 1D velocity structure in combination with station delays raises the question of how appropriate the minimum 1D model concept is when applied to complex tectonic regions with significant three-dimensional (3D) variations in seismic velocities. In this study we compute one regional minimum 1D model and three local minimum 1D models for selected subregions of the Swiss Alpine region, which exhibits a strongly varying Moho topography. We compare the regional and local minimum 1D models in terms of earthquake locations and data quality assessment to measure their performance. Our results show that the local minimum 1D models provide more realistic hypocenter locations and better data fits than a single model for the Alpine region. We attribute this to the fact that in a local minimum 1D model local and regional effects of the velocity structure can be better separated. Consequently, in tectonically complex regions, minimum 1D models should be computed in sub-regions defined by similar structure, if they are used for earthquake location and data quality assessment. (authors)

  10. Local and regional minimum 1D models for earthquake location and data quality assessment in complex tectonic regions: application to Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Husen, S.; Clinton, J. F. [Swiss Seismological Service, ETH Zuerich, Zuerich (Switzerland); Kissling, E. [Institute of Geophysics, ETH Zuerich, Zuerich (Switzerland)

    2011-10-15

    One-dimensional (1D) velocity models are still widely used for computing earthquake locations at seismological centers or in regions where three-dimensional (3D) velocity models are not available due to the lack of data of sufficiently high quality. The concept of the minimum 1D model with appropriate station corrections provides a framework to compute initial hypocenter locations and seismic velocities for local earthquake tomography. Since a minimum 1D model represents a solution to the coupled hypocenter-velocity problem it also represents a suitable velocity model for earthquake location and data quality assessment, such as evaluating the consistency in assigning pre-defined weighting classes and average picking error. Nevertheless, the use of a simple 1D velocity structure in combination with station delays raises the question of how appropriate the minimum 1D model concept is when applied to complex tectonic regions with significant three-dimensional (3D) variations in seismic velocities. In this study we compute one regional minimum 1D model and three local minimum 1D models for selected subregions of the Swiss Alpine region, which exhibits a strongly varying Moho topography. We compare the regional and local minimum 1D models in terms of earthquake locations and data quality assessment to measure their performance. Our results show that the local minimum 1D models provide more realistic hypocenter locations and better data fits than a single model for the Alpine region. We attribute this to the fact that in a local minimum 1D model local and regional effects of the velocity structure can be better separated. Consequently, in tectonically complex regions, minimum 1D models should be computed in sub-regions defined by similar structure, if they are used for earthquake location and data quality assessment. (authors)

  11. Midbroken Reinforced Concrete Shear Frames Due to Earthquakes

    DEFF Research Database (Denmark)

    Köylüoglu, H. U.; Cakmak, A. S.; Nielsen, Søren R. K.

    A non-linear hysteretic model for the response and local damage analyses of reinforced concrete shear frames subject to earthquake excitation is proposed, and the model is applied to analyse midbroken reinforced concrete (RC) structures due to earthquake loads. Each storey of the shear frame...

  12. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    Science.gov (United States)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a 'really big one' will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened, and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and from turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are still in the recent cluster that started about 1700 years ago than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write and display spreadsheets, like those shown below. Insight can also come from baseball analogies. The cluster issue is like deciding whether a hitter's performance in the next game is better described by his lifetime record or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like choosing whether to assume that a player's performance is the same from year to year, or changes over their career. Thus saying "the chance of
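The two modeling choices discussed above can be sketched in a few lines of Python; the recurrence means, standard deviation, and elapsed time below are illustrative stand-ins, not values derived from the paleoseismic record:

```python
import math

def poisson_prob(mean_recurrence_yr, window_yr):
    """Time-independent model: the probability of an earthquake is constant,
    so P(event within window) = 1 - exp(-window / mean recurrence)."""
    return 1.0 - math.exp(-window_yr / mean_recurrence_yr)

def conditional_prob(mean, sigma, elapsed_yr, window_yr):
    """Time-dependent (Gaussian renewal) model: probability of an event in
    the next window_yr, given elapsed_yr since the last event."""
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mean) / (sigma * math.sqrt(2.0))))
    return (cdf(elapsed_yr + window_yr) - cdf(elapsed_yr)) / (1.0 - cdf(elapsed_yr))

# Assumed means: ~500 yr from the full record vs. ~330 yr if the recent
# cluster is still ongoing -- the "lifetime record vs. hot streak" choice.
p_full    = poisson_prob(500.0, 50.0)
p_cluster = poisson_prob(330.0, 50.0)
p_renewal = conditional_prob(500.0, 200.0, 320.0, 50.0)
```

Swapping the assumed mean recurrence, or switching between the constant-rate and renewal models, changes the quoted 50-year probability substantially, which is exactly the sensitivity the spreadsheets are meant to expose.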

  13. Mapping Tectonic Stress Using Earthquakes

    International Nuclear Information System (INIS)

    Arnold, Richard; Townend, John; Vignaux, Tony

    2005-01-01

    An earthquake occurs when the forces acting on a fault overcome its intrinsic strength and cause it to slip abruptly. Understanding more specifically why earthquakes occur at particular locations and times is complicated because in many cases we do not know what these forces actually are, or indeed what processes ultimately trigger slip. The goal of this study is to develop, test, and implement a Bayesian method of reliably determining tectonic stresses using the most abundant stress gauges available - earthquakes themselves. Existing algorithms produce reasonable estimates of the principal stress directions, but yield unreliable error bounds as a consequence of the generally weak constraint on stress imposed by any single earthquake, observational errors, and an unavoidable ambiguity between the fault normal and the slip vector. A statistical treatment of the problem can take into account observational errors, combine data from multiple earthquakes in a consistent manner, and provide realistic error bounds on the estimated principal stress directions. We have developed a realistic physical framework for modelling multiple earthquakes and show how the strong physical and geometrical constraints present in this problem allow inference to be made about the orientation of the principal axes of stress in the Earth's crust.

  14. Earthquake response analysis of a base isolated building

    International Nuclear Information System (INIS)

    Mazda, T.; Shiojiri, H.; Sawada, Y.; Harada, O.; Kawai, N.; Ontsuka, S.

    1989-01-01

    Recently, seismic isolation has become a popular method in the design of important structures and equipment against earthquakes. However, it is desirable to accumulate demonstration data on the reliability of seismically isolated structures and to establish analysis methods for those structures. Based on the above recognition, vibration tests of a base isolated building were carried out in Tsukuba Science City. Since then, many earthquake records have been obtained at the building. In order to examine the validity of the numerical models, earthquake response analyses have been executed using both a lumped-mass model and a finite element model.

  15. A model of seismic focus and related statistical distributions of earthquakes

    International Nuclear Information System (INIS)

    Apostol, Bogdan-Felix

    2006-01-01

    A growth model for accumulating seismic energy in a localized seismic focus is described, which introduces a fractional parameter r on geometrical grounds. The model is employed for deriving a power-type law for the statistical distribution in energy, where the parameter r contributes to the exponent, as well as corresponding time and magnitude distributions for earthquakes. The accompanying seismic activity of foreshocks and aftershocks is discussed in connection with this approach, as based on Omori distributions, and the rate of released energy is derived

  16. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes.

    Science.gov (United States)

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and of the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher proportions of at-risk populations were found to be more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a decrease in the expected number of casualties.

  17. Earthquake Damping Device for Steel Frame

    Science.gov (United States)

    Zamri Ramli, Mohd; Delfy, Dezoura; Adnan, Azlan; Torman, Zaida

    2018-04-01

    Structures such as buildings, bridges and towers are prone to collapse when natural phenomena like earthquakes occur. Therefore, many design codes have been reviewed and new technologies introduced to resist earthquake energy, especially in buildings, to avoid collapse. The tuned mass damper is one of the earthquake-reduction devices installed on structures to minimise earthquake effects. This study analyses the effectiveness of a tuned mass damper through experimental work and finite element modelling. Comparisons are made between these two models under harmonic excitation. Based on the results, installing a tuned mass damper reduces the dynamic response of the frame, but only at several input frequencies. At the highest input frequency applied, the tuned mass damper failed to reduce the response. In conclusion, in order to use a proper damper design, detailed analysis must be carried out to obtain a sufficient design based on the location of the structure and its specific ground accelerations.

  18. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been almost no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, where we need the past slip rates, leading to huge computational costs. This cost is one reason why simulations in viscoelastic media are so rare. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, which obeys an RSF law, at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. The smaller viscosity means the smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half-space
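The trick the abstract describes (trading the hereditary integral for a memory variable governed by a first-order ODE) can be demonstrated for an SLS relaxation function. This is a generic sketch, not the authors' code; the moduli, relaxation time, and strain history are arbitrary:

```python
import math

G0, G_inf, tau = 30.0, 10.0, 2.0   # assumed unrelaxed modulus, relaxed modulus, relaxation time
dt, n = 0.001, 4000
eps = [math.sin(0.5 * i * dt) for i in range(n + 1)]   # imposed strain history

# (a) Hereditary integral: needs the whole strain-rate history at every
#     evaluation, i.e. O(n) work per time step.
def sigma_hereditary(k):
    s = sum((G0 - G_inf) * math.exp(-(k - i - 1) * dt / tau) * (eps[i + 1] - eps[i])
            for i in range(k))
    return G_inf * eps[k] + s

# (b) Memory variable zeta obeying d(zeta)/dt = (G0 - G_inf)*d(eps)/dt - zeta/tau,
#     so sigma = G_inf*eps + zeta with O(1) work per step and no stored history.
zeta = 0.0
for i in range(n):
    zeta += (G0 - G_inf) * (eps[i + 1] - eps[i]) - zeta * dt / tau
sigma_mv = G_inf * eps[n] + zeta

# Both give the same stress to within discretization error.
```

The memory-variable recursion reproduces the convolution weights exp(-m*dt/tau) as (1 - dt/tau)^m, which is why the two stresses agree for small time steps while the per-step cost drops from O(n) to O(1).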

  19. Sedimentary Signatures of Submarine Earthquakes: Deciphering the Extent of Sediment Remobilization from the 2011 Tohoku Earthquake and Tsunami and 2010 Haiti Earthquake

    Science.gov (United States)

    McHugh, C. M.; Seeber, L.; Moernaut, J.; Strasser, M.; Kanamatsu, T.; Ikehara, K.; Bopp, R.; Mustaque, S.; Usami, K.; Schwestermann, T.; Kioka, A.; Moore, L. M.

    2017-12-01

    The 2004 Sumatra-Andaman Mw9.3 and the 2011 Tohoku (Japan) Mw9.0 earthquakes and tsunamis were huge geological events with major societal consequences. Both were along subduction boundaries and ruptured portions of these boundaries that had been deemed incapable of such events. Submarine strike-slip earthquakes, such as the 2010 Mw7.0 in Haiti, are smaller but may be closer to population centers and can be similarly catastrophic. Both classes of earthquakes remobilize sediment and leave distinct signatures in the geologic record through a wide range of processes that depend on both the environment and earthquake characteristics. Understanding them has the potential of greatly expanding the record of past earthquakes, which is critical for geohazard analysis. Recent events offer precious ground truth about the earthquakes, and short-lived radioisotopes offer invaluable tools to identify the sediments they remobilized. For the 2011 Mw9.0 Japan earthquake they document the spatial extent of remobilized sediment from water depths of 626 m on the forearc slope to trench depths of 8000 m. Subbottom profiles, multibeam bathymetry and 40 piston cores collected by the R/V Natsushima and R/V Sonne expeditions to the Japan Trench document multiple turbidites and high-density flows. Core tops enriched in excess 210Pb, 137Cs and 134Cs reveal sediment deposited by the 2011 Tohoku earthquake and tsunami. The thickest deposits (2 m) were documented on a mid-slope terrace and in the trench (4000-8000 m). Sediment was deposited on some terraces (600-3000 m), but shed from the steep forearc slope (3000-4000 m). The 2010 Haiti mainshock ruptured along the southern flank of Canal du Sud and triggered multiple nearshore sediment failures, generated turbidity currents and stirred fine sediment into suspension throughout this basin. A tsunami was modeled to stem from both sediment failures and tectonics. Remobilized sediment was tracked with short-lived radioisotopes from the nearshore, slope, in fault basins including the

  20. Tsunami Simulations in the Western Makran Using Hypothetical Heterogeneous Source Models from World's Great Earthquakes

    Science.gov (United States)

    Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser

    2018-04-01

    The western segment of the Makran subduction zone is characterized by almost no major seismicity and no large earthquake for several centuries. A possible explanation for this behavior is that this segment is currently locked, accumulating energy to generate possible great future earthquakes. Taking into account this assumption, a hypothetical rupture area is considered in the western Makran to set up different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. the 2015 Chile Mw 8.3, 2011 Tohoku-Oki Mw 9.0 (using two different scenarios) and 2006 Kuril Islands Mw 8.3 events, are scaled onto the rupture area in the western Makran zone. Numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes. However, its effect diminishes from local tsunamis to regional and distant tsunamis. Among all considered scenarios for the western Makran, only a tsunamigenic earthquake similar to the 2011 Tohoku-Oki event can reproduce a significant far-field tsunami and is considered the worst-case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that along the entire Makran coast, the southeastern coast of Iran is the area most vulnerable to tsunami hazard.

  2. Geophysical Anomalies and Earthquake Prediction

    Science.gov (United States)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  3. Space Geodetic Observations and Modeling of 2016 Mw 5.9 Menyuan Earthquake: Implications on Seismogenic Tectonic Motion

    Directory of Open Access Journals (Sweden)

    Yongsheng Li

    2016-06-01

    Determining the relationship between crustal movement and faulting in thrust belts is essential for understanding the growth of geological structures and addressing proposed models of a potential earthquake hazard. A Mw 5.9 earthquake occurred on 21 January 2016 in Menyuan, NE Qinghai-Tibetan plateau. We combined satellite interferometry from Sentinel-1A Terrain Observation with Progressive Scans (TOPS) images, historical earthquake records, aftershock relocations and geological data to determine the seismogenic fault structural geometry and its relationship with the Lenglongling faults. The results indicate that the reverse slip of the 2016 earthquake is distributed on a southwest-dipping shovel-shaped fault segment. The main shock rupture initiated at the deeper part of the fault plane. The focal mechanism of the 2016 earthquake is quite different from that of a previous Ms 6.5 earthquake which occurred in 1986. Both earthquakes occurred at the two ends of a secondary fault. Joint analysis of the 1986 and 2016 earthquakes and the aftershock distribution of the 2016 event reveals a close connection with the tectonic deformation of the Lenglongling faults. Both earthquakes resulted from the left-lateral strike-slip of the Lenglongling fault zone and showed distinct focal mechanism characteristics. Under the shearing influence, a normal component forms at the releasing bend of the western end of the secondary fault owing to the left-stepping alignment of the fault zone, while a thrust component forms at the restraining bend of the eastern end owing to the right-stepping alignment of the fault zone. Seismic activity of this region suggests that the left-lateral strike-slip of the Lenglongling fault zone plays a significant role in the adjustment of tectonic deformation in the NE Tibetan plateau.

  4. Global Compilation of InSAR Earthquake Source Models: Comparisons with Seismic Catalogues and the Effects of 3D Earth Structure

    Science.gov (United States)

    Weston, J. M.; Ferreira, A. M.; Funning, G. J.

    2010-12-01

    While past progress in seismology has led to extensive earthquake catalogues such as the Global Centroid Moment Tensor (GCMT) catalogue, recent advances in space geodesy have enabled earthquake parameter estimation from measurements of the deformation of the Earth's surface, notably using InSAR data. Many earthquakes have now been studied using InSAR, but a full assessment of the quality and additional value of these source parameters compared to traditional seismological techniques is still lacking. In this study we present results of systematic comparisons between earthquake CMT parameters determined using InSAR and seismic data, on a global scale. We compiled a large database of source parameters obtained from InSAR studies in the literature and estimated the corresponding CMT parameters, forming an ICMT compilation. Here we present results from the analysis of 58 earthquakes that occurred between 1992 and 2007, drawn from about 80 published InSAR studies. Multiple studies of the same earthquake are included in the archive, as they are valuable for assessing uncertainties. Where faults are segmented, with changes in width along-strike, a weighted average based on the seismic moment of each fault segment has been used to determine overall earthquake parameters. For variable slip models, we have calculated source parameters taking the spatial distribution of slip into account. The parameters in our ICMT compilation are compared with those taken from the Global CMT (GCMT), ISC, EHB and NEIC catalogues. We find that earthquake fault strike, dip and rake values in the GCMT and ICMT archives are generally compatible with each other. Likewise, the differences in seismic moment in these two archives are relatively small. However, the locations of the centroid epicentres show substantial discrepancies, which are larger when comparing with GCMT locations (10-30 km differences) than with EHB, ISC and NEIC locations (5-15 km differences).
Since InSAR data have a high spatial resolution, and thus
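The moment-weighted averaging step described for segmented faults can be sketched as follows. The two-segment geometry below is hypothetical; note that strikes must be averaged as vectors so that azimuths straddling 360°/0° combine correctly:

```python
import math

# Hypothetical two-segment rupture: per-segment strike (deg) and seismic moment (N m)
segments = [{"strike": 350.0, "moment": 4.0e18},
            {"strike": 10.0,  "moment": 6.0e18}]

m_total = sum(s["moment"] for s in segments)

# Moment-weighted circular (vector) mean of strike: 350 and 10 average near
# north, not near 180 as a naive arithmetic mean would give.
x = sum(s["moment"] * math.cos(math.radians(s["strike"])) for s in segments)
y = sum(s["moment"] * math.sin(math.radians(s["strike"])) for s in segments)
mean_strike = math.degrees(math.atan2(y, x)) % 360.0

# Total moment magnitude from the summed moment
mw = (2.0 / 3.0) * (math.log10(m_total) - 9.1)
```

Weighting by moment lets the segment that released most of the energy dominate the overall mechanism, which is the behavior the compilation needs when collapsing a multi-segment model to single CMT-style parameters.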

  5. A 100-year average recurrence interval for the san andreas fault at wrightwood, california.

    Science.gov (United States)

    Fumal, T E; Schwartz, D P; Pezzopane, S K; Weldon, R J

    1993-01-08

    Evidence for five large earthquakes during the past five centuries along the San Andreas fault zone 70 kilometers northeast of Los Angeles, California, indicates that the average recurrence interval and the temporal variability are significantly smaller than previously thought. Rapid sedimentation during the past 5000 years in a 150-meter-wide structural depression has produced a greater than 21-meter-thick sequence of debris flow and stream deposits interbedded with more than 50 datable peat layers. Fault scarps, colluvial wedges, fissure infills, upward termination of ruptures, and tilted and folded deposits above listric faults provide evidence for large earthquakes that occurred in A.D. 1857, 1812, and about 1700, 1610, and 1470.
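The dates in the abstract directly give the ~100-year average and its variability; a quick check (treating the three approximate dates as exact):

```python
# Event dates from the abstract (A.D.); the first three are approximate
dates = [1470, 1610, 1700, 1812, 1857]

intervals = [b - a for a, b in zip(dates, dates[1:])]      # [140, 90, 112, 45]
mean_ri = sum(intervals) / len(intervals)                  # ~97 yr -> "100-year average"
var = sum((x - mean_ri) ** 2 for x in intervals) / len(intervals)
cov = var ** 0.5 / mean_ri   # coefficient of variation, a measure of temporal variability
```

The four intervals span 45-140 yr, giving a coefficient of variation near 0.36, one way to quantify the temporal variability the abstract describes.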

  6. A dynamic model for slab development associated with the 2015 Mw 7.9 Bonin Islands deep earthquake

    Science.gov (United States)

    Zhan, Z.; Yang, T.; Gurnis, M.

    2016-12-01

    The 680 km deep May 30, 2015 Mw 7.9 Bonin Islands earthquake is isolated from the nearest earthquakes by more than 150 km. The geodynamic context leading to this isolated deep event is unclear. Tomographic models and seismicity indicate that the morphology of the west-dipping Pacific slab changes rapidly along the strike of the Izu-Bonin-Mariana trench. To the north, the Izu-Bonin section of the Pacific slab lies horizontally above the 660 km discontinuity and extends more than 500 km westward. Several degrees south, the Mariana section dips vertically and penetrates directly into the lower mantle. The observed slab morphology is consistent with plate reconstructions suggesting that the northern section of the IBM trench retreated rapidly since the late Eocene while the southern section of the IBM trench was relatively stable during the same period. We suggest that the location of the isolated 2015 Bonin Islands deep earthquake can be explained by the buckling of the Pacific slab beneath the Bonin Islands. We use geodynamic models to investigate the slab morphology, temperature and stress regimes under different trench motion histories. Models confirm previous results that the slab often lies horizontally within the transition zone when the trench retreats, but buckles when the trench position becomes fixed with respect to the lower mantle. We show that a slab-buckling model is consistent with the observed deep earthquake P-axis directions (assumed to be the axis of principal compressional stress) regionally. The influences of various physical parameters on slab morphology, temperature and stress regime are investigated. In the models investigated, the horizontal width of the buckled slab is no more than 400 km.

  7. Smartphone-Based Earthquake and Tsunami Early Warning in Chile

    Science.gov (United States)

    Brooks, B. A.; Baez, J. C.; Ericksen, T.; Barrientos, S. E.; Minson, S. E.; Duncan, C.; Guillemot, C.; Smith, D.; Boese, M.; Cochran, E. S.; Murray, J. R.; Langbein, J. O.; Glennie, C. L.; Dueitt, J.; Parra, H.

    2016-12-01

    Many locations around the world face high seismic hazard but do not have the resources required to establish traditional earthquake and tsunami early warning systems (E/TEW) that utilize scientific-grade seismological sensors. MEMS accelerometers and GPS chips embedded in, or added inexpensively to, smartphones are sensitive enough to provide robust E/TEW if they are deployed in sufficient numbers. We report on a pilot project in Chile, one of the most productive earthquake regions worldwide. There, magnitude 7.5+ earthquakes occur roughly every 1.5 years, and larger tsunamigenic events pose significant local and trans-Pacific hazard. The smartphone-based network described here is being deployed in parallel to the build-out of a scientific-grade network for E/TEW. Our sensor package comprises a smartphone with internal MEMS and an external GPS chipset that provides satellite-based augmented positioning and phase-smoothing. Each station is independent of local infrastructure: stations are solar-powered and rely on cellular SIM cards for communications. An Android app performs initial onboard processing and transmits both accelerometer and GPS data to a server employing the FinDer-BEFORES algorithm to detect earthquakes, producing an acceleration-based line source model for smaller-magnitude earthquakes or a joint seismic-geodetic finite-fault distributed slip model for sufficiently large-magnitude earthquakes. Either source model provides accurate ground shaking forecasts, while distributed slip models for larger offshore earthquakes can be used to infer seafloor deformation for local tsunami warning. The network will comprise 50 stations by Sept. 2016 and 100 stations by Dec. 2016. Since Nov. 2015, batch processing has detected, located, and estimated the magnitude of Mw>5 earthquakes. Operational since June 2016, the system has successfully detected two earthquakes > M5 (M5.5, M5.1) that occurred within 100 km of our network while producing zero false alarms.

  8. Numerical modeling of intraplate seismicity with a deformable loading plate

    Science.gov (United States)

    So, B. D.; Capitanio, F. A.

    2017-12-01

    We use finite element modeling to investigate the stress loading-unloading cycles and earthquake occurrence in plate interiors resulting from the interactions of tectonic plates along their boundary. We model a visco-elasto-plastic plate embedding a single fault or multiple faults, while the tectonic stress is applied along the plate boundary by an external visco-elastic loading plate, reproducing the tectonic setting of two interacting lithospheres. Because the two plates deform viscously, the timescale of stress accumulation and release on the faults is self-consistently determined, from the boundary to the interiors, and seismic recurrence is an emerging feature. This approach overcomes the constraints on recurrence period imposed by stress (stress-drop) and velocity boundary conditions, which here are left unconstrained. We illustrate emerging macroscopic characteristics of this system, showing that the seismic recurrence period τ becomes shorter as Γ and Θ decrease, where Γ = ηI/ηL is the ratio of the viscosities of the internal fault-embedded plate to the external loading plate, and Θ = σY/σL is the ratio of the elastic limit of the fault to the far-field loading stress. When the system embeds multiple, randomly distributed faults, stress transfer results in deviations of the recurrence period; however, the time-averaged recurrence period of each fault shows the same dependence on Γ and Θ, illustrating a characteristic collective behavior. The control of these parameters prevails even when the initial pre-stress on the internal plate is randomly assigned in terms of spatial arrangement and orientation, mimicking local fluctuations. Our study shows the relevance of macroscopic rheological properties of tectonic plates to earthquake occurrence in plate interiors, as opposed to local factors, proposing a viable model for the seismic behavior of continent interiors in the context of large-scale, long-term deformation of interacting tectonic plates.
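A zero-dimensional caricature of the loading-relaxation competition (not the authors' finite element model) reproduces the qualitative result: both a shorter viscous relaxation time (standing in for a smaller Γ) and a smaller yield-to-loading stress ratio Θ shorten the recurrence period:

```python
def recurrence_period(t_relax, theta, dt=0.01):
    """Stress relaxes toward a unit loading stress on timescale t_relax;
    reaching the yield stress theta triggers an 'event' that resets stress
    to zero. Returns the (steady) recurrence period."""
    sigma, t, last, period = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(30 * t_relax / dt)):
        sigma += (1.0 - sigma) / t_relax * dt
        t += dt
        if sigma >= theta:
            period, last, sigma = t - last, t, 0.0
    return period
```

In this toy the analytic recurrence period is t_relax * ln(1/(1 - theta)), so the period shrinks with either parameter, mirroring the dependence on Γ and Θ reported above.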

  9. Statistical Approaches Regarding the Evolution of the Earthquakes in Romania

    Directory of Open Access Journals (Sweden)

    Gabriela OPAIT

    2015-05-01

    This paper presents statistical modeling of the magnitudes and depths of earthquakes in Romania by means of the least squares method, which yields the trend line of best fit for a model. An analysis of the earthquakes in time and space suggests that any earthquake has an unexpected character, with the fortuitous factor playing the principal role.

  10. Validation of the alpha-fetoprotein model for hepatocellular carcinoma recurrence after transplantation in an Asian population.

    Science.gov (United States)

    Rhu, Jinsoo; Kim, Jong Man; Choi, Gyu Seong; Kwon, Choon Hyuck David; Joh, Jae-Won

    2018-02-20

    This study was designed to validate the alpha-fetoprotein model for predicting recurrence after liver transplantation in Korean hepatocellular carcinoma patients. Patients who underwent liver transplantation for hepatocellular carcinoma at Samsung Medical Center between 2007 and 2015 were included. Recurrence, overall survival, and disease-specific survival of patients, divided by both the Milan criteria and the alpha-fetoprotein model, were compared using the Kaplan-Meier log-rank test. The predictability of the alpha-fetoprotein model compared to the Milan criteria was tested by means of net reclassification improvement analysis applied to patients with a follow-up of at least 2 years. A total of 400 patients were included in the study. Patients within the Milan criteria had 5-year recurrence and overall survival rates of 20.9% and 76.3%, respectively, compared to corresponding rates of 50.3% and 55.7% for patients beyond the Milan criteria. Alpha-fetoprotein model low-risk patients had 5-year recurrence and overall survival rates of 21.1% and 76.2%, respectively, compared to corresponding rates of 57.7% and 52.2% in high-risk patients (P<0.001 for all). Although overall net reclassification improvements were statistically nonsignificant for recurrence (NRI=1.7%, Z=0.30, p=0.7624) and overall survival (NRI=9.0%, Z=1.60, p=0.1098), they were significantly better for predicting no recurrence (NRI=6.6%, Z=3.16, p=0.0016) and no death (NRI=7.7%, Z=3.65, p=0.0003). CONCLUSIONS: The alpha-fetoprotein model seems to be a promising tool for assessing liver transplantation candidacy, but further investigation is needed.
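Net reclassification improvement, the comparison metric used above, tallies how often the new model moves events up and non-events down in risk category; a minimal two-category version on toy data (not the study's patients):

```python
def nri(old, new, outcome):
    """Two-category net reclassification improvement.
    old, new: risk-category assignments (0 = low, 1 = high) under each model;
    outcome: 1 if the patient had the event (e.g. recurrence), else 0."""
    ev  = [(o, n) for o, n, y in zip(old, new, outcome) if y == 1]
    non = [(o, n) for o, n, y in zip(old, new, outcome) if y == 0]
    up_e   = sum(n > o for o, n in ev)  / len(ev)
    down_e = sum(n < o for o, n in ev)  / len(ev)
    up_n   = sum(n > o for o, n in non) / len(non)
    down_n = sum(n < o for o, n in non) / len(non)
    # Events should move up in risk, non-events should move down
    return (up_e - down_e) + (down_n - up_n)

# Toy example: the new model upgrades one event and downgrades one non-event
example = nri(old=[0, 0, 1, 1], new=[1, 0, 1, 0], outcome=[1, 0, 1, 0])
```

Positive NRI means the new model reclassifies patients in the right direction on balance; the event-free NRI values quoted above are the analogous sums restricted to no-recurrence and no-death outcomes.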

  11. Centrality in earthquake multiplex networks

    Science.gov (United States)

    Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.

    2018-06-01

    Seismic time series have been mapped as complex networks, where a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
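The construction described here — cells as nodes, links following the event sequence, ranking by eigenvector centrality — can be sketched with power iteration on a toy network; the event sequence below is invented.

```python
import numpy as np

# Toy earthquake network: 3 spatial cells, with a directed edge added each
# time consecutive events jump from one cell to the next.
sequence = [0, 1, 0, 2, 1, 2, 2, 0, 1]   # invented sequence of cell indices
n = 3
A = np.zeros((n, n))
for src, dst in zip(sequence, sequence[1:]):
    A[src, dst] += 1

# Eigenvector centrality via power iteration on A^T (incoming links count).
v = np.ones(n) / n
for _ in range(200):
    v = A.T @ v
    v /= np.linalg.norm(v)
print(np.round(v, 3))   # higher value = more central cell
```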

  12. A predictive model for recurrence in patients with glottic cancer implemented in a mobile application for Android.

    Science.gov (United States)

    Jover-Esplá, Ana Gabriela; Palazón-Bru, Antonio; Folgado-de la Rosa, David Manuel; Severá-Ferrándiz, Guillermo; Sancho-Mestre, Manuela; de Juan-Herrero, Joaquín; Gil-Guillén, Vicente Francisco

    2018-05-01

    The existing predictive models of laryngeal cancer recurrence present limitations for clinical practice. Therefore, we constructed, internally validated and implemented in a mobile application (Android) a new model based on a points system taking into account the internationally recommended statistical methodology. This longitudinal prospective study included 189 patients with glottic cancer in 2004-2016 in a Spanish region. The main variable was time-to-recurrence, and its potential predictors were: age, gender, TNM classification, stage, smoking, alcohol consumption, and histology. A points system was developed to predict five-year risk of recurrence based on a Cox model. This was validated internally by bootstrapping, determining discrimination (C-statistics) and calibration (smooth curves). A total of 77 patients presented recurrence (40.7%) in a mean follow-up period of 3.4 ± 3.0 years. The factors in the model were: age, lymph node stage, alcohol consumption and stage. Discrimination and calibration were satisfactory. A points system was developed to obtain the probability of recurrence of laryngeal glottic cancer in five years, using five clinical variables. Our system should be validated externally in other geographical areas. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. The GED4GEM project: development of a Global Exposure Database for the Global Earthquake Model initiative

    Science.gov (United States)

    Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.

    2012-01-01

    In order to quantify earthquake risk of any selected region or a country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through the consortium of leading institutions and by engaging the domain-experts from multiple countries, the GED4GEM project has been working towards the development of a first comprehensive publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM’s OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status and way forward.

  14. Ionospheric precursors to large earthquakes: A case study of the 2011 Japanese Tohoku Earthquake

    Science.gov (United States)

    Carter, B. A.; Kellerman, A. C.; Kane, T. A.; Dyson, P. L.; Norman, R.; Zhang, K.

    2013-09-01

    Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. As the choices for the ionospheric baseline are generally different between previous works, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here; the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI and the TIE-GCM generally underestimating, approximately describing and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of other known sources of ionospheric variability in addition to solar and geomagnetic activity, such as the thermospheric coupling.
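The running 30-day median baseline, the first of the three baselines compared in this study, can be sketched as follows; the daily foF2 series is synthetic (a ~27-day modulation plus one injected spike), not ionosonde data.

```python
import numpy as np

# Synthetic daily foF2 (MHz): ~27-day modulation plus one injected anomaly.
days = np.arange(120)
fof2 = 8.0 + 0.3 * np.sin(2 * np.pi * days / 27.0)
fof2[100] += 3.0   # injected perturbation on day 100

def running_median(x, window=30):
    """Baseline for day i = median of the preceding `window` days."""
    out = np.full(x.shape, np.nan)
    for i in range(window, len(x)):
        out[i] = np.median(x[i - window:i])
    return out

baseline = running_median(fof2)
relative = (fof2 - baseline) / baseline
# Flag days exceeding the baseline by more than 25% (illustrative threshold).
flagged = np.where(np.nan_to_num(relative) > 0.25)[0]
print(flagged)
```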

  15. Induced seismicity provides insight into why earthquake ruptures stop

    KAUST Repository

    Galis, Martin

    2017-12-21

    Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures.
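For orientation, a simpler, related relation (McGarr's linear bound M0 ≤ G·ΔV, with G the shear modulus and ΔV the injected volume) can be evaluated directly; note this is an illustrative stand-in, not the ΔV^(3/2)-type scaling for self-arrested ruptures derived in the paper itself.

```python
import math

# McGarr-type linear bound on induced seismic moment, converted to Mw.
# Shear modulus and injected volume below are illustrative assumptions.
def max_mw_linear_bound(dv_m3, shear_modulus_pa=3.0e10):
    m0 = shear_modulus_pa * dv_m3                  # upper-bound moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)    # moment magnitude

print(round(max_mw_linear_bound(1.0e5), 2))   # for 100,000 m^3 injected
```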

  16. Finite-fault slip model of the 2016 Mw 7.5 Chiloé earthquake, southern Chile, estimated from Sentinel-1 data

    Science.gov (United States)

    Xu, Wenbin

    2017-05-01

    Subduction earthquakes have been widely studied in the Chilean subduction zone, but earthquakes occurring in its southern part have attracted less research interest primarily due to its lower rate of seismic activity. Here I use Sentinel-1 interferometric synthetic aperture radar (InSAR) data and range offset measurements to generate coseismic crustal deformation maps of the 2016 Mw 7.5 Chiloé earthquake in southern Chile. I find a concentrated crustal deformation with ground displacement of approximately 50 cm in the southern part of the Chiloé island. The best fitting fault model shows a pure thrust-fault motion on a shallow dipping plane orienting 4° NNE. The InSAR-determined moment is 2.4 × 1020 Nm with a shear modulus of 30 GPa, equivalent to Mw 7.56, which is slightly lower than the seismic moment. The model shows that the slip did not reach the trench, and it reruptured part of the fault that ruptured in the 1960 Mw 9.5 earthquake. The 2016 event has only released a small portion of the accumulated strain energy on the 1960 rupture zone, suggesting that the seismic hazard of future great earthquakes in southern Chile is high.
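The moment-to-magnitude conversion quoted above can be checked with the standard Hanks-Kanamori relation; small differences from the abstract's Mw 7.56 come from the constant convention used.

```python
import math

def moment_to_mw(m0_nm):
    """Hanks-Kanamori moment magnitude, with M0 in N*m."""
    return (2.0 / 3.0) * math.log10(m0_nm) - 6.033

mw = moment_to_mw(2.4e20)
print(round(mw, 2))   # ~7.55, consistent with the quoted Mw 7.56 to rounding
```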

  17. Simulated earthquake ground motions

    International Nuclear Information System (INIS)

    Vanmarcke, E.H.; Gasparini, D.A.

    1977-01-01

    The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures to generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work, are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms, together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response, have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to be varying in time or constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with, a set of specified smooth response spectra.
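The superposition model described above can be sketched as follows; the amplitudes and intensity envelope here are arbitrary placeholders, not spectrum-compatible amplitudes as SIMQKE would iterate to obtain.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 20.0, 2001)            # 20 s synthetic record
freqs = np.linspace(0.5, 10.0, 40)          # contributing frequencies (Hz)
amps = 1.0 / freqs                          # placeholder spectral shape
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)   # random phase angles

# Superpose sinusoids with random phases...
motion = sum(a * np.sin(2 * np.pi * f * t + p)
             for a, f, p in zip(amps, freqs, phases))
# ...then shape intensity with a build-up/decay envelope (illustrative).
envelope = np.minimum(t / 2.0, 1.0) * np.exp(-np.maximum(t - 10.0, 0.0) / 4.0)
accel = envelope * motion
print(accel.shape)
```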

  18. Diverse rupture modes for surface-deforming upper plate earthquakes in the southern Puget Lowland of Washington State

    Science.gov (United States)

    Nelson, Alan R.; Personius, Stephen F.; Sherrod, Brian L.; Kelsey, Harvey M.; Johnson, Samuel Y.; Bradley, Lee-Ann; Wells, Ray E.

    2014-01-01

    Earthquake prehistory of the southern Puget Lowland, in the north-south compressive regime of the migrating Cascadia forearc, reflects diverse earthquake rupture modes with variable recurrence. Stratigraphy and Bayesian analyses of previously reported and new 14C ages in trenches and cores along backthrust scarps in the Seattle fault zone restrict a large earthquake to 1040–910 cal yr B.P. (2σ), an interval that includes the time of the M 7–7.5 Restoration Point earthquake. A newly identified surface-rupturing earthquake along the Waterman Point backthrust dates to 940–380 cal yr B.P., bringing the number of earthquakes in the Seattle fault zone in the past 3500 yr to 4 or 5. Whether scarps record earthquakes of moderate (M 5.5–6.0) or large (M 6.5–7.0) magnitude, backthrusts of the Seattle fault zone may slip during moderate to large earthquakes every few hundred years for periods of 1000–2000 yr, and then not slip for periods of at least several thousands of years. Four new fault scarp trenches in the Tacoma fault zone show evidence of late Holocene folding and faulting about the time of a large earthquake or earthquakes inferred from widespread coseismic subsidence ca. 1000 cal yr B.P.; 12 ages from 8 sites in the Tacoma fault zone limit the earthquakes to 1050–980 cal yr B.P. Evidence is too sparse to determine whether a large earthquake was closely predated or postdated by other earthquakes in the Tacoma basin, but the scarp of the Tacoma fault was formed by multiple earthquakes. In the northeast-striking Saddle Mountain deformation zone, along the western limit of the Seattle and Tacoma fault zones, analysis of previous ages limits earthquakes to 1200–310 cal yr B.P. The prehistory clarifies earthquake clustering in the central Puget Lowland, but cannot resolve potential structural links among the three Holocene fault zones.

  19. RECENT STRONG EARTHQUAKES IN CENTRAL ASIA: REGULAR TECTONOPHYSICAL FEATURES OF LOCATIONS IN THE STRUCTURE AND GEODYNAMICS OF THE LITHOSPHERE. PART 1. MAIN GEODYNAMIC FACTORS PREDETERMINING LOCATIONS OF STRONG EARTHQUAKES IN THE STRUCTURE OF THE LITHOSPHERE

    Directory of Open Access Journals (Sweden)

    S. I. Sherman

    2015-01-01

    Full Text Available Studying locations of strong earthquakes (М≥8) in space and time in Central Asia has been among the top problems for many years and still remains challenging for international research teams. The authors propose a new approach that requires changing the paradigm of earthquake focus – solid rock relations, while this paradigm is a basis for practically all known physical models of earthquake foci. This paper describes the first step towards developing a new concept of the seismic process, including the generation of strong earthquakes, with reference to specific geodynamic features of the part of the study region wherein strong earthquakes were recorded in the past two centuries. Our analysis of the locations of М≥8 earthquakes shows that in the past two centuries such earthquakes took place in areas of the dynamic influence of large deep faults in the western regions of Central Asia. In continental Asia, there is a clear submeridional structural boundary (95–105°E) between the western and eastern regions, and this is a factor controlling the localization of strong seismic events in the western regions. Obviously, the Indostan plate’s pressure from the south is an energy source for such events. The strong earthquakes are located in a relatively small part of the territory of Central Asia (i.e. the western regions), which differs significantly from its neighbouring areas to the north, east and west, as evidenced by its specific geodynamic parameters. (1) The crust is twice as thick in the western regions as in the eastern regions. (2) In the western regions, the block structures resulting from crustal destruction, mainly represented by lens-shaped forms elongated in the submeridional direction, tend to dominate. (3) Active faults bordering large block structures are characterized by significant slip velocities that reach maximum values in the central part of the Tibetan plateau. Further northward, slip velocities decrease

  20. Evaluation of Earthquake-Induced Effects on Neighbouring Faults and Volcanoes: Application to the 2016 Pedernales Earthquake

    Science.gov (United States)

    Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.

    2017-12-01

    It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes are brought closer to failure or activation. Although these procedures are commonly used today, transferring these results to the authorities managing the post-disaster situation is not straightforward, so their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and to create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. We then use this model to evaluate the moment deficit on the subduction interface and the changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussion with local authorities.

  1. Stress triggering and the Canterbury earthquake sequence

    Science.gov (United States)

    Steacy, Sandy; Jiménez, Abigail; Holden, Caroline

    2014-01-01

    The Canterbury earthquake sequence, which includes the devastating Christchurch event of 2011 February, has to date led to losses of around 40 billion NZ dollars. The location and severity of the earthquakes was a surprise to most inhabitants as the seismic hazard model was dominated by an expected Mw > 8 earthquake on the Alpine fault and an Mw 7.5 earthquake on the Porters Pass fault, 150 and 80 km to the west of Christchurch. The sequence to date has included an Mw = 7.1 earthquake and 3 Mw ≥ 5.9 events which migrated from west to east. Here we investigate whether the later events are consistent with stress triggering and whether a simple stress map produced shortly after the first earthquake would have accurately indicated the regions where the subsequent activity occurred. We find that 100 per cent of M > 5.5 earthquakes occurred in positive stress areas computed using a slip model for the first event that was available within 10 d of its occurrence. We further find that the stress changes at the starting points of major slip patches of post-Darfield main events are consistent with triggering although this is not always true at the hypocentral locations. Our results suggest that Coulomb stress changes contributed to the evolution of the Canterbury sequence and we note additional areas of increased stress in the Christchurch region and on the Porters Pass fault.

  2. Modelling the phonotactic structure of natural language words with simple recurrent networks

    NARCIS (Netherlands)

    Stoianov, [No Value; Nerbonne, J; Bouma, H; Coppen, PA; vanHalteren, H; Teunissen, L

    1998-01-01

    Simple Recurrent Networks (SRN) are Neural Network (connectionist) models able to process natural language. Phonotactics concerns the order of symbols in words. We continued an earlier unsuccessful trial to model the phonotactics of Dutch words with SRNs. In order to overcome the previously reported

  3. Long-term earthquake forecasts based on the epidemic-type aftershock sequence (ETAS) model for short-term clustering

    Directory of Open Access Journals (Sweden)

    Jiancang Zhuang

    2012-07-01

    Full Text Available Based on the ETAS (epidemic-type aftershock sequence) model, which is used for describing the features of short-term clustering of earthquake occurrence, this paper presents some theories and techniques related to evaluating the probability distribution of the maximum magnitude in a given space-time window, where the Gutenberg-Richter law for the earthquake magnitude distribution cannot be directly applied. It is seen that the distribution of the maximum magnitude in a given space-time volume is determined in the long term by the background seismicity rate and the magnitude distribution of the largest events in each earthquake cluster. The techniques introduced were applied to the seismicity in the Japan region in the period from 1926 to 2009. It was found that the regions most likely to have big earthquakes are along the Tohoku (northeastern Japan) Arc and the Kuril Arc, both with much higher probabilities than the offshore Nankai and Tokai regions.
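In the simple limit without clustering — a Poisson background rate combined with Gutenberg-Richter magnitudes — the probability that the maximum magnitude in a window exceeds M can be written down directly; the ETAS-based treatment in the paper generalizes this. The rates below are illustrative, not Japan-catalogue values.

```python
import math

def prob_max_exceeds(m, rate_above_mc, years, b=1.0, mc=4.0):
    """P(max magnitude in window >= m) for a Poisson process whose events
    follow a Gutenberg-Richter distribution with b-value `b` above `mc`."""
    lam = rate_above_mc * years * 10 ** (-b * (m - mc))  # expected count >= m
    return 1.0 - math.exp(-lam)

# Illustrative: 10 events/yr above Mc=4, 30-year window, chance of an M>=7.
p = prob_max_exceeds(7.0, rate_above_mc=10.0, years=30.0)
print(round(p, 3))
```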

  4. Numerical models of pore pressure and stress changes along basement faults due to wastewater injection: Applications to the 2014 Milan, Kansas Earthquake

    Science.gov (United States)

    Hearn, Elizabeth H.; Koltermann, Christine; Rubinstein, Justin R.

    2018-01-01

    We have developed groundwater flow models to explore the possible relationship between wastewater injection and the 12 November 2014 Mw 4.8 Milan, Kansas earthquake. We calculate pore pressure increases in the uppermost crust using a suite of models in which hydraulic properties of the Arbuckle Formation and the Milan earthquake fault zone, the Milan earthquake hypocenter depth, and fault zone geometry are varied. Given pre‐earthquake injection volumes and reasonable hydrogeologic properties, significantly increasing pore pressure at the Milan hypocenter requires that most flow occur through a conductive channel (i.e., the lower Arbuckle and the fault zone) rather than a conductive 3‐D volume. For a range of reasonable lower Arbuckle and fault zone hydraulic parameters, the modeled pore pressure increase at the Milan hypocenter exceeds a minimum triggering threshold of 0.01 MPa at the time of the earthquake. Critical factors include injection into the base of the Arbuckle Formation and proximity of the injection point to a narrow fault damage zone or conductive fracture in the pre‐Cambrian basement with a hydraulic diffusivity of about 3–30 m2/s. The maximum pore pressure increase we obtain at the Milan hypocenter before the earthquake is 0.06 MPa. This suggests that the Milan earthquake occurred on a fault segment that was critically stressed prior to significant wastewater injection in the area. Given continued wastewater injection into the upper Arbuckle in the Milan region, assessment of the middle Arbuckle as a hydraulic barrier remains an important research priority.
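A back-of-envelope check on the quoted hydraulic diffusivity range of 3–30 m²/s: the characteristic reach of a pore-pressure front scales as r ≈ √(4Dt). The two-year injection period below is an illustrative assumption, not a figure from the study.

```python
import math

def diffusion_distance_km(diffusivity_m2s, years):
    """Characteristic pressure-front travel distance r = sqrt(4*D*t)."""
    t = years * 365.25 * 86400.0          # seconds
    return math.sqrt(4.0 * diffusivity_m2s * t) / 1000.0

for d in (3.0, 30.0):                     # the diffusivity range quoted above
    print(d, "m^2/s ->", round(diffusion_distance_km(d, years=2.0), 1), "km")
```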

  5. Seismic-resistant design of nuclear power stations in Japan, earthquake country. Lessons learned from Chuetsu-oki earthquake

    International Nuclear Information System (INIS)

    Irikura, Kojiro

    2008-01-01

    The new assessment (back-check) of earthquake-proof safety was being conducted at the Kashiwazaki-Kariwa Nuclear Power Plants of Tokyo Electric Co., in response to a request based on the guideline for reactor evaluation for the seismic-resistant design code revised in 2006, when the 2007 Chuetsu-oki Earthquake occurred and brought about an unexpectedly strong tremor in this area: although the magnitude of the earthquake was only 6.8, the intensity of ground motion exceeded the design assumption by more than a factor of 2.5. This paper introduces how and why the guideline for seismic-resistant design of nuclear facilities was revised in 2006, the outline of the Chuetsu-oki Earthquake, and preliminary findings and lessons learned from the earthquake. The paper specifically discusses (1) how geologically active faults, such as the one overlooked this time, may be identified in advance, (2) how adequate models of the seismic source can be built so that its characteristics can be extracted, and (3) how strong ground motion may be estimated for the vibration level of a possibly overlooked fault. (S. Ohno)

  6. Tutorial on earthquake rotational effects: historical examples

    Czech Academy of Sciences Publication Activity Database

    Kozák, Jan

    2009-01-01

    Roč. 99, 2B (2009), s. 998-1010 ISSN 0037-1106 Institutional research plan: CEZ:AV0Z30120515 Keywords : rotational seismic models * earthquake rotational effects * historical earthquakes Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 1.860, year: 2009

  7. Prediction of earthquakes: a data evaluation and exchange problem

    Energy Technology Data Exchange (ETDEWEB)

    Melchior, Paul

    1978-11-15

    Recent experiences in earthquake prediction are recalled. Precursor information seems to be available from geodetic measurements, hydrological and geochemical measurements, electric and magnetic measurements, purely seismic phenomena, and zoological phenomena; some new methods are proposed. A list of possible earthquake triggers is given. The dilatancy model is contrasted with a dry model; they seem to be equally successful. In conclusion, the space and time range of the precursors is discussed in relation to the magnitude of earthquakes. (RWR)

  8. Reliability of Soil Sublayers Under Earthquake Excitation

    DEFF Research Database (Denmark)

    Nielsen, Søren R. K.; Mørk, Kim Jørgensen

    A hysteretic model is formulated for a multi-layer subsoil subjected to horizontal earthquake shear waves (SH-waves). For each layer a modified Bouc-Wen model is used, relating the increments of the hysteretic shear stress to increments of the shear strain of the layer. Liquefaction is considered for each layer. The horizontal earthquake acceleration process at bedrock level is modelled as a non-stationary white noise, filtered through a time-invariant linear second-order filter.
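A minimal (unmodified) Bouc-Wen sketch, with illustrative parameters rather than those of the report: the hysteretic variable z is driven by the strain rate, stays bounded, and traces a loop under cyclic loading.

```python
import math

def bouc_wen_step(z, x_dot, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """One explicit-Euler step of the Bouc-Wen evolution equation
    dz/dt = A*x_dot - beta*|x_dot|*|z|^(n-1)*z - gamma*x_dot*|z|^n."""
    dz = (A * x_dot
          - beta * abs(x_dot) * abs(z) ** (n - 1) * z
          - gamma * x_dot * abs(z) ** n)
    return z + dz * dt

# Drive with a sinusoidal shear strain x = sin(2*pi*t); record z over 20 s.
dt, z, loop = 1e-3, 0.0, []
for i in range(20000):
    x_dot = 2 * math.pi * math.cos(2 * math.pi * i * dt)   # strain rate
    z = bouc_wen_step(z, x_dot, dt)
    loop.append(z)
print(round(max(loop), 3), round(min(loop), 3))
```

With these parameters z is analytically bounded by (A/(beta+gamma))^(1/n) = 1, which the simulation respects.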

  9. Simulating Earthquakes for Science and Society: Earthquake Visualizations Ideal for use in Science Communication and Education

    Science.gov (United States)

    de Groot, R.

    2008-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). For example, The Great Southern California ShakeOut was based on a potential magnitude 7.8 earthquake on the southern San Andreas fault. The visualization created for the ShakeOut was a key scientific and communication tool for the earthquake drill. This presentation will also feature SCEC Virtual Display of Objects visualization software developed by SCEC Undergraduate Studies in Earthquake Information Technology interns. According to Gordin and Pea (1995), theoretically visualization should make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  10. Fractals and Forecasting in Earthquakes and Finance

    Science.gov (United States)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

    It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1], summarized the emerging viewpoint that stock market crashes can be described with similar ideas as large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that earthquakes and financial crashes can both be described by a common Landau-Ginzburg-type free energy model, involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets, it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)
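The Gutenberg-Richter relation mentioned above, log10 N(≥M) = a − bM, is the fractal (power-law) statistic in the earthquake case; a b-value fit on synthetic counts shows the idea (the counts below are constructed, not catalogue data).

```python
import math

# Synthetic cumulative counts: one decade fewer events per magnitude unit,
# i.e. a b-value of 1 by construction.
mags = [4.0, 4.5, 5.0, 5.5, 6.0]
counts = [10000, 3162, 1000, 316, 100]

# Least-squares estimate of the b-value from the log-counts.
logs = [math.log10(c) for c in counts]
k = len(mags)
mx = sum(mags) / k
my = sum(logs) / k
b = -sum((m - mx) * (y - my) for m, y in zip(mags, logs)) / \
    sum((m - mx) ** 2 for m in mags)
print(round(b, 2))
```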

  11. A Comparison of Geodetic and Geologic Rates Prior to Large Strike-Slip Earthquakes: A Diversity of Earthquake-Cycle Behaviors?

    Science.gov (United States)

    Dolan, James F.; Meade, Brendan J.

    2017-12-01

    Comparison of preevent geodetic and geologic rates in three large-magnitude (Mw = 7.6-7.9) strike-slip earthquakes reveals a wide range of behaviors. Specifically, geodetic rates of 26-28 mm/yr for the North Anatolian fault along the 1999 MW = 7.6 Izmit rupture are ˜40% faster than Holocene geologic rates. In contrast, geodetic rates of ˜6-8 mm/yr along the Denali fault prior to the 2002 MW = 7.9 Denali earthquake are only approximately half as fast as the latest Pleistocene-Holocene geologic rate of ˜12 mm/yr. In the third example where a sufficiently long pre-earthquake geodetic time series exists, the geodetic and geologic rates along the 2001 MW = 7.8 Kokoxili rupture on the Kunlun fault are approximately equal at ˜11 mm/yr. These results are not readily explicable with extant earthquake-cycle modeling, suggesting that they may instead be due to some combination of regional kinematic fault interactions, temporal variations in the strength of lithospheric-scale shear zones, and/or variations in local relative plate motion rate. Whatever the exact causes of these variable behaviors, these observations indicate that either the ratio of geodetic to geologic rates before an earthquake may not be diagnostic of the time to the next earthquake, as predicted by many rheologically based geodynamic models of earthquake-cycle behavior, or different behaviors characterize different fault systems in a manner that is not yet understood or predictable.

  12. Evaluation of Seismic Rupture Models for the 2011 Tohoku-Oki Earthquake Using Tsunami Simulation

    Directory of Open Access Journals (Sweden)

    Ming-Da Chiou

    2013-01-01

    Full Text Available Developing a realistic, three-dimensional rupture model of a large offshore earthquake is difficult to accomplish directly through band-limited ground-motion observations. A potential indirect method is using a tsunami simulation to verify the rupture model in reverse, because the initial conditions of the associated tsunamis are caused by a coseismic seafloor displacement correlated with the rupture pattern along the main faulting. In this study, five well-developed rupture models for the 2011 Tohoku-Oki earthquake were adopted to evaluate differences in simulated tsunamis and various rupture asperities. The leading wave of the simulated tsunamis triggered by the seafloor displacement in the Yamazaki et al. (2011) model resulted in the smallest root-mean-squared difference (~0.082 m on average) from the records of the eight DART (Deep-ocean Assessment and Reporting of Tsunamis) stations. This indicates that the main seismic rupture during the 2011 Tohoku earthquake should have occurred as a large shallow slip in a narrow range adjacent to the Japan trench. This study also quantified the influences of ocean stratification and tides, which are normally overlooked in tsunami simulations. The discrepancy between the simulations with and without stratification was less than 5% of the first peak wave height at the eight DART stations. The simulations run with and without the presence of tides resulted in a ~1% discrepancy in the height of the leading wave. Because simulations accounting for tides and stratification are time-consuming and their influences are negligible, particularly for the first tsunami wave, the two factors can be ignored in a tsunami prediction for practical purposes.

  13. Seismo-tectonic model regarding the genesis and occurrence of Vrancea (Romania) earthquakes

    International Nuclear Information System (INIS)

    Enescu, D.; Enescu, B.D.

    1998-01-01

The first part of this paper contains a very short description of some previous attempts at seismo-tectonic modeling of the Vrancea zone. The seismo-tectonic model developed by the authors of this work is presented in the second part of the paper. This model is based on the spatial distribution of hypocenters and on focal mechanism characteristics. The lithosphere structure and tectonics of the directly implicated zones represent very important elements of the seismo-tectonic model. Some two-dimensional and three-dimensional sketches of the model, which satisfy all the above-mentioned characteristics and give realistic explanations regarding the genesis and occurrence of Vrancea earthquakes, are presented. (authors)

  14. A dynamic model of liquid containers (tanks) with legs and probability analysis of response to simulated earthquake

    International Nuclear Information System (INIS)

    Fujita, Takafumi; Shimosaka, Haruo

    1980-01-01

This paper describes the results of an analysis of the response of liquid containers (tanks) with legs to earthquakes. Sine-wave oscillation was applied experimentally to model tanks with legs. A model with one degree of freedom proves adequate for the analysis; to investigate why, the response multiplication factor of tank displacement was analyzed. The model tanks were rectangular and cylindrical in shape, and the analyses were made using potential theory. The experimental studies show that the attenuation of the oscillation was non-linear; a model analysis of this non-linear attenuation was also performed, with good agreement between the experimental and analytical results. A probability analysis of the response to earthquakes with simulated shock waves was performed using the above-mentioned model, and good agreement between experiment and analysis was obtained. (Kato, T.)

  15. Valuation of Indonesian catastrophic earthquake bonds with generalized extreme value (GEV) distribution and Cox-Ingersoll-Ross (CIR) interest rate model

    Science.gov (United States)

    Gunardi, Setiawan, Ezra Putranda

    2015-12-01

Indonesia is a country with a high risk of earthquakes because of its position on the boundaries of the Earth's tectonic plates. An earthquake can cause a very large amount of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk from the government or the (re)insurance companies, so that enough money can be collected to implement rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, 'act-of-God bond', or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is combined with the money (premium) from the sponsor company and invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is discounted or stopped, and that cash flow is instead paid to the sponsor company to compensate its losses from the catastrophe. When we consider earthquakes only, the amount of the discounted cash flow can be determined based on the earthquake's magnitude. A case study with Indonesian earthquake magnitude data shows that the probability distribution of the maximum magnitude can be modeled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, 'coupon only at risk' bonds, and 'principal and coupon at risk' bonds. The relationship between the price of the catastrophe bond and the CIR model's parameters, the GEV parameters, the coupon percentage, and the discounted-cash-flow rule is then explored via Monte Carlo simulation.
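The pricing approach described above can be sketched with a small Monte Carlo simulation. The CIR, GEV, and payout parameters below are illustrative assumptions, not the values from the paper, and only the simplest case (a zero-coupon bond whose principal is partly at risk) is shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Assumed (illustrative) parameters, not taken from the paper ---
kappa, theta, sigma, r0 = 0.5, 0.05, 0.08, 0.04   # CIR short-rate model
mu, sc, xi = 6.0, 0.4, 0.1                        # GEV for annual max magnitude
M_trigger, face, recovery = 7.5, 100.0, 0.5       # payout rule if triggered
T, steps, n_paths = 3, 36, 5000                   # 3-year zero-coupon bond

dt = T / steps

def gev_sample(n):
    """Annual-maximum magnitudes drawn from a GEV distribution
    via the inverse-CDF method (shape parameter xi != 0)."""
    u = rng.uniform(size=n)
    return mu + sc * ((-np.log(u)) ** (-xi) - 1.0) / xi

price = 0.0
for _ in range(n_paths):
    # CIR short rate, Euler scheme with reflection to keep r >= 0
    r, integral = r0, 0.0
    for _ in range(steps):
        r = abs(r + kappa * (theta - r) * dt
                + sigma * np.sqrt(max(r, 0.0)) * rng.normal() * np.sqrt(dt))
        integral += r * dt
    discount = np.exp(-integral)
    # Trigger if any annual maximum magnitude exceeds the threshold
    triggered = np.any(gev_sample(T) >= M_trigger)
    payoff = face * (recovery if triggered else 1.0)
    price += discount * payoff
price /= n_paths
print(round(price, 2))
```

The 'coupon only at risk' and 'principal and coupon at risk' variants from the paper would add coupon cash flows to the same discounting loop.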

  16. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews

    1994-06-01

Full Text Available Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: (1) regeneration of stress irregularities in an earthquake; (2) the concentration of stress drop in an earthquake into asperities; (3) starting and stopping of earthquake slip at fault junctions; and (4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction, and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the

  17. The Road to Total Earthquake Safety

    Science.gov (United States)

    Frohlich, Cliff

Cinna Lomnitz is possibly the most distinguished earthquake seismologist in all of Central and South America. Among many other credentials, Lomnitz has personally experienced the shaking and devastation that accompanied no fewer than five major earthquakes: Chile, 1939; Kern County, California, 1952; Chile, 1960; Caracas, Venezuela, 1967; and Mexico City, 1985. Thus he clearly has much to teach someone like myself, who has never even actually felt a real earthquake. What is this slim book? The Road to Total Earthquake Safety summarizes Lomnitz's May 1999 presentation as the Seventh Mallet-Milne Lecture, sponsored by the Society for Earthquake and Civil Engineering Dynamics. His arguments are motivated by the damage that occurred in three earthquakes: Mexico City, 1985; Loma Prieta, California, 1989; and Kobe, Japan, 1995. All three quakes occurred in regions where earthquakes are common. Yet in all three some of the worst damage occurred in structures located a significant distance from the epicenter and engineered specifically to resist earthquakes. Some of the damage also indicated that the structures failed because they had experienced considerable rotational or twisting motion. Clearly, Lomnitz argues, there must be fundamental flaws in the usually accepted models explaining how earthquakes generate strong motions, and how we should design resistant structures.

  18. Earthquake Culture: A Significant Element in Earthquake Disaster Risk Assessment and Earthquake Disaster Risk Management

    OpenAIRE

    Ibrion, Mihaela

    2018-01-01

This book chapter brings to attention the dramatic impact of large earthquake disasters on local communities and society and highlights the necessity of building and enhancing an earthquake culture. Iran was considered as a research case study, and fifteen large earthquake disasters in Iran were investigated and analyzed over a period of more than a century. It was found that the earthquake culture in Iran was and still is conditioned by many factors or parameters which are not integrated and...

  19. Modeling earthquake magnitudes from injection-induced seismicity on rough faults

    Science.gov (United States)

    Maurer, J.; Dunham, E. M.; Segall, P.

    2017-12-01

It is an open question whether perturbations to the in-situ stress field due to fluid injection affect the magnitudes of induced earthquakes. It has been suggested that characteristics such as the total injected fluid volume control the size of induced events (e.g., Baisch et al., 2010; Shapiro et al., 2011). On the other hand, Van der Elst et al. (2016) argue that the size distribution of induced earthquakes follows Gutenberg-Richter, the same as tectonic events. Numerical simulations support the idea that ruptures nucleating inside regions with a high shear-to-effective normal stress ratio may not propagate into regions with lower stress (Dieterich et al., 2015; Schmitt et al., 2015); however, these calculations are done on geometrically smooth faults. Fang & Dunham (2013) show that rupture length on geometrically rough faults is variable but strongly dependent on the background shear/effective normal stress. In this study, we use a 2-D elasto-dynamic rupture simulator that includes rough fault geometry and off-fault plasticity (Dunham et al., 2011) to simulate earthquake ruptures under realistic conditions. We consider aggregate results for faults with and without stress perturbations due to fluid injection. We model a uniform far-field background stress (with local perturbations around the fault due to geometry), superimpose a poroelastic stress field in the medium due to injection, and compute the effective stress on the fault as input to the rupture simulator. Preliminary results indicate that even minor stress perturbations on the fault due to injection can have a significant impact on the resulting distribution of rupture lengths, but individual results are highly dependent on the details of the local stress perturbations on the fault due to geometric roughness.
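Injection enters these simulations through the effective normal stress: raising pore pressure lowers the effective normal stress and therefore raises the shear-to-effective-normal stress ratio that controls how far a rupture can propagate. A minimal sketch of that bookkeeping, with assumed illustrative numbers:

```python
# Effective-stress bookkeeping on a fault patch: a sketch of why
# injection-driven pore-pressure changes matter, with made-up numbers.
mu_static = 0.6          # assumed static friction coefficient
sigma_n = 50.0e6         # total normal stress on the fault, Pa
tau = 15.5e6             # resolved shear stress, Pa

def stress_ratio(pore_pressure):
    """Shear stress over effective normal stress (Terzaghi effective stress)."""
    return tau / (sigma_n - pore_pressure)

p_ambient = 20.0e6       # hydrostatic pore pressure, Pa
p_injection = 5.0e6      # assumed perturbation from injection, Pa

before = stress_ratio(p_ambient)                # 15.5/30 ≈ 0.517, below friction
after = stress_ratio(p_ambient + p_injection)   # 15.5/25 = 0.62, above friction
print(round(before, 3), round(after, 3), after >= mu_static)
```

Even this 5 MPa perturbation moves the patch from below to above the assumed frictional threshold, which is the sense in which "minor stress perturbations" can reshape the rupture-length distribution.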

  20. Earthquake simulation, actual earthquake monitoring and analytical methods for soil-structure interaction investigation

    Energy Technology Data Exchange (ETDEWEB)

    Tang, H T [Seismic Center, Electric Power Research Institute, Palo Alto, CA (United States)

    1988-07-01

Approaches for conducting in-situ soil-structure interaction experiments are discussed. High explosives detonated under the ground can generate strong ground motion to induce soil-structure interaction (SSI). The explosive-induced data are useful in studying the dynamic characteristics of the soil-structure system associated with the inertial aspect of the SSI problem. The plane waves generated by the explosives cannot adequately address the kinematic interaction associated with actual earthquakes because of the difference in wave fields and their effects. Earthquake monitoring is ideal for obtaining SSI data that can address all aspects of the SSI problem. The only limitation is the level of excitation that can be obtained. Neither the simulated earthquake experiments nor the earthquake monitoring experiments can have exact similitude if reduced-scale test structures are used. If gravity effects are small, reasonable correlations between the scaled model and the prototype can be obtained provided that the input motion can be scaled appropriately. The key product of the in-situ experiments is the data base that can be used to qualify analytical methods for prototypical applications. (author)

  1. Measuring the effectiveness of earthquake forecasting in insurance strategies

    Science.gov (United States)

    Mignan, A.; Muir-Wood, R.

    2009-04-01

    Given the difficulty of judging whether the skill of a particular methodology of earthquake forecasts is offset by the inevitable false alarms and missed predictions, it is important to find a means to weigh the successes and failures according to a common currency. Rather than judge subjectively the relative costs and benefits of predictions, we develop a simple method to determine if the use of earthquake forecasts can increase the profitability of active financial risk management strategies employed in standard insurance procedures. Three types of risk management transactions are employed: (1) insurance underwriting, (2) reinsurance purchasing and (3) investment in CAT bonds. For each case premiums are collected based on modelled technical risk costs and losses are modelled for the portfolio in force at the time of the earthquake. A set of predetermined actions follow from the announcement of any change in earthquake hazard, so that, for each earthquake forecaster, the financial performance of an active risk management strategy can be compared with the equivalent passive strategy in which no notice is taken of earthquake forecasts. Overall performance can be tracked through time to determine which strategy gives the best long term financial performance. This will be determined by whether the skill in forecasting the location and timing of a significant earthquake (where loss is avoided) is outweighed by false predictions (when no premium is collected). This methodology is to be tested in California, where catastrophe modeling is reasonably mature and where a number of researchers issue earthquake forecasts.

  2. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
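The two recurrence treatments described above, a time-dependent renewal model for fault sources and a time-independent Poisson model for area sources, can be sketched as conditional probabilities of at least one event in a forecast window. The parameter values and the choice of a lognormal renewal distribution below are illustrative assumptions, not taken from the study:

```python
import math

def poisson_prob(rate_per_yr, window_yr):
    """Time-independent probability of at least one event in a window,
    assuming a Poisson process (as used for the area sources)."""
    return 1.0 - math.exp(-rate_per_yr * window_yr)

def lognormal_cdf(t, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def renewal_prob(elapsed_yr, window_yr, mean_ri_yr, cov=0.5):
    """Time-dependent conditional probability for a fault source under an
    assumed lognormal renewal model: P(T <= t_e + w | T > t_e), with the
    lognormal parameters derived from the mean recurrence interval and an
    assumed coefficient of variation."""
    sigma = math.sqrt(math.log(1.0 + cov**2))
    mu = math.log(mean_ri_yr) - 0.5 * sigma**2
    f_e = lognormal_cdf(elapsed_yr, mu, sigma)
    f_w = lognormal_cdf(elapsed_yr + window_yr, mu, sigma)
    return (f_w - f_e) / (1.0 - f_e)

# Hypothetical fault source: mean recurrence 300 yr, 250 yr since last event
print(round(poisson_prob(1.0 / 300.0, 50.0), 3))
print(round(renewal_prob(250.0, 50.0, 300.0), 3))
```

The contrast is the point: the Poisson probability depends only on the long-term rate, while the renewal probability grows as the elapsed time approaches the mean recurrence interval.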

  3. Deterministic modeling for microzonation of Bucharest: Case study for August 30, 1986, and May 30-31, 1990. Vrancea earthquakes

    International Nuclear Information System (INIS)

    Cioflan, C.O.; Apostol, B.F.; Moldoveanu, C.L.; Marmureanu, G.; Panza, G.F.

    2002-03-01

The mapping of the seismic ground motion in Bucharest due to the strong Vrancea earthquakes is carried out using a complex hybrid waveform modeling method which combines the modal summation technique, valid for laterally homogeneous anelastic media, with the finite-differences technique, and optimizes the advantages of both methods. For recent earthquakes, it is possible to validate the modeling by comparing the synthetic seismograms with the records. As controlling records we consider the accelerograms from the Magurele station, low-pass filtered with a cut-off frequency of 1.0 Hz, for the last three major strong (Mw > 6) Vrancea earthquakes. Using the hybrid method with a double-couple seismic source approximation, scaled for the source dimensions, and relatively simple regional (bedrock) and local structure models, we succeeded in reproducing the recorded ground motion in Bucharest at a level satisfactory for seismic engineering. Extending the modeling to the whole territory of the Bucharest area, we construct a new seismic microzonation map in which five different zones are identified by their characteristic response spectra. (author)

  4. Slip reactivation model for the 2011 Mw9 Tohoku earthquake: Dynamic rupture, sea floor displacements and tsunami simulations.

    Science.gov (United States)

    Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.

    2014-12-01

The 2011 Mw9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil the complex rupture processes of a mega-thrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-Net and Kik-Net) revealed complex ground-motion patterns attributed to source effects, allowing detailed information about the rupture process to be captured. The seismic stations surrounding the Miyagi region (e.g., MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong-motion data performed by Lee et al. (2011). In this model, two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots for fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip in large earthquakes may occur due to frictional melting and thermal fluid-pressurization effects. Kanamori & Heaton (2002) argued that during faulting in large earthquakes the temperature rises high enough to create melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip-reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on slip-weakening friction with two sudden sequential stress drops. Our model starts like an M7-8 earthquake that barely breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The resulting sea floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea floor displacement reaches 8-10 meters of

  5. Statistical physics approach to earthquake occurrence and forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, Lucilla de [Department of Industrial and Information Engineering, Second University of Naples, Aversa (CE) (Italy); Godano, Cataldo [Department of Mathematics and Physics, Second University of Naples, Caserta (Italy); Grasso, Jean Robert [ISTerre, IRD-CNRS-OSUG, University of Grenoble, Saint Martin d’Héres (France); Lippiello, Eugenio, E-mail: eugenio.lippiello@unina2.it [Department of Mathematics and Physics, Second University of Naples, Caserta (Italy)

    2016-04-25

There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space–time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description of the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for
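The most basic of the phenomenological laws mentioned above, the Gutenberg-Richter magnitude-frequency law, is easy to illustrate: magnitudes above a completeness threshold are exponentially distributed, and the b-value can be recovered with the Aki (1965) maximum-likelihood estimator. A small self-contained sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gr_magnitudes(n, b=1.0, m_min=2.0):
    """Magnitudes drawn from the Gutenberg-Richter law, N(>=m) ∝ 10^(-b m),
    i.e. (m - m_min) is exponential with rate b * ln(10)."""
    return m_min + rng.exponential(scale=1.0 / (b * np.log(10.0)), size=n)

def estimate_b(mags, m_min=2.0):
    """Aki (1965) maximum-likelihood b-value estimator."""
    return 1.0 / (np.log(10.0) * (np.mean(mags) - m_min))

mags = sample_gr_magnitudes(100_000)
print(round(estimate_b(mags), 2))
```

A full branching (ETAS-style) forecast model of the kind reviewed here would layer Omori-law aftershock triggering on top of this magnitude distribution.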

  6. An Agent Model of Temporal Dynamics in Relapse and Recurrence in Depression

    NARCIS (Netherlands)

    Aziz, A.A.; Klein, M.C.A.; Treur, J.

    2009-01-01

    This paper presents a dynamic agent model of recurrences of a depression for an individual. Based on several personal characteristics and a representation of events (i.e. life events or daily hassles) the agent model can simulate whether a human agent that recovered from a depression will fall into

  7. Experimental study of structural response to earthquakes

    International Nuclear Information System (INIS)

    Clough, R.W.; Bertero, V.V.; Bouwkamp, J.G.; Popov, E.P.

    1975-01-01

The objectives, methods, and some of the principal results obtained from experimental studies of the behavior of structures subjected to earthquakes are described. Although such investigations are being conducted in many laboratories throughout the world, the information presented deals specifically with projects being carried out at the Earthquake Engineering Research Center (EERC) of the University of California, Berkeley. A primary purpose of these investigations is to obtain detailed information on the inelastic response mechanisms of typical structural systems so that the experimentally observed performance can be compared with computer-generated analytical predictions. Only by such comparisons can the mathematical models used in dynamic nonlinear analyses be verified and improved. Two experimental procedures for investigating earthquake structural response are discussed: the earthquake simulator facility, which subjects the base of the test structure to acceleration histories similar to those recorded in actual earthquakes, and systems of hydraulic rams, which impose specified displacement histories on the test components, equivalent to motions developed in structures subjected to actual earthquakes. The general concept and performance of the 20-ft-square EERC earthquake simulator are described, and the testing of a two-story concrete frame building is outlined. Correlation of the experimental results with analytical predictions demonstrates that satisfactory agreement can be obtained only if the mathematical model incorporates a stiffness-deterioration mechanism which simulates the cracking and other damage suffered by the structure.

  8. Earthquakes and Earthquake Engineering. LC Science Tracer Bullet.

    Science.gov (United States)

    Buydos, John F., Comp.

An earthquake is a shaking of the ground resulting from a disturbance in the earth's interior. Seismology is the study of (1) earthquakes; (2) the origin, propagation, and energy of seismic phenomena; (3) the prediction of these phenomena; and (4) the structure of the earth. Earthquake engineering or engineering seismology includes the…

  9. Enhanced Earthquake-Resistance on the High Level Radioactive Waste Canister

    International Nuclear Information System (INIS)

    Choi, Youngchul; Yoon, Chanhoon; Lee, Jeaowan; Kim, Jinsup; Choi, Heuijoo

    2014-01-01

In this paper, an earthquake-resistant buffer was developed as a method of protecting the canister safely against earthquakes. The main parameters affecting earthquake-resistant performance were analyzed and an earthquake-proof buffer material was designed. A shear analysis model was developed and the performance of the earthquake-resistant buffer material was evaluated. The dynamic behavior of the radioactive waste disposal canister during an earthquake was analyzed; in that case, the disposal canister sustains serious damage. In this paper, the earthquake-resistant buffer material was developed in order to prevent this damage. By placing a buffer of low density between the canister and the main buffer, the earthquake-resistant performance was improved by about 80%

  10. Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) Model - An Unified Concept for Earthquake Precursors Validation

    Science.gov (United States)

    Pulinets, S.; Ouzounov, D.

    2010-01-01

The paper presents a complex multidisciplinary approach to the problem of clarifying the nature of short-term earthquake precursors observed in the atmosphere, in atmospheric electricity, and in the ionosphere and magnetosphere. Our approach is based on the most fundamental principles of tectonics, from which we understand that an earthquake is the ultimate result of the relative movement of tectonic plates and blocks of different sizes. Different kinds of gases leaking from the crust (methane, helium, hydrogen, and carbon dioxide) can serve as carrier gases for radon, including along underwater seismically active faults. Radon's action on atmospheric gases is similar to the effect of cosmic rays in the upper layers of the atmosphere: it ionizes the air, and the ions form nuclei of water condensation. Condensation of water vapor is accompanied by the release of latent heat, the main cause of the observed atmospheric thermal anomalies. Formation of large ion clusters changes the conductivity of the boundary layer of the atmosphere and the parameters of the global electric circuit over active tectonic faults. Variations of atmospheric electricity are the main source of ionospheric anomalies over seismically active areas. The Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model can explain most of these events as a synergy between different ground-surface, atmosphere and ionosphere processes and anomalous variations, which are usually named short-term earthquake precursors. A newly developed approach, the Interdisciplinary Space-Terrestrial Framework (ISTF), can also provide verification of these precursory processes in seismically active regions. The main outcome of this paper is a unified concept for the systematic validation of different types of earthquake precursors, united by their physical basis in one common theory.

  11. Evaluating earthquake hazards in the Los Angeles region; an earth-science perspective

    Science.gov (United States)

    Ziony, Joseph I.

    1985-01-01

geologic and seismologic record indicates that parts of the San Andreas and San Jacinto faults have generated major earthquakes having recurrence intervals of several tens to a few hundred years. In contrast, the geologic evidence at points along other active faults suggests recurrence intervals measured in many hundreds to several thousands of years. The distribution and character of late Quaternary surface faulting permit estimation of the likely location, style, and amount of future surface displacements. An extensive body of geologic and geotechnical information is used to evaluate areal differences in future levels of shaking. Bedrock and alluvial deposits are differentiated according to the physical properties that control shaking response; maps of these properties are prepared by analyzing existing geologic and soils maps, the geomorphology of surficial units, and geotechnical data obtained from boreholes. The shear-wave velocities of near-surface geologic units must be estimated for some methods of evaluating shaking potential. Regional-scale maps of highly generalized shear-wave velocity groups, based on the age and texture of exposed geologic units and on a simple two-dimensional model of Quaternary sediment distribution, provide a first approximation of the areal variability in shaking response. More accurate depictions of near-surface shear-wave velocity useful for predicting ground-motion parameters take into account the thickness of the Quaternary deposits, vertical variations in sediment type, and the correlation of shear-wave velocity with the standard penetration resistance of different sediments. A map of the upper Santa Ana River basin showing shear-wave velocities to depths equal to one-quarter wavelength of a 1-s shear wave demonstrates the three-dimensional mapping procedure. Four methods for predicting the distribution and strength of shaking from future earthquakes are presented. These techniques use different measures of strong-motion

  12. Slip-accumulation patterns and earthquake recurrences along the Talas-Fergana Fault - Contributions of high-resolution geomorphic offsets.

    Science.gov (United States)

    Rizza, M.; Dubois, C.; Fleury, J.; Abdrakhmatov, K.; Pousse, L.; Baikulov, S.; Vezinet, A.

    2017-12-01

In the western Tien-Shan Range, the largest intracontinental strike-slip fault is the Karatau-Talas-Fergana fault system. This dextral fault system is subdivided into two main segments: the Karatau fault to the north and the Talas-Fergana fault (TFF) to the south. The kinematics and rates of deformation of the TFF during the Quaternary period are still debated and poorly constrained. Only a few paleoseismological investigations are available along the TFF (Burtman et al., 1996; Korjenkov et al., 2010), and no systematic quantification of the dextral displacements along the TFF has been undertaken. As such, the appraisal of the TFF's behavior demands new tectonic information. In this study, we present the first detailed analysis of the morphology and segmentation of the TFF and an inventory of offset morphological markers along it. To discuss temporal and spatial recurrence patterns of slip accumulated over multiple seismic events, our study focused on a 60 km-long section of the TFF (the Chatkal segment). Using tri-stereo Pleiades satellite images, high-resolution DEMs (1 × 1 m pixel size) have been generated in order to (i) analyze the fine-scale fault geometry and (ii) thoroughly measure geomorphic offsets. Photogrammetry data obtained from our drone survey at high-interest sites provide higher-resolution DEMs of 0.5 × 0.5 m pixel size. Our remote sensing mapping allows an unprecedented subdivision of the study area into five distinct segments. About 215 geomorphic markers have been measured, with offsets ranging from 4.5 m to 180 m. More than 80% of these offsets are smaller than 60 m, suggesting landscape reset during the glacial maximum. Calculations of the Cumulative Offset Probability Density (COPD) for the whole 60 km-long section as well as for each segment support distinct behavior from one segment to another and thus variability in slip-accumulation patterns. Our data argue for uniform-slip model behavior along this section of the TFF. Moreover, we excavated a
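The Cumulative Offset Probability Density (COPD) stacking used above can be sketched as a sum of Gaussians, one per measured offset. The offset values below are made up for illustration, not the Talas-Fergana measurements:

```python
import numpy as np

def copd(offsets, uncertainties, grid):
    """Cumulative Offset Probability Density: each offset measurement is
    represented as a Gaussian (mean = offset, std = its uncertainty) and
    the densities are summed; peaks in the stack flag offsets shared by
    many markers, i.e. candidate per-event or cumulative slip values."""
    stack = np.zeros_like(grid)
    for m, s in zip(offsets, uncertainties):
        stack += np.exp(-0.5 * ((grid - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return stack

# Hypothetical offsets (m) clustered near multiples of ~4.5 m of slip
offsets = np.array([4.4, 4.6, 4.5, 9.1, 8.9, 13.6, 13.4])
sigmas = np.array([0.3, 0.4, 0.3, 0.5, 0.5, 0.6, 0.6])
grid = np.linspace(0.0, 20.0, 2001)

density = copd(offsets, sigmas, grid)
peak = grid[np.argmax(density)]
print(round(peak, 2))
```

Regularly spaced peaks in the stack, as in this synthetic example, are what would support a uniform-slip (characteristic slip) interpretation.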

  13. New Geological Evidence of Past Earthquakes and Tsunami Along the Nankai Trough, Japan

    Science.gov (United States)

    De Batist, M. A. O.; Heyvaert, V.; Hubert-Ferrari, A.; Fujiwara, O.; Shishikura, M.; Yokoyama, Y.; Brückner, H.; Garrett, E.; Boes, E.; Lamair, L.; Nakamura, A.; Miyairi, Y.; Yamamoto, S.

    2015-12-01

    The east coast of Japan is prone to tsunamigenic megathrust earthquakes, as tragically demonstrated in 2011 by the Tōhoku earthquake (Mw 9.0) and tsunami. The Nankai Trough subduction zone, to the southwest of the area affected by the Tōhoku disaster and facing the densely populated and heavily industrialized southern coastline of central and west Japan, is expected to generate another megathrust earthquake and tsunami in the near future. This subduction zone is, however, segmented and appears to be characterized by a variable rupture mode, involving single- as well as multi-segment ruptures. This has immediate implications for tsunamigenic potential and makes the collection of sufficiently long records of past earthquakes and tsunami in this region fundamental for adequate hazard and risk assessment. Over the past three decades, Japanese researchers have acquired a large amount of geological evidence of past earthquakes and tsunami, in many cases extending back in time for several thousand years. This evidence includes uplifted marine terraces, turbidites, liquefaction features, subsided marshes and tsunami deposits in coastal lakes and lowlands. Despite these efforts, current understanding of the behaviour of the subduction zone remains limited, due to site-specific thresholds for evidence creation and preservation, debate over alternative hypotheses for proposed palaeoseismic evidence, and insufficiently precise chronological control. Within the QuakeRecNankai project we are generating a long and coherent time series of megathrust earthquake and tsunami recurrences along the Nankai Trough subduction zone by integrating all existing evidence with new geological records of paleo-tsunami in the Lake Hamana region and of paleo-earthquakes from selected lakes in the Mount Fuji area. 
We combine extensive fieldwork in coastal plain areas and lakes, with advanced sedimentological and geochemical analyses and innovative dating techniques.

  14. Analog earthquakes

    International Nuclear Information System (INIS)

    Hofmann, R.B.

    1995-01-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository

  15. Satellite Geodetic Constraints On Earthquake Processes: Implications of the 1999 Turkish Earthquakes for Fault Mechanics and Seismic Hazards on the San Andreas Fault

    Science.gov (United States)

    Reilinger, Robert

    2005-01-01

    Our principal activities during the initial phase of this project include: 1) Continued monitoring of postseismic deformation for the 1999 Izmit and Duzce, Turkey earthquakes from repeated GPS survey measurements and expansion of the Marmara Continuous GPS Network (MAGNET), 2) Establishing three North Anatolian fault crossing profiles (10 sites/profile) at locations that experienced major surface-fault earthquakes at different times in the past to examine strain accumulation as a function of time in the earthquake cycle (2004), 3) Repeat observations of selected sites in the fault-crossing profiles (2005), 4) Repeat surveys of the Marmara GPS network to continue to monitor postseismic deformation, 5) Refining block models for the Marmara Sea seismic gap area to better understand earthquake hazards in the Greater Istanbul area, 6) Continuing development of models for afterslip and distributed viscoelastic deformation for the earthquake cycle. We are keeping in close contact with MIT colleagues (Brad Hager and Eric Hetland) who are developing models for S. California and for the earthquake cycle in general (Hetland, 2006). In addition, our Turkish partners at the Marmara Research Center have undertaken repeat micro-gravity measurements at the MAGNET sites and have provided us estimates of gravity change during the period 2003 - 2005.
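Fault-crossing velocity profiles of the kind described above are commonly interpreted with the elastic screw-dislocation (Savage-Burford) model, in which interseismic fault-parallel velocity varies as an arctangent of distance from the fault. A sketch with hypothetical North Anatolian-like values (the slip rate and locking depth are illustrative assumptions, not results from this project):

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Savage-Burford screw-dislocation model: fault-parallel velocity
    at distance x from a strike-slip fault locked to depth D,
    v(x) = (s / pi) * arctan(x / D)."""
    return (slip_rate_mm_yr / np.pi) * np.arctan(x_km / locking_depth_km)

x = np.linspace(-100, 100, 21)                # profile distances (km)
v = interseismic_velocity(x, 24.0, 15.0)      # hypothetical rate (mm/yr) and depth (km)
```

Far from the fault the profile flattens toward half the slip rate on each side; the width of the transition constrains the locking depth, which is why dense profiles across the fault are so informative.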

  16. Earthquake Drill using the Earthquake Early Warning System at an Elementary School

    Science.gov (United States)

    Oki, Satoko; Yazaki, Yoshiaki; Koketsu, Kazuki

    2010-05-01

    Japan frequently suffers from many kinds of disasters, such as earthquakes, typhoons, floods, volcanic eruptions, and landslides. In the past decade, Japan has lost about 120 people a year to natural hazards. Earthquakes are especially noteworthy, since a single event may kill thousands of people in a moment, as in Kobe in 1995. People know that we may have "a big one" some day as long as we live on this land, and they know what to do: retrofit houses, fasten heavy furniture to walls, add latches to kitchen cabinets, and prepare emergency packs. Yet most of them do not take action, and the result is the loss of many lives. Only the victims learn from an earthquake, and the lessons have never become common knowledge. One of the most essential ways to reduce the damage is to educate the general public so that they can make sound decisions about what to do at the moment an earthquake hits; this requires knowledge of the background of the ongoing phenomenon. The Ministry of Education, Culture, Sports, Science and Technology (MEXT) therefore solicited proposals to choose several model areas in which to bring scientific education to local elementary schools. This presentation reports on a year and a half of courses at the model elementary school in the Tokyo Metropolitan Area. The tectonic setting of this area is very complicated: the Pacific and Philippine Sea plates subduct beneath the North American and Eurasian plates. The subduction of the Philippine Sea plate causes mega-thrust earthquakes such as the 1923 Kanto earthquake (M 7.9), which caused 105,000 fatalities. A magnitude 7 or greater earthquake beneath this area has recently been evaluated to have a 70% probability of occurring within 30 years. 
This is of immediate concern for the devastating loss of life and property because the Tokyo urban region now has a population of 42 million and is the center of approximately 40 % of the nation's activities, which may cause great global

  17. Ground-rupturing earthquakes on the northern Big Bend of the San Andreas Fault, California, 800 A.D. to Present

    Science.gov (United States)

    Scharer, Katherine M.; Weldon, Ray; Biasi, Glenn; Streig, Ashley; Fumal, Thomas E.

    2017-01-01

    Paleoseismic data on the timing of ground-rupturing earthquakes constrain the recurrence behavior of active faults and can provide insight on the rupture history of a fault if earthquakes dated at neighboring sites overlap in age and are considered correlative. This study presents the evidence and ages for 11 earthquakes that occurred along the Big Bend section of the southern San Andreas Fault at the Frazier Mountain paleoseismic site. The most recent earthquake to rupture the site was the Mw 7.7–7.9 Fort Tejon earthquake of 1857. We use over 30 trench excavations to document the structural and sedimentological evolution of a small pull-apart basin that has been repeatedly faulted and folded by ground-rupturing earthquakes. A sedimentation rate of 0.4 cm/yr and abundant organic material for radiocarbon dating contribute to a record that is considered complete since 800 A.D. and includes 10 paleoearthquakes. Earthquakes have ruptured this location on average every ~100 years over the last 1200 years, but individual intervals range from ~22 to 186 years. The coefficient of variation of the length of time between earthquakes (0.7) indicates quasiperiodic behavior, similar to other sites along the southern San Andreas Fault. Comparison with the earthquake chronology at neighboring sites along the fault indicates that only one other 1857-size earthquake could have occurred since 1350 A.D., and since 800 A.D., the Big Bend and Mojave sections have ruptured together at most 50% of the time in Mw ≥ 7.3 earthquakes.
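The coefficient of variation (CV) quoted above is simply the sample standard deviation of the inter-event times divided by their mean; CV near 0 indicates periodic recurrence, while CV near 1 is consistent with a Poisson process. A quick computation with hypothetical intervals (not the published Frazier Mountain dates):

```python
import numpy as np

# Hypothetical inter-event times (years), chosen to have a mean of ~100 yr
intervals = np.array([22, 45, 60, 85, 100, 110, 140, 160, 186, 92])

mean_interval = intervals.mean()
cov = intervals.std(ddof=1) / mean_interval  # coefficient of variation
```

A CV well below 1, as here and in the study (0.7), supports quasiperiodic rather than purely random recurrence.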

  18. Tsunami-hazard assessment based on subaquatic slope-failure susceptibility and tsunami-inundation modeling

    Science.gov (United States)

    Anselmetti, Flavio; Hilbe, Michael; Strupler, Michael; Baumgartner, Christoph; Bolz, Markus; Braschler, Urs; Eberli, Josef; Liniger, Markus; Scheiwiller, Peter; Strasser, Michael

    2015-04-01

    Due to their smaller dimensions and confined bathymetry, lakes act as model oceans that may be used as analogues for the much larger oceans and their margins. Numerous studies in the perialpine lakes of Central Europe have shown that their shores were repeatedly struck by several-meters-high tsunami waves, which were caused by subaquatic slides usually triggered by earthquake shaking. A profound knowledge of these hazards, their intensities and recurrence rates is needed in order to perform thorough tsunami-hazard assessment for the usually densely populated lake shores. In this context, we present results of a study combining i) basinwide slope-stability analysis of subaquatic sediment-charged slopes with ii) identification of scenarios for subaquatic slides triggered by seismic shaking, iii) forward modeling of resulting tsunami waves and iv) mapping of intensity of onshore inundation in populated areas. Sedimentological, stratigraphical and geotechnical knowledge of the potentially unstable sediment drape on the slopes is required for slope-stability assessment. Together with critical ground accelerations calculated from already failed slopes and paleoseismic recurrence rates, scenarios for subaquatic sediment slides are established. Following a previously used approach, the slides are modeled as a Bingham plastic on a 2D grid. The effect on the water column and wave propagation are simulated using the shallow-water equations (GeoClaw code), which also provide data for tsunami inundation, including flow depth, flow velocity and momentum as key variables. Combining these parameters leads to so called «intensity maps» for flooding that provide a link to the established hazard mapping framework, which so far does not include these phenomena. 
The current versions of these maps consider a 'worst case' deterministic earthquake scenario; however, similar maps can be calculated using probabilistic earthquake recurrence rates, which are expressed in variable amounts of
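Critical ground accelerations for already-failed slopes, as used above, are often estimated with a pseudostatic infinite-slope analysis, in which a horizontal seismic coefficient k (a fraction of g) is added to the force balance. A cohesionless sketch with hypothetical slope and friction angles (the study's geotechnical parameters are not reproduced here):

```python
import math

def pseudostatic_fs(slope_deg, friction_deg, k):
    """Pseudostatic factor of safety for an infinite, cohesionless slope.
    Normal force: W*(cos(b) - k*sin(b)); driving shear: W*(sin(b) + k*cos(b)).
    FS < 1 means failure is predicted."""
    b = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    return (math.cos(b) - k * math.sin(b)) * math.tan(phi) / (math.sin(b) + k * math.cos(b))

# Hypothetical lake-slope values: 10 deg slope, 25 deg friction angle
fs_static = pseudostatic_fs(10.0, 25.0, 0.0)
fs_shaken = pseudostatic_fs(10.0, 25.0, 0.10)
```

The seismic coefficient at which FS drops to 1 is the critical acceleration; with these hypothetical values it is near k ≈ tan(25° − 10°) ≈ 0.27.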

  19. Computational fluid dynamics (CFD) using porous media modeling predicts recurrence after coiling of cerebral aneurysms.

    Science.gov (United States)

    Umeda, Yasuyuki; Ishida, Fujimaro; Tsuji, Masanori; Furukawa, Kazuhiro; Shiba, Masato; Yasuda, Ryuta; Toma, Naoki; Sakaida, Hiroshi; Suzuki, Hidenori

    2017-01-01

    This study aimed to predict recurrence after coil embolization of unruptured cerebral aneurysms with computational fluid dynamics (CFD) using porous media modeling (porous media CFD). A total of 37 unruptured cerebral aneurysms treated with coiling were analyzed using follow-up angiograms, simulated CFD prior to coiling (control CFD), and porous media CFD. Coiled aneurysms were classified into stable or recurrence groups according to follow-up angiogram findings. Morphological parameters, coil packing density, and hemodynamic variables were evaluated for their correlations with aneurysmal recurrence. We also calculated residual flow volumes (RFVs), a novel hemodynamic parameter used to quantify the residual aneurysm volume after simulated coiling, which has a mean fluid domain > 1.0 cm/s. Follow-up angiograms showed 24 aneurysms in the stable group and 13 in the recurrence group. Mann-Whitney U test demonstrated that maximum size, dome volume, neck width, neck area, and coil packing density were significantly different between the two groups (P CFD and larger RFVs in the porous media CFD. Multivariate logistic regression analyses demonstrated that RFV was the only independently significant factor (odds ratio, 1.06; 95% confidence interval, 1.01-1.11; P = 0.016). The study findings suggest that RFV collected under porous media modeling predicts the recurrence of coiled aneurysms.
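The reported odds ratio of 1.06 per unit of RFV can be translated into probability terms using the standard logistic-regression interpretation. A sketch (the baseline probability uses the study's 13/37 recurrence proportion; the 10-unit increment is purely illustrative, and the units of RFV are not restated here):

```python
def updated_probability(p0, odds_ratio, delta):
    """Shift a baseline probability p0 by delta units of a predictor whose
    per-unit odds ratio is odds_ratio (logistic-regression interpretation):
    new_odds = odds0 * OR**delta, then convert back to a probability."""
    odds = p0 / (1 - p0) * odds_ratio ** delta
    return odds / (1 + odds)

p0 = 13 / 37                       # observed recurrence proportion
p_high = updated_probability(p0, 1.06, 10)  # 10 units more RFV (illustrative)
```

Because the odds multiply by 1.06 per unit, even modest RFV differences compound into clinically meaningful probability shifts.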

  20. Shortcomings of InSAR for studying megathrust earthquakes: The case of the Mw 9.0 Tohoku-Oki earthquake

    KAUST Repository

    Feng, Guangcai

    2012-05-28

    Interferometric Synthetic Aperture Radar (InSAR) observations are sometimes the only geodetic data of large subduction-zone earthquakes. However, these data usually suffer from spatially long-wavelength orbital and atmospheric errors that can be difficult to distinguish from the coseismic deformation and may therefore result in biased fault-slip inversions. To study how well InSAR constrains fault slip of large subduction-zone earthquakes, we use data of the 11 March 2011 Tohoku-Oki earthquake (Mw 9.0) and test InSAR-derived fault-slip models against models constrained by GPS data from the extensive nationwide network in Japan. The coseismic deformation field was mapped using InSAR data acquired from multiple ascending and descending passes of the ALOS and Envisat satellites. We then estimated several fault-slip distribution models that were constrained using the InSAR data alone, onland and seafloor GPS/acoustic data, or combinations of the different data sets. Based on comparisons of the slip models, we find that there is no real gain from including InSAR observations when determining the fault-slip distribution of this earthquake. That said, some of the main fault-slip patterns can be retrieved using the InSAR data alone when long-wavelength orbital/atmospheric ramps are estimated as part of the modeling. Our final preferred fault-slip solution of the Tohoku-Oki earthquake is based only on the GPS data and has maximum reverse- and strike-slip of 36.0 m and 6.0 m, respectively, located northeast of the epicenter at a depth of 6 km, and a total geodetic moment of 3.6 × 10²² Nm (Mw 9.01), similar to seismological estimates.
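The quoted geodetic moment converts to moment magnitude with the standard Hanks-Kanamori relation; note that the additive constant differs slightly between conventions (9.05 vs. 9.1), which accounts for small discrepancies of a few hundredths of a magnitude unit:

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude from seismic moment in N*m:
    Mw = (2/3) * (log10(M0) - 9.05)  (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.05)

mw = moment_magnitude(3.6e22)  # geodetic moment quoted in the abstract
print(round(mw, 2))            # → 9.0
```

Using the alternative 9.1 offset instead gives a value roughly 0.03 units lower, so the abstract's Mw 9.01 is consistent with the quoted moment.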

  1. Local Recurrence After Uveal Melanoma Proton Beam Therapy: Recurrence Types and Prognostic Consequences

    International Nuclear Information System (INIS)

    Caujolle, Jean-Pierre; Paoli, Vincent; Chamorey, Emmanuel; Maschi, Celia; Baillif, Stéphanie; Herault, Joël; Gastaud, Pierre; Hannoun-Levi, Jean Michel

    2013-01-01

    Purpose: To study the prognosis of the different types of uveal melanoma recurrence treated by proton beam therapy (PBT). Methods and Materials: This retrospective study analyzed 61 cases of local recurrence of uveal melanoma out of a total of 1102 patients treated by PBT between June 1991 and December 2010. Survival rates were determined using Kaplan-Meier curves, and prognostic factors were evaluated using the log-rank test or Cox model. Results: Our local recurrence rate was 6.1% at 5 years. These recurrences comprised 25 patients with marginal recurrences, 18 global recurrences, 12 distant recurrences, and 6 extrascleral extensions. Five factors were identified as statistically significant risk factors of local recurrence in the univariate analysis: large tumoral diameter, small tumoral volume, low ratio of tumoral volume to eyeball volume, iris root involvement, and a safety margin of less than 1 mm. In the local recurrence-free population, the overall survival rate was 68.7% at 10 years and the specific survival rate was 83.6% at 10 years. In the local recurrence population, the overall survival rate was 43.1% at 10 years and the specific survival rate was 55% at 10 years. The multivariate analysis of death risk factors showed a better prognosis for marginal recurrences. Conclusion: The survival rate of marginal recurrences is superior to that of the other recurrence types, so the type of recurrence is a clinical prognostic factor to take into account. The influence of retreatment of local recurrences by proton beam therapy should be evaluated in future studies.

  2. Local Recurrence After Uveal Melanoma Proton Beam Therapy: Recurrence Types and Prognostic Consequences

    Energy Technology Data Exchange (ETDEWEB)

    Caujolle, Jean-Pierre, E-mail: ncaujolle@aol.com [Department of Ophthalmology, Saint Roch Hospital, Nice Teaching Hospital, Nice (France); Paoli, Vincent [Department of Ophthalmology, Saint Roch Hospital, Nice Teaching Hospital, Nice (France); Chamorey, Emmanuel [Department of Radiation Oncology, Protontherapy Center, Centre Antoine Lacassagne, Nice (France); Department of Biostatistics and Epidemiology, Centre Antoine Lacassagne, Nice (France); Maschi, Celia; Baillif, Stéphanie [Department of Ophthalmology, Saint Roch Hospital, Nice Teaching Hospital, Nice (France); Herault, Joël [Department of Radiation Oncology, Protontherapy Center, Centre Antoine Lacassagne, Nice (France); Gastaud, Pierre [Department of Ophthalmology, Saint Roch Hospital, Nice Teaching Hospital, Nice (France); Hannoun-Levi, Jean Michel [Department of Radiation Oncology, Protontherapy Center, Centre Antoine Lacassagne, Nice (France)

    2013-04-01

    Purpose: To study the prognosis of the different types of uveal melanoma recurrence treated by proton beam therapy (PBT). Methods and Materials: This retrospective study analyzed 61 cases of local recurrence of uveal melanoma out of a total of 1102 patients treated by PBT between June 1991 and December 2010. Survival rates were determined using Kaplan-Meier curves, and prognostic factors were evaluated using the log-rank test or Cox model. Results: Our local recurrence rate was 6.1% at 5 years. These recurrences comprised 25 patients with marginal recurrences, 18 global recurrences, 12 distant recurrences, and 6 extrascleral extensions. Five factors were identified as statistically significant risk factors of local recurrence in the univariate analysis: large tumoral diameter, small tumoral volume, low ratio of tumoral volume to eyeball volume, iris root involvement, and a safety margin of less than 1 mm. In the local recurrence-free population, the overall survival rate was 68.7% at 10 years and the specific survival rate was 83.6% at 10 years. In the local recurrence population, the overall survival rate was 43.1% at 10 years and the specific survival rate was 55% at 10 years. The multivariate analysis of death risk factors showed a better prognosis for marginal recurrences. Conclusion: The survival rate of marginal recurrences is superior to that of the other recurrence types, so the type of recurrence is a clinical prognostic factor to take into account. The influence of retreatment of local recurrences by proton beam therapy should be evaluated in future studies.

  3. Stress and Strain Rates from Faults Reconstructed by Earthquakes Relocalization

    Science.gov (United States)

    Morra, G.; Chiaraluce, L.; Di Stefano, R.; Michele, M.; Cambiotti, G.; Yuen, D. A.; Brunsvik, B.

    2017-12-01

    Recurrence of main earthquakes on the same fault depends on the kinematic setting, host lithologies, and fault geometry and population. Northern and central Italy transitioned from convergence to post-orogenic extension. This has produced a unique and very complex tectonic setting, characterized by superimposed normal faults crossing different geologic domains, that allows a variety of seismic manifestations to be investigated. In the past twenty years three seismic sequences (1997 Colfiorito, 2009 L'Aquila and 2016-17 Amatrice-Norcia-Visso) activated a 150 km-long normal fault system located between the central and northern Apennines, allowing the recording of thousands of seismic events. Both the 1997 and the 2009 main shocks were preceded by a series of small pre-shocks occurring in proximity to the future largest events. It has been proposed and modelled that the seismicity pattern of the two foreshock sequences was caused by an active dilatancy phenomenon, due to fluid flow in the source area. Seismic activity has continued intensively until three events with 6.0

  4. Earthquake imprints on a lacustrine deltaic system: The Kürk Delta along the East Anatolian Fault (Turkey)

    KAUST Repository

    Hubert-Ferrari, Aurélia; El-Ouahabi, Meriam; Garcia-Moreno, David; Avsar, Ulas; Altınok, Sevgi; Schmidt, Sabine; Fagel, Nathalie; Çağatay, Namık

    2017-01-01

    Deltas contain sedimentary records that are not only indicative of water-level changes, but also particularly sensitive to earthquake shaking, which typically results in soft-sediment-deformation structures. The Kürk lacustrine delta lies at the south-western extremity of Lake Hazar in eastern Turkey and is adjacent to the seismogenic East Anatolian Fault, which has generated earthquakes of magnitude 7. This study re-evaluates the water-level changes and earthquake shaking that have affected the Kürk Delta, combining geophysical data (seismic-reflection profiles and side-scan sonar), remote sensing images, historical data, onland outcrops and offshore coring. The history of water-level changes provides a temporal framework for the depositional record. In addition to the common soft-sediment deformation documented previously, onland outcrops reveal a record of deformation (fracturing, tilt and clastic dykes) linked to large earthquake-induced liquefaction and lateral spreading. The recurrent liquefaction structures can be used to obtain a palaeoseismological record. Five event horizons were identified that could be linked to historical earthquakes occurring in the last 1000 years along the East Anatolian Fault. Sedimentary cores sampling the most recent subaqueous sedimentation revealed the occurrence of another type of earthquake indicator. Based on radionuclide dating (137Cs and 210Pb), two major sedimentary events were attributed to the AD 1874 to 1875 East Anatolian Fault earthquake sequence. Their sedimentological characteristics were determined by X-ray imagery, X-ray diffraction, loss-on-ignition, grain-size distribution and geophysical measurements. The events are interpreted to be hyperpycnal deposits linked to post-seismic sediment reworking of earthquake-triggered landslides.

  5. Earthquake imprints on a lacustrine deltaic system: The Kürk Delta along the East Anatolian Fault (Turkey)

    KAUST Repository

    Hubert-Ferrari, Aurélia

    2017-01-05

    Deltas contain sedimentary records that are not only indicative of water-level changes, but also particularly sensitive to earthquake shaking, which typically results in soft-sediment-deformation structures. The Kürk lacustrine delta lies at the south-western extremity of Lake Hazar in eastern Turkey and is adjacent to the seismogenic East Anatolian Fault, which has generated earthquakes of magnitude 7. This study re-evaluates the water-level changes and earthquake shaking that have affected the Kürk Delta, combining geophysical data (seismic-reflection profiles and side-scan sonar), remote sensing images, historical data, onland outcrops and offshore coring. The history of water-level changes provides a temporal framework for the depositional record. In addition to the common soft-sediment deformation documented previously, onland outcrops reveal a record of deformation (fracturing, tilt and clastic dykes) linked to large earthquake-induced liquefaction and lateral spreading. The recurrent liquefaction structures can be used to obtain a palaeoseismological record. Five event horizons were identified that could be linked to historical earthquakes occurring in the last 1000 years along the East Anatolian Fault. Sedimentary cores sampling the most recent subaqueous sedimentation revealed the occurrence of another type of earthquake indicator. Based on radionuclide dating (137Cs and 210Pb), two major sedimentary events were attributed to the AD 1874 to 1875 East Anatolian Fault earthquake sequence. Their sedimentological characteristics were determined by X-ray imagery, X-ray diffraction, loss-on-ignition, grain-size distribution and geophysical measurements. The events are interpreted to be hyperpycnal deposits linked to post-seismic sediment reworking of earthquake-triggered landslides.

  6. Earthquake outlook for the San Francisco Bay region 2014–2043

    Science.gov (United States)

    Aagaard, Brad T.; Blair, James Luke; Boatwright, John; Garcia, Susan H.; Harris, Ruth A.; Michael, Andrew J.; Schwartz, David P.; DiLeo, Jeanne S.; Jacques, Kate; Donlin, Carolyn

    2016-06-13

    Using information from recent earthquakes, improved mapping of active faults, and a new model for estimating earthquake probabilities, the 2014 Working Group on California Earthquake Probabilities updated the 30-year earthquake forecast for California. They concluded that there is a 72 percent probability (or likelihood) of at least one earthquake of magnitude 6.7 or greater striking somewhere in the San Francisco Bay region before 2043. Earthquakes this large are capable of causing widespread damage; therefore, communities in the region should take simple steps to help reduce injuries, damage, and disruption, as well as accelerate recovery from these earthquakes.
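If the 30-year forecast is approximated as a Poisson process (an illustrative simplification; the Working Group's actual model is more elaborate), the 72 percent figure implies an equivalent annual rate and one-year probability as follows:

```python
import math

p30 = 0.72                       # forecast probability of at least one M>=6.7 event in 30 yr
rate = -math.log(1 - p30) / 30   # equivalent annual rate under a Poisson model
p_1yr = 1 - math.exp(-rate)      # implied one-year probability
print(round(p_1yr, 3))           # → 0.042
```

The conversion shows why a high multi-decade probability still corresponds to only a few percent in any given year.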

  7. Automatic Earthquake Detection by Active Learning

    Science.gov (United States)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
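The human-in-the-loop strategy described above can be sketched as a toy pool-based uncertainty-sampling loop. The synthetic data, the nearest-centroid classifier, and the "expert" (which simply reveals the true label) are all stand-ins for the real seismic features, detector, and analyst:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D feature vectors for "noise" (class 0) and "earthquake" (class 1)
noise = rng.normal([0.0, 0.0], 1.0, (200, 2))
quakes = rng.normal([2.5, 2.5], 1.0, (200, 2))
X = np.vstack([noise, quakes])
y = np.array([0] * 200 + [1] * 200)

labeled = [0, 1, 200, 201]                      # tiny initial training set (both classes)
pool = [i for i in range(len(X)) if i not in labeled]

def margin(Xq, Xl, yl):
    """Nearest-centroid classifier; positive margin means class 1.
    Small |margin| means the sample sits near the decision boundary."""
    c0 = Xl[yl == 0].mean(axis=0)
    c1 = Xl[yl == 1].mean(axis=0)
    return np.linalg.norm(Xq - c0, axis=1) - np.linalg.norm(Xq - c1, axis=1)

for _ in range(20):                             # active-learning loop
    m = margin(X[pool], X[labeled], y[labeled])
    query = pool.pop(int(np.argmin(np.abs(m)))) # most uncertain pool sample
    labeled.append(query)                       # "expert" supplies its true label

acc = ((margin(X, X[labeled], y[labeled]) > 0) == y).mean()
```

Each query targets the sample the current model is least sure about, so a handful of expert labels improves the detector far more than random labeling would.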

  8. Fault Rupture Model of the 2016 Gyeongju, South Korea, Earthquake and Its Implication for the Underground Fault System

    Science.gov (United States)

    Uchide, Takahiko; Song, Seok Goo

    2018-03-01

    The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.
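Stress drops like those quoted above are commonly estimated from the seismic moment and an inferred rupture radius via Eshelby's circular-crack relation. A sketch with hypothetical values of the right order for an ML 5.8 event (the study's actual moment and source dimensions are not reproduced here):

```python
def stress_drop_mpa(m0_nm, radius_m):
    """Eshelby circular-crack estimate of static stress drop:
    delta_sigma = (7/16) * M0 / r**3, converted from Pa to MPa."""
    return (7.0 / 16.0) * m0_nm / radius_m ** 3 / 1e6

# Hypothetical: M0 ~ 4e17 N*m with a ~1.5 km rupture radius
ds = stress_drop_mpa(4e17, 1500.0)
```

Because the radius enters cubed, modest uncertainty in source size translates into large uncertainty in stress drop, which is worth keeping in mind when comparing values such as the 62 and 43 MPa quoted above.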

  9. Modeling of a historical earthquake in Erzincan, Turkey (Ms 7.8, in 1939) using regional seismological information obtained from a recent event

    Science.gov (United States)

    Karimzadeh, Shaghayegh; Askan, Aysegul

    2018-04-01

    Located within a basin structure at the conjunction of the North East Anatolian, North Anatolian and Ovacik Faults, the Erzincan city center (Turkey) is one of the most hazardous regions in the world. The combination of the seismotectonic and geological settings of the region has resulted in a series of significant seismic activities, including the 1939 (Ms 7.8) as well as the 1992 (Mw 6.6) earthquakes. The devastating 1939 earthquake occurred in the pre-instrumental era in the region, with no available local seismograms; thus, only a limited number of studies exist on that earthquake. The 1992 event, however, despite the sparse local network at that time, has been studied extensively. This study aims to simulate the 1939 Erzincan earthquake using available regional seismic and geological parameters. Despite the several uncertainties involved, such an effort to quantitatively model the 1939 earthquake is promising, given the historical reports of extensive damage and fatalities in the area. The results of this study are expressed in terms of anticipated acceleration time histories at certain locations, the spatial distribution of selected ground motion parameters, and felt intensity maps of the region. Simulated motions are first compared against empirical ground motion prediction equations derived from both local and global datasets. Next, anticipated intensity maps of the 1939 earthquake are obtained using local correlations between peak ground motion parameters and felt intensity values. Comparisons of the estimated intensity distributions with the corresponding observed intensities indicate that the 1939 earthquake is modeled reasonably well.

  10. A Self-Consistent Fault Slip Model for the 2011 Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Yamazaki, Yoshiki; Cheung, Kwok Fai; Lay, Thorne

    2018-02-01

    The unprecedented geophysical and hydrographic data sets from the 2011 Tohoku earthquake and tsunami have facilitated numerous modeling and inversion analyses for a wide range of dislocation models. Significant uncertainties remain in the slip distribution, as well as in the possible contribution of tsunami excitation from submarine slumping or anelastic wedge deformation. We seek a self-consistent model for the primary teleseismic and tsunami observations through an iterative approach that begins with downsampling of a finite fault model inverted from global seismic records. Direct adjustment of the fault displacement, guided by high-resolution forward modeling of near-field tsunami waveform and runup measurements, improves the features that are not satisfactorily accounted for by the seismic wave inversion. The results show acute sensitivity of the runup to impulsive tsunami waves generated by near-trench slip. The adjusted finite fault model is able to reproduce the DART records across the Pacific Ocean in forward modeling of the far-field tsunami, as well as the global seismic records through a finer-scale subfault moment- and rake-constrained inversion, thereby validating its ability to account for the tsunami and teleseismic observations without requiring an exotic source. The upsampled final model gives reasonably good fits to onshore and offshore geodetic observations, albeit with early after-slip effects and wedge faulting that cannot be reliably accounted for. The large predicted slip of over 20 m at shallow depth, extending northward to 39.7°N, indicates extensive rerupture and reduced seismic hazard of the 1896 tsunami earthquake zone, as inferred to varying extents by several recent joint and tsunami-only inversions.

  11. Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)

    Science.gov (United States)

    Jordan, T. H.

    2010-12-01

    The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and
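The arithmetic behind the "low-probability environment" point is worth making explicit: even a hundredfold probability gain applied to a typical long-term weekly probability still leaves the short-term probability at only the percent level. A minimal sketch with purely illustrative numbers (not taken from the presentation):

```python
# Why a hundredfold probability gain can still mean a small absolute
# probability. The baseline value below is an illustrative assumption.
baseline_weekly_p = 1e-4        # long-term (PSHA-style) weekly probability
gain = 100                      # nominal probability gain from a short-term model
short_term_p = baseline_weekly_p * gain
print(f"short-term weekly probability: {short_term_p:.2%}")
```

Even at this elevated level, the probability of a large earthquake in the forecast window remains around one percent, which is the dilemma civil protection agencies face.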

  12. The seismic cycle at subduction thrusts: 1. Insights from laboratory models

    KAUST Repository

    Corbi, F.; Funiciello, F.; Moroni, M.; van Dinther, Y.; Mai, Paul Martin; Dalguer, L. A.; Faccenna, C.

    2013-01-01

    Subduction megathrust earthquakes occur at the interface between the subducting and overriding plates. These hazardous phenomena are only partially understood because of the absence of direct observations, the restriction of the instrumental seismic record to the past century, and the limited resolution and completeness of historical and geological archives. To overcome these restrictions, modeling has become a key tool for studying megathrust earthquakes. We present a novel model for investigating the seismic cycle at subduction thrusts using complementary analog (paper 1) and numerical (paper 2) approaches. Here we introduce a simple scaled gelatin-on-sandpaper setup that includes realistic tectonic loading, spontaneous rupture nucleation, and viscoelastic response of the lithosphere. Particle image velocimetry allows us to derive model deformation and earthquake source parameters. Analog earthquakes are characterized by “quasi-periodic” recurrence. Consistent with elastic theory, the interseismic stage shows rearward motion, subsidence in the outer wedge, and uplift of the “coastal area” in response to the locked plate interface at shallow depth. The coseismic stage exhibits order-of-magnitude higher velocities and a reversal of the interseismic deformation pattern: seaward motion, subsidence of the coastal area, and uplift in the outer wedge. Like natural earthquakes, analog earthquakes generally nucleate in the deeper portion of the rupture area and preferentially propagate upward in a crack-like fashion. Scaled rupture width-slip proportionality and seismic moment-duration scaling verify dynamic similarity with earthquakes. Experimental repeatability is statistically verified. Comparing the analog results with natural observations, we conclude that this technique is suitable for investigating the parameter space influencing the subduction interplate seismic cycle.

  14. Latin American contributions to the GEM’s Earthquake Consequences Database

    OpenAIRE

    Cardona Arboleda, Omar Dario; Ordaz Schroeder, Mario Gustavo; Salgado Gálvez, Mario Andrés; Carreño Tibaduiza, Martha Liliana; Barbat Barbat, Horia Alejandro

    2016-01-01

    One of the projects of the Global Earthquake Model (GEM) was to develop a global earthquake consequences database (GEMECD), which serves both as an open, public repository of damage and losses for different types of elements at the global level and as a benchmark for the development of vulnerability models that can capture specific characteristics of the affected countries. The online earthquake consequences database has information on 71 events, of which 14 correspond to events that occ...

  15. Numerical Modeling on Co-seismic Influence of Wenchuan 8.0 Earthquake in Sichuan-Yunnan Area, China

    Science.gov (United States)

    Chen, L.; Li, H.; Lu, Y.; Li, Y.; Ye, J.

    2009-12-01

    In this paper, a three-dimensional finite element model of the Sichuan-Yunnan area, in which active faults are handled with contact friction elements, is built. Applying boundary conditions determined from GPS data, numerical simulations of the spatial patterns of stress-strain changes induced by the Wenchuan Ms 8.0 earthquake are performed. The primary results are: a) the co-seismic displacements in the Longmen Shan fault zone from the initial cracking event favor both the NE-directed expansion of the subsequent fracture process and the conversion of focal mechanisms from thrust to right-lateral strike-slip for most of the following sub-cracking events; b) tectonic movements induced by the Wenchuan earthquake are stronger in the hanging wall of the Longmen Shan fault belt than in the footwall and are influenced markedly by the northeast boundary faults of the rhombic block; c) the extrema of the stress changes induced by the main shock are on the order of 10^6 Pa, over a region about 400 km long and 100 km wide; the total stress level is reduced in most of the Longmen Shan fault zone, whereas the stress change is rather weak in its southwest segment, possibly resulting in fewer aftershocks there; d) the effects of the Wenchuan earthquake on the major active faults differ markedly from one another; e) the triggering effect of the Wenchuan earthquake on the subsequent Huili 6.1 earthquake is very weak.

  16. Studies of the subsurface effects of earthquakes

    International Nuclear Information System (INIS)

    Marine, I.W.

    1980-01-01

    As part of the National Terminal Waste Storage Program, the Savannah River Laboratory is conducting a series of studies on the subsurface effects of earthquakes. This report summarizes three subcontracted studies. (1) Earthquake damage to underground facilities: the purpose of this study was to document damage and nondamage caused by earthquakes to tunnels and shallow underground openings; to mines and other deep openings; and to wells, shafts, and other vertical facilities. (2) Earthquake-related displacement fields near underground facilities: the study included an analysis of block motion, an analysis of the dependence of displacement on the orientation and distance of joints from the earthquake source, and displacement related to distance and depth near a causative fault as a result of various shapes, depths, and senses of movement on the causative fault. (3) Numerical simulation of earthquake effects on tunnels for generic nuclear waste repositories: the objective of this study was to use numerical modeling to determine under what conditions seismic waves might cause instability of an underground opening or create fracturing that would increase the permeability of the rock mass.

  17. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    Science.gov (United States)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress has been elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rate increasing exponentially with rising tidal stress. Thus, slow deformation, and with it the possibility of earthquakes at subduction plate boundaries, may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on the fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (i.e., the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This is consistent with the well-known relationship between stress and the b-value, and suggests that the probability of a tiny rock failure expanding into a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
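The b-value discussed above comes from the Gutenberg-Richter relation, log10 N = a - b*M. A standard way to estimate it from a catalogue is Aki's (1965) maximum-likelihood formula, b = log10(e) / (mean(M) - Mc). The sketch below applies it to a small synthetic magnitude list (illustrative, not the paper's catalogues):

```python
import math

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness magnitude m_c."""
    above = [m for m in mags if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_c)

# Synthetic catalogue; real catalogues with b ~ 1 have many more events.
mags = [3.0, 3.1, 3.2, 3.05, 3.5, 3.3, 4.0, 3.15, 3.6, 3.25]
print(round(b_value_mle(mags, 3.0), 2))
```

A decrease in the estimated b means relatively more large events, which is the signature the authors associate with higher tidal shear stress.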

  18. A smartphone application for earthquakes that matter!

    Science.gov (United States)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many as possible of the felt earthquakes, regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the US Geological Survey, in which the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking they have just felt. Altogether, we estimate that the number of detected felt earthquakes is around 1,000 per year, compared with the 35,000 earthquakes annually reported by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service, @LastQuake. We will present the identification process for the earthquakes that matter, the smartphone application itself (to be released in May), and its future evolutions.

  19. Earthquake risk assessment of building structures

    International Nuclear Information System (INIS)

    Ellingwood, Bruce R.

    2001-01-01

    During the past two decades, probabilistic risk analysis tools have been applied to assess the performance of new and existing building structural systems. Structural design and evaluation of buildings and other facilities with regard to their ability to withstand the effects of earthquakes require special considerations that are not normally a part of such evaluations for other occupancy, service, and environmental loads. This paper reviews some of these special considerations, specifically as they pertain to probability-based codified design and reliability-based condition assessment of existing buildings. Difficulties experienced in implementing probability-based limit states design criteria for earthquakes are summarized. Comparisons of predicted and observed building damage highlight the limitations of using current deterministic approaches for post-earthquake building condition assessment. The importance of inherent randomness and modeling uncertainty in forecasting building performance is examined through a building fragility assessment of a steel frame with welded connections that was damaged during the Northridge earthquake of 1994. The prospects for future improvements in earthquake-resistant design procedures, based on a more rational probability-based treatment of uncertainty, are examined.

  20. Modeling of periodic great earthquakes on the San Andreas fault: Effects of nonlinear crustal rheology

    Science.gov (United States)

    Reches, Ze'ev; Schubert, Gerald; Anderson, Charles

    1994-01-01

    We analyze the cycle of great earthquakes along the San Andreas fault with a finite element numerical model of deformation in a crust with a nonlinear viscoelastic rheology. The viscous component of deformation has an effective viscosity that depends exponentially on the inverse absolute temperature and nonlinearly on the shear stress; the elastic deformation is linear. Crustal thickness and temperature are constrained by seismic and heat flow data for California. The models assume antiplane strain in a 25-km-thick crustal layer containing a very long, vertical strike-slip fault; the crustal block extends 250 km to either side of the fault. During the earthquake cycle, which lasts 160 years, a constant plate velocity v_p/2 = 17.5 mm/yr is applied to the base of the crust and to the vertical end of the crustal block 250 km away from the fault. The upper half of the fault is locked during the interseismic period, while its lower half slips at the constant plate velocity. The locked part of the fault is moved abruptly by 2.8 m every 160 years to simulate great earthquakes. The results are sensitive to crustal rheology. Models with quartzite-like rheology display profound transient stages in the velocity, displacement, and stress fields. The predicted transient zone extends about 3-4 times the crustal thickness on each side of the fault, significantly wider than the zone of deformation in elastic models. Models with diabase-like rheology behave similarly to elastic models and exhibit no transient stages. The model predictions are compared with geodetic observations of fault-parallel velocities in northern and central California and local rates of shear strain along the San Andreas fault. The observations are best fit by models that are 10-100 times less viscous than a quartzite-like rheology. 
Since the lower crust in California is composed of intermediate to mafic rocks, the present result suggests that the in situ viscosity of the crustal rock is orders of magnitude
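The rheology described above (viscosity exponential in inverse absolute temperature, nonlinear in shear stress) is conventionally written as a dislocation-creep flow law. A generic sketch, not necessarily the exact parameterization used in this study, where A, n, and Q are laboratory-derived material constants, R the gas constant, and T absolute temperature:

```latex
\dot{\varepsilon} = A\,\tau^{\,n}\exp\!\left(-\frac{Q}{RT}\right),
\qquad
\eta_{\mathrm{eff}} \;=\; \frac{\tau}{2\dot{\varepsilon}}
\;=\; \frac{\tau^{\,1-n}}{2A}\exp\!\left(\frac{Q}{RT}\right)
```

For n > 1 the effective viscosity drops as shear stress rises, which is what produces the stress-dependent transient behavior in the quartzite-like models.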

  1. Tensile earthquakes: Theory, modeling and inversion

    Czech Academy of Sciences Publication Activity Database

    Vavryčuk, Václav

    2011-01-01

    Roč. 116, B12 (2011), B12320/1-B12320/14 ISSN 0148-0227 R&D Projects: GA AV ČR IAA300120801; GA ČR GAP210/10/2063; GA MŠk LM2010008 Institutional research plan: CEZ:AV0Z30120515 Keywords: earthquake * focal mechanism * moment tensor * non-double-couple component Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 3.021, year: 2011

  2. Impact of Short-term Changes In Earthquake Hazard on Risk In Christchurch, New Zealand

    Science.gov (United States)

    Nyst, M.

    2012-12-01

    due to events of varying severities and recurrence intervals, annual premium rates can be set with some longer-term risk planning in mind. However, this metric is particularly sensitive to high-frequency, moderate-magnitude events. Inclusion of earthquake aftershock sequence characteristics in the stochastic event set may have a strong impact on the AAL (average annual loss), depending on the time window of aftershocks that is taken into account. We will present our model of the aftershock-derived, time-dependent hazard for the region of the two earthquakes and provide a detailed view of regional, short-term hazard. Dealing with this short-term hazard poses a challenge to the earthquake insurance business. In this presentation we will look at these short-term hazard changes from a risk perspective and quantify the impact on earthquake risk in terms of the main risk metrics used in the industry.
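The AAL metric referred to above is just the rate-weighted sum of event losses over the stochastic event set, which is why adding high-frequency, moderate events (such as aftershocks) can move it substantially. A toy computation with hypothetical rates and losses:

```python
# Average annual loss (AAL): sum over the stochastic event set of each
# event's annual occurrence rate times its modeled loss.
# All rates and losses below are hypothetical.
events = [
    {"rate": 0.002, "loss": 1e9},  # rare, severe event
    {"rate": 0.100, "loss": 1e8},  # moderate magnitude, higher frequency
    {"rate": 1.000, "loss": 1e6},  # frequent, small event
]
aal = sum(e["rate"] * e["loss"] for e in events)
print(f"AAL = {aal:,.0f}")
```

In this toy set the moderate, relatively frequent event contributes the largest share of the AAL, illustrating the sensitivity the abstract describes.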

  3. Paleoseismic Trenching on 1939 Erzincan and 1942 Niksar-Erbaa Earthquake Surface Ruptures, the North Anatolian Fault (Turkey)

    Science.gov (United States)

    Akyuz, H. S.; Karabacak, V.; Zabci, C.; Sancar, T.; Altunel, E.; Gursoy, H.; Tatar, O.

    2009-04-01

    Two devastating earthquakes occurred between Erzincan (39.75N, 39.49E) and Erbaa, Tokat (40.70N, 36.58E) just three years apart, in 1939 and 1942. While the 1939 Erzincan earthquake (M=7.8) ruptured nearly 360 km, the 1942 Erbaa-Niksar earthquake (M=7.1) produced a surface rupture about 50 km long. In total, more than 35,000 people lost their lives in these events. Although Turkey has one of the richest historical earthquake records, there is no clear evidence of the spatial distribution of paleoevents within these two earthquake segments of the North Anatolian Fault. The 17 August 1668 Anatolian earthquake is one of the known previous earthquakes that may have occurred on the same segments, with a probable rupture length of more than 400 km; whether it ruptured in multiple events or in a single one is still debated in different catalogues. We carried out paleoseismic trench studies to better understand the recurrence of large earthquakes on these two faults, in the framework of T.C. DPT Project no. 2006K120220. We excavated a total of 8 trenches at 7 different sites. Three of them are along the 1942 Erbaa-Niksar earthquake rupture; the others are located on the 1939 Erzincan one. The Alanici and Direkli trenches were excavated on the 1942 rupture. The Direkli trench site is located west of Niksar, Tokat (40.62N, 36.85E), on the fluvial terrace deposits of the Kelkit River. Only one paleoevent could be determined from the structural relationships in the trench wall stratigraphy. Radiocarbon dating of a charcoal sample from above the event horizon indicates that this earthquake occurred before 480-412 BC. The second trench, Alanici, on the same segment, was located between Erbaa and Niksar (40.65N, 36.78E) at the western boundary of a sag pond. While signs of two (possibly three) earthquakes were identified on the trench wall, the event prior to the 1942 earthquake is dated to before the 5th century AD. We interpreted this to have possibility of

  4. Recurrence in affective disorder

    DEFF Research Database (Denmark)

    Kessing, L V; Olsen, E W; Andersen, P K

    1999-01-01

    The risk of recurrence in affective disorder is influenced by the number of prior episodes and by a person's tendency toward recurrence. Newly developed frailty models were used to estimate the effect of the number of episodes on the rate of recurrence, taking into account individual frailty toward recurrence. The study base was the Danish psychiatric case register of all hospital admissions for primary affective disorder in Denmark during 1971-1993. A total of 20,350 first-admission patients were discharged with a diagnosis of major affective disorder. For women with unipolar disorder and for all kinds of patients with bipolar disorder, the rate of recurrence was affected by the number of prior episodes even when the effect was adjusted for individual frailty toward recurrence. No effect of episodes but a large effect of the frailty parameter was found for unipolar men. The authors concluded...

  5. Embedding recurrent neural networks into predator-prey models.

    Science.gov (United States)

    Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon

    1999-03-01

    We study changes of coordinates that allow the embedding of the ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models, also called Lotka-Volterra systems. We first transform the equations for the neural network into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), in which the vector field of the dynamical system is expressed as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as continuous-time neural networks are.
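The target form of the embedding described above is the generalized Lotka-Volterra system, in which each state variable's growth rate is an affine function of the states. A sketch of the target form (y is the higher-dimensional transformed state; the constants lambda_i and A_ij are determined by the network weights):

```latex
\dot{y}_i \;=\; y_i\!\left(\lambda_i + \sum_{j=1}^{m} A_{ij}\, y_j\right),
\qquad i = 1,\dots,m
```

The quasi-monomial intermediate step expresses the original vector field as sums of products of powers of the variables, from which this multiplicative structure follows directly.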

  6. Subduction zone and crustal dynamics of western Washington; a tectonic model for earthquake hazards evaluation

    Science.gov (United States)

    Stanley, Dal; Villaseñor, Antonio; Benz, Harley

    1999-01-01

    The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This

  7. Laboratory generated M -6 earthquakes

    Science.gov (United States)

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
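The scaling consistency claimed above can be sanity-checked with the standard circular-crack (Eshelby) stress-drop relation, delta_sigma = (7/16) * M0 / a^3, together with the Hanks-Kanamori moment-magnitude definition. For an M -6 event and a stress drop in the quoted 1-10 MPa range, the implied source radius is indeed mm-scale. The numbers below are illustrative assumptions, not the paper's measurements:

```python
# Circular-crack consistency check for a laboratory M -6 earthquake.
def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def crack_radius(m0, stress_drop):
    """Radius (m) of a circular crack with the given moment and stress
    drop: delta_sigma = (7/16) * M0 / a^3, solved for a."""
    return (7.0 * m0 / (16.0 * stress_drop)) ** (1.0 / 3.0)

m0 = moment_from_mw(-6.0)     # roughly 1.3 N*m
a = crack_radius(m0, 3e6)     # assume a 3 MPa stress drop
print(f"M0 = {m0:.2f} N*m, source radius = {a * 1000:.1f} mm")
```

The few-millimetre radius matches the mm-scale slip patches inferred for these events, which is the sense in which they obey the same scaling laws as larger earthquakes.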

  8. Principles for selecting earthquake motions in engineering design of large dams

    Science.gov (United States)

    Krinitzsky, E.L.; Marcuson, William F.

    1983-01-01

    This report gives a synopsis of the various tools and techniques used in selecting earthquake ground motion parameters for large dams. It presents 18 charts giving newly developed relations for acceleration, velocity, and duration versus site earthquake intensity for near- and far-field hard and soft sites and for earthquakes having magnitudes above and below 7. The material in this report is based on procedures developed at the Waterways Experiment Station. Although these procedures are suggested primarily for large dams, they may also be applicable to other facilities. Because no standard procedure exists for selecting earthquake motions in engineering design of large dams, a number of precautions are presented to guide users. The selection of earthquake motions depends on which of two types of engineering analyses is performed. A pseudostatic analysis uses a coefficient usually obtained from an appropriate contour map, whereas a dynamic analysis uses either accelerograms assigned to a site or specified response spectra. Each type of analysis requires significantly different input motions. All selections of design motions must allow for the lack of representative strong motion records, especially near-field motions from earthquakes of magnitude 7 and greater, as well as for the enormous spread in the available data. Limited data must be projected and their spread bracketed in order to fill in the gaps and to assure that there will be no surprises. Because each site may have differing special characteristics in its geology, seismic history, attenuation, recurrence, interpreted maximum events, etc., an integrated approach gives best results. Each part of the site investigation requires a number of decisions. In some cases, a decision to use a 'least work' approach may be suitable, simply assuming the worst of several possibilities and testing for it. Because there are no standard procedures to follow, multiple approaches are useful. 
For example, peak motions at

  9. Update earthquake risk assessment in Cairo, Egypt

    Science.gov (United States)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan

    2017-07-01

    The Cairo earthquake (12 October 1992; mb = 5.8) is still, 25 years later, one of the most painful events etched into Egyptians' memory. This is not due to the strength of the earthquake but to the accompanying losses and damage (561 dead, 10,000 injured, and 3,000 families left homeless). Nowadays, the most frequent and important question that arises is "what if this earthquake were repeated today?" In this study, we simulate the ground motion shaking of an earthquake of the same size as the 12 October 1992 event and the consequent socioeconomic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Overall, the earthquake risk assessment clearly indicates that the losses and damage could be two or three times greater in Cairo today than in the 1992 earthquake. The earthquake risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr sharq) lie at high seismic risk, and three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at a low seismic risk level. Moreover, the building damage estimates show that Gharb is the most vulnerable district. The analysis shows that the Cairo urban area faces high risk: deteriorating buildings and infrastructure make the city particularly vulnerable to earthquakes. For instance, more than 90% of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb), and about 75% of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management. 
Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety

  10. Earthquake, GIS and multimedia. The 1883 Casamicciola earthquake

    Directory of Open Access Journals (Sweden)

    M. Rebuffat

    1995-06-01

    Full Text Available. A series of multimedia monographs concerning the main seismic events that have affected the Italian territory is being produced for the Documental Integrated Multimedia Project (DIMP), started by the Italian National Seismic Survey (NSS). The purpose of the project is to reconstruct the historical record of earthquakes and to promote public earthquake education. Producing the monographs, developed in ARC INFO and working under UNIX, involved designing a special filing and management methodology to integrate heterogeneous information (images, papers, cartographies, etc.). This paper describes the possibilities of a GIS (Geographic Information System) for the filing and management of documental information. As an example we present the first monograph, on the 1883 Casamicciola earthquake on the island of Ischia (Campania, Italy). This earthquake is particularly interesting for the following reasons: 1) its historical-cultural context (the first destructive seismic event after the unification of Italy); 2) its features (a volcanic earthquake); 3) the socioeconomic consequences it caused at such an important seaside resort.

  11. Inter-Disciplinary Validation of Pre Earthquake Signals. Case Study for Major Earthquakes in Asia (2004-2010) and for 2011 Tohoku Earthquake

    Science.gov (United States)

    Ouzounov, D.; Pulinets, S.; Hattori, K.; Liu, J.-Y.; Yang, T. Y.; Parrot, M.; Kafatos, M.; Taylor, P.

    2012-01-01

    We carried out multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several physical and environmental parameters that we found to be associated with earthquake processes: thermal infrared radiation, temperature and concentration of electrons in the ionosphere, radon/ion activities, and air temperature/humidity in the atmosphere. We used satellite and ground observations and interpreted them with the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model, one of the possible paradigms we study and support. We made two independent continuous hind-cast investigations in Taiwan and Japan for a total of 102 earthquakes (M>6) occurring from 2004 to 2011. We analyzed: (1) ionospheric electromagnetic radiation, plasma, and energetic electron measurements from DEMETER; (2) emitted long-wavelength radiation (OLR) from NOAA/AVHRR and NASA/EOS; (3) radon/ion variations (in situ data); and (4) GPS Total Electron Content (TEC) measurements collected from space- and ground-based observations. This joint analysis of ground and satellite data has shown that one to six (or more) days prior to the largest earthquakes there were anomalies in all of the analyzed physical observations. For the latest, the March 11, 2011 Tohoku earthquake, our analysis again shows the same relationship between several independent observations characterizing lithosphere/atmosphere coupling. On March 7th we found a rapid increase in emitted infrared radiation observed from satellite data, and subsequently an anomaly developed near the epicenter. The GPS/TEC data indicated an increase and variation in electron density reaching a maximum value on March 8. Beginning on this day we confirmed an abnormal TEC variation over the epicenter in the lower ionosphere. These findings revealed the existence of atmospheric and ionospheric phenomena occurring prior to the 2011 Tohoku earthquake, which indicated new evidence of a distinct

  12. OMG Earthquake! Can Twitter improve earthquake response?

    Science.gov (United States)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take from 2 to 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose from a background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information

  13. Modal-space reference-model-tracking fuzzy control of earthquake excited structures

    Science.gov (United States)

    Park, Kwan-Soon; Ok, Seung-Yong

    2015-01-01

    This paper describes an adaptive modal-space reference-model-tracking fuzzy control technique for the vibration control of earthquake-excited structures. In the proposed approach, fuzzy logic is introduced to update the optimal control force so that the controlled structural response can track the desired response of a reference model. For easy and practical implementation, the reference model is constructed by assigning target damping ratios to the first few dominant modes in modal space. The numerical simulation results demonstrate that the proposed approach achieves not only an adaptive fault-tolerant control system against partial actuator failures but also robust performance against variations of the uncertain system properties, by redistributing the feedback control forces to the available actuators.
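The modal-space idea of assigning target damping ratios to a few dominant modes can be illustrated with a single mode: raising the modal damping ratio toward a reference-model target suppresses that mode's free vibration. The integrator, frequency, and damping values below are illustrative assumptions, not parameters taken from the paper.

```python
import math

def modal_response(zeta, omega=2 * math.pi, x0=1.0, dt=1e-3, t_end=5.0):
    """Free response of one structural mode
    x'' + 2*zeta*omega*x' + omega^2*x = 0, integrated with
    semi-implicit Euler. Returns the peak |x| after the first period."""
    x, v = x0, 0.0
    peak, t = 0.0, 0.0
    while t < t_end:
        a = -2 * zeta * omega * v - omega ** 2 * x
        v += a * dt
        x += v * dt
        if t > 2 * math.pi / omega:       # look past the first cycle
            peak = max(peak, abs(x))
        t += dt
    return peak

# Assigning a higher target damping ratio to the dominant mode
# (the reference-model idea) suppresses its free vibration:
print(modal_response(0.02), modal_response(0.20))
```

The comparison shows why the reference model is specified through modal damping ratios: a single scalar per mode directly controls how fast that mode's response decays.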

  14. Earthquake Early Warning Systems

    OpenAIRE

    Pei-Yang Lin

    2011-01-01

    Because of Taiwan’s unique geographical environment, earthquake disasters occur frequently in Taiwan. The Central Weather Bureau collated earthquake data from between 1901 and 2006 (Central Weather Bureau, 2007) and found that 97 earthquakes had occurred, of which, 52 resulted in casualties. The 921 Chichi Earthquake had the most profound impact. Because earthquakes have instant destructive power and current scientific technologies cannot provide precise early warnings in advance, earthquake ...

  15. Fault failure with moderate earthquakes

    Science.gov (United States)

    Johnston, M. J. S.; Linde, A. T.; Gladwin, M. T.; Borcherdt, R. D.

    1987-12-01

    High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10^-8), with borehole dilatometers (resolution 10^-10), and with a 3-component borehole strainmeter (resolution 10^-9). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure.

  16. Pattern recognition and modelling of earthquake registrations with interactive computer support

    International Nuclear Information System (INIS)

    Manova, Katarina S.

    2004-01-01

    The object of this thesis is pattern recognition. Pattern recognition, i.e. classification, is applied in many fields: speech recognition, handprinted character recognition, medical analysis, satellite and aerial-photo interpretation, biology, computer vision, information retrieval, and so on. This thesis studies its applicability in seismology. Signal classification is of great importance in a wide variety of applications, and this thesis deals with the problem of (automatic) classification of earthquake signals, which are non-stationary signals. Non-stationary signal classification is an area of active research in the signal and image processing community. The goal of the thesis is the recognition of earthquake signals according to their epicentral zone. Source classification, i.e. recognition, is based on transforming seismograms (earthquake registrations) into images via time-frequency transformations, and applying image processing and pattern recognition techniques for feature extraction, classification, and recognition. The tested data include local earthquakes from seismic regions in Macedonia. Using actual seismic data, it is shown that the proposed methods provide satisfactory results for classification and recognition. (Author)
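As a toy illustration of the pipeline (extract a spectral feature from a registration, then recognize its source region by nearest centroid), the sketch below substitutes a zero-crossing count for the time-frequency image features and synthetic sinusoids for the Macedonian records; the zone names, frequencies, and classifier are all invented for illustration, not taken from the thesis.

```python
import math

def zero_crossings(signal):
    """Crude spectral feature: number of sign changes, a proxy for the
    dominant frequency that a time-frequency image would reveal."""
    return sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)

def make_event(freq_hz, n=200, dt=0.01):
    """Synthetic 2-second 'seismogram' sampled at 100 Hz."""
    return [math.sin(2 * math.pi * freq_hz * k * dt) for k in range(n)]

# Two hypothetical "epicentral zones" distinguished by dominant frequency:
train = {"zone_A": make_event(2.0), "zone_B": make_event(8.0)}
centroids = {zone: zero_crossings(sig) for zone, sig in train.items()}

def classify(signal):
    """Nearest-centroid recognition in the 1-D feature space."""
    f = zero_crossings(signal)
    return min(centroids, key=lambda z: abs(centroids[z] - f))

print(classify(make_event(7.0)))  # → zone_B
```

The thesis uses far richer features (images of time-frequency transforms), but the recognize-by-nearest-prototype structure is the same.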

  17. A comparison among observations and earthquake simulator results for the allcal2 California fault model

    Science.gov (United States)

    Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak

    2012-01-01

    In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately the ∼100‐year instrumental, several‐hundred‐year historical, and few‐thousand‐year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics‐based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators’ realism. The simulators appear to pass this necessary test. In addition, the physics‐based simulators show similar behavior even though there are large differences in their methodologies. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault‐system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards‐Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all‐California fault

  18. Lake sediments as natural seismographs: Earthquake-related deformations (seismites) in central Canadian lakes

    Science.gov (United States)

    Doughty, M.; Eyles, N.; Eyles, C. H.; Wallace, K.; Boyce, J. I.

    2014-11-01

    Central Canada experiences numerous intraplate earthquakes, but their recurrence and source areas remain obscure due to the shortness of the instrumental and historic records. Unconsolidated fine-grained sediments in lake basins are 'natural seismographs' with the potential to record ancient earthquakes during the last 10,000 years since the retreat of the Laurentide Ice Sheet. Many lake basins are cut into bedrock and are structurally controlled by the same Precambrian basement structures (shear zones, terrane boundaries, and other lineaments) implicated as the source of ongoing mid-plate earthquake activity. A regional seismic sub-bottom profiling survey of lakes Gull, Muskoka, Joseph, Rousseau, Ontario, Wanapitei, Fairbanks, Vermilion, Nipissing, Georgian Bay, Mazinaw, Simcoe, Timiskaming, Kipawa, Parry Sound, and Lake of Bays, encompassing more than 2000 kilometres of high-resolution track-line data supplemented by multibeam and sidescan sonar records, shows a consistent sub-bottom stratigraphy: a relatively thick lowermost lateglacial facies composed of interbedded semi-transparent mass flow facies (debrites, slumps) and rhythmically laminated silty clays. Mass flows, together with cratered ('kettled') lake floors and associated deformations, reflect a dynamic ice-contact glaciolacustrine environment. Exceptionally thick mass flow successions in Lake Timiskaming, along the floor of the Timiskaming Graben within the seismically active Western Quebec Seismic Zone (WQSZ), point to a higher frequency of earthquakes and slope failure during deglaciation and rapid glacio-isostatic rebound, though faulting continues into the postglacial. Lateglacial faulting, diapiric deformation, and slumping of coeval lateglacial sediments are observed in Parry Sound, Lake Muskoka, and Lake Joseph, which are all located above prominent Precambrian terrane boundaries. Lateglacial sediments are sharply overlain by relatively thin, rhythmically laminated and often semi

  19. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency, a measure of network interconnectedness, decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that the observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures can account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
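The core notion of a "functionally feedforward trajectory" in a recurrent network (time read out from which unit is currently active) can be caricatured in a few lines. This toy linear chain is not the trained E/I network of the paper; it is merely a schematic of how a chain embedded in a recurrent weight matrix yields sequential activation.

```python
def run_chain(n=6, steps=6):
    """Minimal caricature: a chain of excitatory links embedded in an
    n-by-n recurrent weight matrix makes units activate one after
    another, so elapsed time maps onto the index of the active unit."""
    # recurrent weights: unit i drives unit i+1; all other entries zero
    W = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]
    x = [1.0] + [0.0] * (n - 1)          # kick the first unit
    active_sequence = []
    for _ in range(steps):
        active_sequence.append(max(range(n), key=lambda j: x[j]))
        x = [sum(W[i][j] * x[i] for i in range(n)) for j in range(n)]
    return active_sequence

print(run_chain())  # units activate in order: time encoded by position
```

In the actual model the chain is not hand-wired; sequential structure emerges from training and from dynamic shifts in the E/I balance, which this sketch does not capture.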

  20. Electrostatically actuated resonant switches for earthquake detection

    KAUST Repository

    Ramini, Abdallah H.

    2013-04-01

    The modeling and design of electrostatically actuated resonant switches (EARS) for earthquake and seismic applications are presented. The basic concept is to operate an electrically actuated resonator close to instability bands of frequency, where it is forced to collapse (pull in) if driven within these bands. By careful tuning, the resonator can be made to enter the instability zone upon detection of an earthquake signal, thereby pulling in as a switch. Such a switching action can serve useful functions, such as shutting off gas pipelines during earthquakes, or activating a network of sensors for recording seismic activity in health monitoring applications. By placing the resonator on a printed circuit board (PCB) whose natural frequency is close to the earthquake's frequency, we show significant improvement in the detection limit of the EARS, lowering it to less than 60% of that of the EARS alone without the PCB. © 2013 IEEE.

  1. Modeling the poroelastic response to megathrust earthquakes: A look at the 2012 Mw 7.6 Costa Rican event

    Science.gov (United States)

    McCormack, Kimberly A.; Hesse, Marc A.

    2018-04-01

    We model the subsurface hydrologic response to the Mw 7.6 subduction zone earthquake that occurred on the plate interface beneath the Nicoya Peninsula in Costa Rica on September 5, 2012. The regional-scale poroelastic model of the overlying plate integrates seismologic, geodetic, and hydrologic data sets to predict the post-seismic poroelastic response. A representative two-dimensional model shows that thrust earthquakes with a slip width less than a third of their depth produce complex multi-lobed pressure perturbations in the shallow subsurface. This leads to multiple poroelastic relaxation timescales that may overlap with the longer viscoelastic timescales. In the three-dimensional model, the complex slip distribution of the 2012 Nicoya event and its small width-to-depth ratio lead to a pore pressure distribution comprising multiple trench-parallel ridges of high and low pressure. This produces complex groundwater flow patterns, non-monotonic variations in predicted well water levels, and poroelastic relaxation on multiple timescales. The model also predicts significant tectonically driven submarine groundwater discharge offshore. In the weeks following the earthquake, the predicted net submarine groundwater discharge in the study area increases, creating a 100-fold increase in net discharge relative to topography-driven flow over the first 30 days. Our model suggests the hydrological response on land is more complex than typically acknowledged in tectonic studies. This may complicate the interpretation of transient post-seismic surface deformations. Combined tectonic-hydrological observation networks have the potential to reduce such ambiguities.

  2. Twitter earthquake detection: Earthquake monitoring in a social world

    Science.gov (United States)

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average (STA/LTA) algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
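The detection step described above, a short-term-average over long-term-average (STA/LTA) trigger applied to a tweet-frequency time series, can be sketched as follows. The window lengths, threshold, and tweet counts are illustrative assumptions, not the tuned USGS operating values; the real system also applies geolocation and further filtering.

```python
def sta_lta_triggers(counts, sta_len=2, lta_len=60, threshold=10.0):
    """Flag minutes where the short-term average of per-minute
    "earthquake" tweet counts exceeds `threshold` times the
    long-term average of the preceding window."""
    triggers = []
    for i in range(lta_len + sta_len, len(counts) + 1):
        lta = sum(counts[i - lta_len - sta_len:i - sta_len]) / lta_len
        sta = sum(counts[i - sta_len:i]) / sta_len
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i - 1)   # index of last minute in STA window
    return triggers

# Quiet background (about 1 tweet/min) followed by a sudden burst,
# mimicking the Morgan Hill-style jump to ~150 tweets/min:
series = [0] * 60 + [1] * 2 + [150, 140, 120]
print(sta_lta_triggers(series))  # → [62, 63, 64]
```

The ratio form makes the trigger insensitive to slow drifts in overall Twitter traffic, which is the reason STA/LTA is preferred over a fixed count threshold.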

  3. Quantitative prediction of strong motion for a potential earthquake fault

    Directory of Open Access Journals (Sweden)

    Shamita Das

    2010-02-01

    Full Text Available This paper describes a new method for calculating strong motion records for a given seismic region on the basis of the laws of physics, using information on the tectonics and physical properties of the earthquake fault. Our method is based on an earthquake model, called a «barrier model», which is characterized by five source parameters: fault length, width, maximum slip, rupture velocity, and barrier interval. The first three parameters may be constrained from plate tectonics, and the fourth parameter is roughly a constant. The most important parameter controlling earthquake strong motion is the last one, the «barrier interval». There are three methods to estimate the barrier interval for a given seismic region: 1) surface measurement of slip across fault breaks, 2) model fitting with observed near- and far-field seismograms, and 3) scaling-law data for small earthquakes in the region. The barrier intervals were estimated for a dozen earthquakes and four seismic regions by the above three methods. Our preliminary results for California suggest that the barrier interval may be determined if the maximum slip is given. The relation between the barrier interval and maximum slip varies from one seismic region to another. For example, the interval appears to be unusually long for Kilauea, Hawaii, which may explain why only scattered evidence of strong ground shaking was observed in the epicentral area of the Island of Hawaii earthquake of November 29, 1975. The stress drop associated with an individual fault segment, estimated from the barrier interval and maximum slip, lies between 100 and 1000 bars. These values are about one order of magnitude greater than those estimated earlier by the use of crack models without barriers. Thus, the barrier model can resolve, at least partially, the well known discrepancy between the stress drops measured in the laboratory and those estimated for earthquakes.
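The final step, estimating a stress drop from the maximum slip and the barrier interval, amounts to a crack-type scaling relation in which the barrier interval plays the role of the segment dimension. A minimal sketch, assuming a typical crustal rigidity and an O(1) geometric factor (both our assumptions, not values from the paper):

```python
MU = 3.0e10   # crustal rigidity, Pa (typical assumed value)

def stress_drop_bars(max_slip_m, barrier_interval_m, c=1.0):
    """Order-of-magnitude stress drop on an individual fault segment,
    taking the barrier interval as the segment dimension:
    delta_sigma ~ c * mu * D_max / L.  The geometric factor c depends
    on the crack model; c = 1 is an illustrative choice."""
    delta_sigma_pa = c * MU * max_slip_m / barrier_interval_m
    return delta_sigma_pa / 1.0e5   # 1 bar = 1e5 Pa

print(stress_drop_bars(2.0, 5000.0))  # → 120.0
```

With 2 m of maximum slip over a 5 km barrier interval, this illustrative choice of constants gives about 120 bars, inside the 100-1000 bar range quoted in the abstract; using the whole fault length instead of the (shorter) barrier interval would lower the estimate by roughly an order of magnitude, which is the discrepancy the barrier model addresses.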

  4. Earthquake prediction theory and its relation to precursors

    International Nuclear Information System (INIS)

    Negarestani, A.; Setayeshi, S.; Ghannadi-Maragheh, M.; Akasheh, B.

    2001-01-01

    Since we do not have enough knowledge about the physics of earthquakes, the study of seismic precursors plays an important role in earthquake prediction. Earthquake prediction is a science that discusses precursory phenomena during the seismogenic process, investigates the correlations and associations among them and the intrinsic relation between precursors and the seismogenic process, judges the seismic status comprehensively, and finally makes a prediction. There are two ways to predict earthquakes. The first is to study the physics of the seismogenic process and to determine the parameters of that process based on source theories; the second is to use seismic precursors. In this paper the theory of earthquakes is reviewed. We also study earthquake theory using models of earthquake origin and the relation between the seismogenic process and the various accompanying precursory phenomena. Earthquake prediction is divided into three categories: long-term, medium-term, and short-term. We study anomalous seismic behavior, the electric field, crustal deformation, gravity, the Earth's magnetism, changes in groundwater level, groundwater geochemistry, and changes in radon gas emission. Finally, it is concluded that there is a correlation between radon gas emission and earthquake phenomena. Meanwhile, some examples from actual processing in this area are presented

  5. Time history nonlinear earthquake response analysis considering materials and geometrical nonlinearity

    International Nuclear Information System (INIS)

    Kobayashi, T.; Yoshikawa, K.; Takaoka, E.; Nakazawa, M.; Shikama, Y.

    2002-01-01

    A time history nonlinear earthquake response analysis method was proposed and applied to earthquake response prediction analysis for the Large Scale Seismic Test (LSST) Program in Hualien, Taiwan, in which a 1/4-scale model of a nuclear reactor containment structure was constructed on a sandy gravel layer. The analysis considered both strain-dependent material nonlinearity and geometrical nonlinearity due to base mat uplift. The 'Lattice Model' was employed for the soil-structure interaction model. An earthquake record on the soil surface at the site was used as the control motion and deconvoluted to the input motion of the analysis model at GL-52 m, with a maximum acceleration of 300 Gal. Two analyses were compared: (A) time history nonlinear and (B) equivalent linear, and the advantage of the time history nonlinear earthquake response analysis method is discussed

  6. Ground water and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ts'ai, T. H.

    1977-11-01

    Chinese folk wisdom has long seen a relationship between ground water and earthquakes. Before an earthquake there is often an unusual change in the ground water level and volume of flow. Changes in the amount of particulate matter in ground water as well as changes in color, bubbling, gas emission, and noises and geysers are also often observed before earthquakes. Analysis of these features can help predict earthquakes. Other factors unrelated to earthquakes can cause some of these changes, too. As a first step it is necessary to find sites which are sensitive to changes in ground stress to be used as sensor points for predicting earthquakes. The necessary features are described. Recording of seismic waves of earthquake aftershocks is also an important part of earthquake predictions.

  7. Numerical modeling of possible lower ionospheric anomalies associated with Nepal earthquake in May, 2015

    Science.gov (United States)

    Chakraborty, Suman; Sasmal, Sudipta; Basak, Tamal; Ghosh, Soujan; Palit, Sourav; Chakrabarti, Sandip K.; Ray, Suman

    2017-10-01

    We present perturbations due to seismo-ionospheric coupling processes in the propagation characteristics of sub-ionospheric Very Low Frequency (VLF) signals received at the Ionospheric & Earthquake Research Centre (IERC) (Lat. 22.50°N, Long. 87.48°E), India. The study covers the period during and prior to an earthquake of Richter-scale magnitude M = 7.3 that occurred at a depth of 18 km southeast of Kodari, Nepal, on 12 May 2015 at 12:35:19 IST (07:05:19 UT). The recorded VLF signal of the Japanese transmitter JJI at 22.2 kHz (Lat. 32.08°N, Long. 130.83°E) shows strong shifts of the sunrise and sunset terminator times towards nighttime, starting three to four days prior to the earthquake. The signal shows a similar variation in terminator times during a major aftershock of magnitude M = 6.7 on 16 May 2015 at 17:04:10 IST (11:34:10 UT). These shifts in terminator times are numerically modeled using the Long Wavelength Propagation Capability (LWPC) program. The unperturbed VLF signal is simulated using the day and night variation of the reflection height (h′) and steepness parameter (β) fed into LWPC for the entire path. The perturbed signal is obtained by additionally varying these parameters inside the earthquake preparation zone. It is found that the shift of the terminator time towards nighttime occurs only when the reflection height is increased. We also calculate the electron density profile using Wait's exponential formula for specified locations over the propagation path.
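Wait's two-parameter exponential profile used in the last step has a standard closed form, Ne(z) = 1.43×10^13 exp(−0.15 h′) exp[(β − 0.15)(z − h′)] electrons/m³ (Wait and Spies, 1964), with h′ in km and β in km⁻¹. A direct transcription; the sample h′ and β are typical unperturbed daytime values, not the parameters fitted in this study:

```python
import math

def wait_electron_density(z_km, h_prime_km, beta_km):
    """Wait's two-parameter exponential D-region profile: electron
    density Ne (electrons/m^3) at altitude z for reflection height h'
    (km) and sharpness parameter beta (km^-1)."""
    return 1.43e13 * math.exp(-0.15 * h_prime_km) \
                   * math.exp((beta_km - 0.15) * (z_km - h_prime_km))

# Typical unperturbed daytime values h' = 71 km, beta = 0.39 /km
# (illustrative inputs only):
print(wait_electron_density(71.0, 71.0, 0.39))
```

Raising h′ (the modeled cause of the terminator-time shift) lowers Ne at a fixed altitude, which is how the fitted LWPC parameters translate into a physical ionospheric perturbation.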

  8. Real-Time Earthquake Monitoring with Spatio-Temporal Fields

    Science.gov (United States)

    Whittier, J. C.; Nittel, S.; Subasinghe, I.

    2017-10-01

    With live streaming sensors and sensor networks, increasingly large numbers of individual sensors are deployed in physical space. Sensor data streams are a fundamentally novel mechanism to deliver observations to information systems. They enable us to represent spatio-temporal continuous phenomena such as radiation accidents, toxic plumes, or earthquakes almost as instantaneously as they happen in the real world. Sensor data streams discretely sample an earthquake, while the earthquake is continuous over space and time. Programmers attempting to integrate many streams to analyze earthquake activity and scope must write tedious application code to integrate potentially very large sets of asynchronously sampled, concurrent streams. In previous work, we proposed the field stream data model (Liang et al., 2016) for data stream engines. Abstracting the stream of an individual sensor as a temporal field, the field represents the Earth's movement at the sensor position as continuous. This simplifies analysis across many sensors significantly. In this paper, we undertake a feasibility study of using the field stream model and the open source data stream engine (DSE) Apache Spark (Apache Spark, 2017) to implement real-time earthquake event detection with a subset of the 250 GPS sensor data streams of the Southern California Integrated GPS Network (SCIGN). The field-based real-time stream queries compute maximum displacement values over the latest query window of each stream and relate spatially neighboring streams to identify earthquake events and their extent. Further, we correlate the detected events with a USGS earthquake event feed. The query results are visualized in real time.
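A minimal sketch of the query pattern described above: keep each sensor's latest window, query its maximum displacement, and flag an event when several streams jointly exceed a threshold. This stands in for the Spark/SCIGN implementation; the class, sensor IDs, window length, threshold, and two-sensor quorum are all illustrative assumptions.

```python
from collections import defaultdict, deque

class FieldStreams:
    """Toy version of the field-stream idea: each GPS stream is a short
    sliding window that can be queried for its maximum displacement."""
    def __init__(self, window=5):
        self.streams = defaultdict(lambda: deque(maxlen=window))

    def push(self, sensor, displacement_mm):
        self.streams[sensor].append(displacement_mm)

    def window_max(self, sensor):
        buf = self.streams[sensor]
        return max(buf) if buf else 0.0

    def event_detected(self, threshold_mm=20.0, quorum=2):
        """Declare an event when at least `quorum` streams exceed the
        displacement threshold within their latest window."""
        exceed = [s for s in self.streams
                  if self.window_max(s) >= threshold_mm]
        return len(exceed) >= quorum, exceed

fs = FieldStreams()
for t in range(5):                       # quiet background
    fs.push("P001", 1.0); fs.push("P002", 1.5)
fs.push("P001", 35.0); fs.push("P002", 28.0)   # coseismic offsets arrive
print(fs.event_detected())  # → (True, ['P001', 'P002'])
```

The quorum requirement on neighboring streams is what distinguishes a spatially coherent earthquake signal from a single-receiver glitch; in the real system the neighbor relation is geographic, not just "any sensor" as in this sketch.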

  9. Risk assessment models to predict caries recurrence after oral rehabilitation under general anaesthesia: a pilot study.

    Science.gov (United States)

    Lin, Yai-Tin; Kalhan, Ashish Chetan; Lin, Yng-Tzer Joseph; Kalhan, Tosha Ashish; Chou, Chein-Chin; Gao, Xiao Li; Hsu, Chin-Ying Stephen

    2018-05-08

    Oral rehabilitation under general anaesthesia (GA), commonly employed to treat high caries-risk children, has been associated with high economic and individual/family burden, as well as high post-GA caries recurrence rates. As there is no caries prediction model available for paediatric GA patients, this study was performed to build caries risk assessment/prediction models using pre-GA data and to explore mid-term prognostic factors for early identification of high-risk children prone to caries relapse after post-GA oral rehabilitation. Ninety-two children were identified and recruited with parental consent before oral rehabilitation under GA. Biopsychosocial data collection at baseline and the 6-month follow-up was conducted using a questionnaire (Q), microbiological assessment (M), and clinical examination (C). The prediction models constructed using data collected from Q, Q + M, and Q + M + C demonstrated accuracies of 72%, 78%, and 82%, respectively. Furthermore, of the 83 (90.2%) patients recalled 6 months after the GA intervention, recurrent caries was identified in 54.2%, together with reduced bacterial counts, a lower plaque index, and an increased percentage of children toothbrushing for themselves (all P < 0.05). Additionally, meal-time and toothbrushing duration were shown, through bivariate analyses, to be significant prognostic determinants for caries recurrence (both P < 0.05). Risk assessment/prediction models built using pre-GA data may be promising in identifying high-risk children prone to post-GA caries recurrence, although future internal and external validation of the predictive models is warranted. © 2018 FDI World Dental Federation.

  10. The default mode network and recurrent depression: a neurobiological model of cognitive risk factors.

    Science.gov (United States)

    Marchetti, Igor; Koster, Ernst H W; Sonuga-Barke, Edmund J; De Raedt, Rudi

    2012-09-01

    A neurobiological account of cognitive vulnerability for recurrent depression is presented based on recent developments of resting state neural networks. We propose that alterations in the interplay between task positive (TP) and task negative (TN) elements of the Default Mode Network (DMN) act as a neurobiological risk factor for recurrent depression mediated by cognitive mechanisms. In this framework, depression is characterized by an imbalance between TN-TP components leading to an overpowering of TP by TN activity. The TN-TP imbalance is associated with a dysfunctional internally-focused cognitive style as well as a failure to attenuate TN activity in the transition from rest to task. Thus we propose the TN-TP imbalance as an overarching neural mechanism involved in crucial cognitive risk factors for recurrent depression, namely rumination, impaired attentional control, and cognitive reactivity. During remission, the TN-TP imbalance persists, predisposing individuals to recurrent depression. Empirical data supporting this model are reviewed. Finally, we specify how this framework can guide future research efforts.

  11. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    Science.gov (United States)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
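
The ranking idea — score each submitted source model by a waveform-misfit criterion against reference data, then order the solutions — reduces to a few lines. This is a toy normalized L2 misfit, not one of the SIV project's actual criteria; the signals and model names are invented:

```python
import math

def l2_misfit(obs, syn):
    """Normalized L2 waveform misfit between two equal-length traces."""
    num = sum((o - s) ** 2 for o, s in zip(obs, syn))
    den = sum(o ** 2 for o in obs)
    return math.sqrt(num / den)

def rank_models(obs, candidates):
    """Rank candidate source models from best to worst by waveform misfit.

    Returns the ordered list of model names and the misfit scores."""
    scores = {name: l2_misfit(obs, syn) for name, syn in candidates.items()}
    return sorted(scores, key=scores.get), scores
```

In practice the SIV comparisons also use statistical tools beyond a single misfit number, but the ranking step has this shape.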

  12. Multiple faulting events revealed by trench analysis of the seismogenic structure of the 1976 Ms7.1 Luanxian earthquake, Tangshan Region, China

    Science.gov (United States)

    Guo, Hui; Jiang, Wali; Xie, Xinsheng

    2017-10-01

    The Ms7.8 Tangshan earthquake occurred on 28 July 1976 at 03:42 CST. Approximately 15 h later, the Ms7.1 Luanxian earthquake occurred approximately 40 km northeast of the main shock. The two earthquakes formed different surface rupture zones. The surface rupture of the Tangshan earthquake was NNE-trending and more than 47 km long. The surface rupture of the Luanxian earthquake was more than 6 km long and consisted of two sections, forming a protruding arc to the west. The north and south sections were NE- and NW-trending and 2 km and 4 km long, respectively. A trench was excavated in Sanshanyuan Village across the NE-trending rupture of the Luanxian earthquake, at the macroscopic epicenter of the Luanxian earthquake. Analysis of this trench revealed that the surface rupture is connected to the underground active fault. The following major conclusions regarding Late Quaternary fault activity have been reached. (1) The Sanshanyuan trench indicated that its fault planes trend NE30° and dip SE or NW at angles of approximately 69-82°. (2) The fault experienced four faulting events prior to the Luanxian earthquake since 27.98 ka, with an average recurrence interval of approximately 7.5 ka. (3) The Ms7.1 Luanxian earthquake resulted from the activity of the Luanxian Western fault and was triggered by the Ms7.8 Tangshan earthquake. The seismogenic faults of the 1976 Ms7.1 Luanxian earthquake and the 1976 Ms7.8 Tangshan earthquake are not the same fault. This example of an M7 earthquake triggered by a nearly-M8 earthquake more than 10 h later on a nearby fault is a worthy topic of research for the future prediction of strong earthquakes.

  13. A record of large earthquakes during the past two millennia on the southern Green Valley Fault, California

    Science.gov (United States)

    Lienkaemper, James J.; Baldwin, John N.; Turner, Robert; Sickler, Robert R.; Brown, Johnathan

    2013-01-01

    We document evidence for surface-rupturing earthquakes (events) at two trench sites on the southern Green Valley fault, California (SGVF). The 75-80-km-long dextral SGVF creeps at ~1-4 mm/yr. We identify stratigraphic horizons disrupted by upward-flowering shears and in-filled fissures unlikely to have formed from creep alone. The Mason Rd site exhibits four events from ~1013 CE to the present. The Lopes Ranch site (LR, 12 km to the south) exhibits three events from 18 BCE to the present, including the most recent event (MRE), 1610 ±52 yr CE (1σ), and a two-event interval (18 BCE-238 CE) isolated by a millennium of low deposition. Using OxCal to model the timing of the 4-event earthquake sequence from radiocarbon data and the LR MRE yields a mean recurrence interval (RI or μ) of 199 ±82 yr (1σ) and ±35 yr (standard error of the mean), the first based on geologic data. The time since the most recent earthquake (open window since MRE) is 402 ±52 yr, well past μ ~200 yr. The shape of the probability density function (pdf) of the average RI from OxCal resembles a Brownian Passage Time (BPT) pdf (rather than a normal one) that permits rarer, longer ruptures potentially involving the Berryessa and Hunting Creek sections of the northernmost GVF. The model coefficient of variation (cv, σ/μ) is 0.41, but a larger value (cv ~0.6) fits better when using BPT. A BPT pdf with μ of 250 yr and cv of 0.6 yields 30-yr rupture probabilities of 20-25%, versus a Poisson probability of 11-17%.
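
The BPT-versus-Poisson comparison at the end of this abstract can be reproduced approximately. The BPT distribution is the inverse Gaussian with mean μ and shape λ = μ/α², where the aperiodicity α plays the role of the cv; a conditional 30-yr probability then follows from its CDF. The sketch below uses μ = 250 yr, α = 0.6, and an open window of 402 yr as quoted above; it is an illustration, not the authors' OxCal computation:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time (inverse-Gaussian) distribution
    with mean mu and aperiodicity alpha (shape lam = mu / alpha**2)."""
    lam = mu / alpha ** 2
    a = math.sqrt(lam / t)
    return (norm_cdf(a * (t / mu - 1.0))
            + math.exp(2.0 * lam / mu) * norm_cdf(-a * (t / mu + 1.0)))

def conditional_prob(elapsed, window, mu, alpha):
    """P(rupture within `window` yr | no rupture for `elapsed` yr)."""
    f_t = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f_t) / (1.0 - f_t)

# mu = 250 yr, cv ~0.6, and an open window of 402 yr, as in the abstract
p_bpt = conditional_prob(elapsed=402.0, window=30.0, mu=250.0, alpha=0.6)
p_poisson = 1.0 - math.exp(-30.0 / 250.0)   # time-independent comparison
```

With these inputs the conditional BPT probability comes out near 20% and the Poisson probability near 11%, consistent with the 20-25% and 11-17% ranges quoted above.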

  14. Time-predictable model application in probabilistic seismic hazard analysis of faults in Taiwan

    Directory of Open Access Journals (Sweden)

    Yu-Wen Chang

    2017-01-01

    Full Text Available Given the probability distribution function relating the recurrence interval to the occurrence time of the previous event on a fault, a time-dependent model of a particular fault for seismic hazard assessment was developed that takes into account the fault's cyclic rupture characteristics over a given lifetime up to the present. The Gutenberg and Richter (1944) exponential frequency-magnitude relation is used to describe the earthquake recurrence rate for a regional source, and serves as a reference for a composite procedure that models the occurrence rate of large earthquakes on a fault when activity information is scarce. The time-dependent model was used to describe characteristic fault behavior. The seismic hazard contributions from all sources, including both time-dependent and time-independent models, were then added together to obtain annual total lifetime hazard curves. The effects of time-dependent and time-independent fault models [e.g., Brownian passage time (BPT) and Poisson, respectively] on hazard calculations are also discussed. The results of the proposed fault model show that the seismic demands of near-fault areas are lower than current hazard estimates when the time-dependent model is used on those faults, particularly for faults whose elapsed time since the last event is short (such as the Chelungpu fault).
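
The Gutenberg-Richter relation referred to here is log₁₀ N(≥M) = a − bM, where N is the annual number of events at or above magnitude M. A minimal sketch with illustrative a and b values (not Taiwan-specific ones):

```python
def gr_annual_rate(mag, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= mag under Gutenberg-Richter:
    log10 N = a - b*M. The a and b values here are illustrative."""
    return 10.0 ** (a - b * mag)

def gr_return_period(mag, a=4.0, b=1.0):
    """Mean recurrence interval (years) implied by the G-R rate."""
    return 1.0 / gr_annual_rate(mag, a, b)
```

With a = 4 and b = 1, M ≥ 5 events occur about 0.1 times per year (a 10-yr return period), while M ≥ 7 events recur on the order of 1000 yr, which is why fault-specific time-dependent models matter most for the large, rare events.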

  15. Seismicity and seismic hazard in Sabah, East Malaysia from earthquake and geodetic data

    Science.gov (United States)

    Gilligan, A.; Rawlinson, N.; Tongkul, F.; Stephenson, R.

    2017-12-01

    While the levels of seismicity are low in most of Malaysia, the state of Sabah in northern Borneo has moderate levels of seismicity. Notable earthquakes in the region include the 1976 M6.2 Lahad Datu earthquake and the 2015 M6 Ranau earthquake. The recent Ranau earthquake resulted in the deaths of 18 people on Mt Kinabalu, an estimated 100 million RM (~US$23 million) of damage to buildings, roads, and infrastructure from shaking, and flooding, reduced water quality, and damage to farms from landslides. Over the last 40 years the population of Sabah has increased to over four times what it was in 1976, yet seismic hazard in Sabah remains poorly understood. Using seismic and geodetic data we hope to better quantify the hazards posed by earthquakes in Sabah, and thus help to minimize risk. In order to do this we need to know the locations of earthquakes, the types of earthquakes that occur, and the faults that are generating them. We use data from 15 MetMalaysia seismic stations currently operating in Sabah to develop a region-specific velocity model from receiver functions and a pre-existing surface wave model. We use this new velocity model to (re)locate earthquakes that occurred in Sabah from 2005-2016, including a large number of aftershocks from the 2015 Ranau earthquake. We use a probabilistic nonlinear earthquake location program to locate the earthquakes and then refine their relative locations using a double-difference method. The recorded waveforms are further used to obtain moment tensor solutions for these earthquakes. Earthquake locations and moment tensor solutions are then compared with the locations of faults throughout Sabah. Faults are identified from high-resolution IFSAR images and subsequent fieldwork, with a particular focus on the Lahad Datu and Ranau areas. Used together, these seismic and geodetic data can help us to develop a new seismic hazard model for Sabah, as well as aiding in the delivery of outreach activities regarding seismic hazard.

  16. Charles Darwin's earthquake reports

    Science.gov (United States)

    Galiev, Shamil

    2010-05-01

    problems which began to be discussed only recently. Earthquakes often precede volcanic eruptions. According to Darwin, the earthquake-induced shock may be a common mechanism of the simultaneous eruptions of volcanoes separated by long distances. In particular, Darwin wrote that ‘… the elevation of many hundred square miles of territory near Concepcion is part of the same phenomenon, with that splashing up, if I may so call it, of volcanic matter through the orifices in the Cordillera at the moment of the shock;…'. According to Darwin, the crust is a system in which fractured zones and zones of seismic and volcanic activity interact. Darwin formulated the task of considering together the processes now studied as seismology and volcanology. However, the difficulties are such that the study of interactions between earthquakes and volcanoes began only recently, and his works on this had relatively little impact on the development of the geosciences. In this report, we discuss how the latest data on seismic and volcanic events support Darwin's observations and ideas about the 1835 Chilean earthquake. The material from researchspace.auckland.ac.nz/handle/2292/4474 is used. We show how modern mechanical tests from impact engineering and simple experiments with weakly-cohesive materials also support his observations and ideas. On the other hand, we developed a mathematical theory of earthquake-induced catastrophic wave phenomena. This theory allows us to explain the most important aspects of Darwin's earthquake reports. This is achieved through the simplification of the fundamental governing equations of the problems considered into strongly-nonlinear wave equations. Solutions of these equations are constructed with the help of analytic and numerical techniques. The solutions can model different strongly-nonlinear wave phenomena which arise in a variety of physical contexts. A comparison with relevant experimental observations is also presented.

  17. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use the observed results of microseisms at many seismic stations of the world, which were established to study the time series of the activities of the continental crust with a view to predicting the possible time of occurrence of earthquakes. We consider microseism time series ...

  18. Global observation of Omori-law decay in the rate of triggered earthquakes

    Science.gov (United States)

    Parsons, T.

    2001-12-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts between ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted.
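
The Omori-law decay reported here has the standard form n(t) = K/(t + c)^p; integrating the rate gives the expected number of triggered events in any interval. A small sketch with illustrative parameters (not values fitted to the CMT catalog):

```python
import math

def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Omori-law triggered-event rate n(t) = K / (t + c)**p.
    K, c, p are illustrative, not fitted values."""
    return K / (t + c) ** p

def omori_count(t1, t2, K=100.0, c=0.1, p=1.0):
    """Expected number of triggered events between times t1 and t2,
    from the closed-form integral of the rate (log form when p = 1)."""
    if abs(p - 1.0) < 1e-12:
        return K * math.log((t2 + c) / (t1 + c))
    return K / (1.0 - p) * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p))
```

The p = 1 special case is why the cumulative count grows logarithmically: most triggered events arrive shortly after the main shock, with a long tail that here would persist for the ~7-11 years the abstract reports.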

  19. A model of return intervals between earthquake events

    Science.gov (United States)

    Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger

    2016-06-01

    Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of the southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical for anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to the correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
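
The simulation model summarized in the last two sentences (renewal mainshocks with power-law waiting times, aftershocks as a nonhomogeneous Poisson process with an Omori rate) can be sketched directly. The Pareto sampler, the thinning step, and every parameter value below are illustrative choices, not the authors' settings:

```python
import random

def pareto_waiting_time(rng, alpha=1.5, t_min=1.0):
    """Inverse-transform sample of a power-law (Pareto) waiting time with
    tail exponent alpha > 1 and minimum interval t_min."""
    u = 1.0 - rng.random()          # uniform in (0, 1], avoids u = 0
    return t_min * u ** (-1.0 / (alpha - 1.0))

def omori_aftershocks(rng, t0, K=5.0, c=0.5, p=1.2, horizon=50.0):
    """Nonhomogeneous Poisson aftershocks after a mainshock at t0,
    generated by thinning against the maximum rate K / c**p."""
    rate_max = K / c ** p
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)
        if t > horizon:
            return times
        if rng.random() < (K / (t + c) ** p) / rate_max:
            times.append(t0 + t)

def simulate_catalog(rng, n_mainshocks=20):
    """Renewal mainshocks plus Omori-rate aftershock clusters."""
    mains, t = [], 0.0
    for _ in range(n_mainshocks):
        t += pareto_waiting_time(rng)
        mains.append(t)
    afters = [ta for tm in mains for ta in omori_aftershocks(rng, tm)]
    return sorted(mains), sorted(afters)
```

Reshuffling experiments of the kind described amount to permuting one of these two ingredients (mainshock waiting times or cluster structure) and rerunning the scaling analyses.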

  20. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming to view the earthquake phenomenon within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes), established by looking for a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a strong new theoretical scientific effort is necessary to try to understand the physics of the earthquake.

  1. A bivariate model for analyzing recurrent multi-type automobile failures

    Science.gov (United States)

    Sunethra, A. A.; Sooriyarachchi, M. R.

    2017-09-01

    The failure mechanism in an automobile can be defined as a system of multi-type recurrent failures where failures can occur due to various multi-type failure modes and these failures are repetitive such that more than one failure can occur from each failure mode. In analysing such automobile failures, both the time and type of the failure serve as response variables. However, these two response variables are highly correlated with each other since the timing of failures has an association with the mode of the failure. When there is more than one correlated response variable, the fitting of a multivariate model is preferable to separate univariate models. Therefore, a bivariate model of time and type of failure becomes appealing for such automobile failure data. When there are multiple failure observations pertaining to a single automobile, such data cannot be treated as independent because failure instances of a single automobile are correlated with each other, while failures among different automobiles can be treated as independent. Therefore, this study proposes a bivariate model consisting of time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared parameter models and random effects models for joining the responses and for representing the correlated data, respectively. The proposed model is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions that were suitable for the two responses of time to failure and type of failure were the Weibull distribution and the multinomial distribution, respectively. The proposed bivariate model was programmed in the SAS procedure PROC NLMIXED by specifying appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted for the two responses, and it was identified that better performance is secured by the proposed bivariate model.

  2. Recurrent Neural Network Model for Constructive Peptide Design.

    Science.gov (United States)

    Müller, Alex T; Hiss, Jan A; Schneider, Gisbert

    2018-02-26

    We present a generative long short-term memory (LSTM) recurrent neural network (RNN) for combinatorial de novo peptide design. RNN models capture patterns in sequential data and generate new data instances from the learned context. Amino acid sequences represent a suitable input for these machine-learning models. Generative models trained on peptide sequences could therefore facilitate the design of bespoke peptide libraries. We trained RNNs with LSTM units on pattern recognition of helical antimicrobial peptides and used the resulting model for de novo sequence generation. Of these sequences, 82% were predicted to be active antimicrobial peptides compared to 65% of randomly sampled sequences with the same amino acid distribution as the training set. The generated sequences also lie closer to the training data than manually designed amphipathic helices. The results of this study showcase the ability of LSTM RNNs to construct new amino acid sequences within the applicability domain of the model and motivate their prospective application to peptide and protein design without the need for the exhaustive enumeration of sequence libraries.

  3. Source Mechanism of May 30, 2015 Bonin Islands, Japan Deep Earthquake (Mw7.8) Estimated by Broadband Waveform Modeling

    Science.gov (United States)

    Tsuboi, S.; Nakamura, T.; Miyoshi, T.

    2015-12-01

    The May 30, 2015 Bonin Islands, Japan earthquake (Mw 7.8, depth 679.9 km, GCMT) was one of the deepest earthquakes ever recorded. We apply the waveform inversion technique (Kikuchi & Kanamori, 1991) to obtain the slip distribution on the source fault of this earthquake in the same manner as our previous work (Nakamura et al., 2010). We use 60 broadband seismograms from IRIS GSN seismic stations with epicentral distances between 30 and 90 degrees. The broadband original data are integrated into ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. We use the velocity structure model IASP91 to calculate the wavefield near the source and stations. We assume that the fault is square, with a length of 50 km. We obtain source rupture models for both nodal planes, with high dip angle (74 degrees) and low dip angle (26 degrees), and compare the synthetic seismograms with the observations to determine which source rupture model explains the observations better. We calculate broadband synthetic seismograms for these source propagation models using the spectral-element method (Komatitsch & Tromp, 2001). We use the new Earth Simulator system at JAMSTEC to compute synthetic seismograms using the spectral-element method. The simulations are performed on 7,776 processors, which require 1,944 nodes of the Earth Simulator. On this number of nodes, a simulation of 50 minutes of wave propagation accurate at periods of 3.8 seconds and longer requires about 5 hours of CPU time. Comparisons of the synthetic waveforms with the observations at teleseismic stations show that the arrival time of the pP wave calculated for a depth of 679 km matches the observations well, which demonstrates that the earthquake really happened below the 660 km discontinuity. In our present forward simulations, the source rupture model with the low-angle fault dip is likely to better explain the observations.

  4. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate ...

  5. Foreshocks, aftershocks, and earthquake probabilities: Accounting for the landers earthquake

    Science.gov (United States)

    Jones, Lucile M.

    1994-01-01

    The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes it less probable that an earthquake will be a foreshock, because nonforeshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.
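
The mechanism in this abstract — aftershocks of a previous mainshock inflate the pool of candidate events, depressing the probability that any one event is a foreshock until the aftershocks decay — can be shown with a toy rate ratio. This is a schematic illustration, not Jones's actual modified equation, and all rate values are invented:

```python
def omori_rate(t, K=50.0, c=0.5, p=1.1):
    """Decaying aftershock rate (events/day) t days after a prior mainshock."""
    return K / (t + c) ** p

def foreshock_probability(t, r_foreshock=0.02, r_background=0.3):
    """Toy probability that a new event near the fault is a foreshock:
    the foreshock rate over the total candidate-event rate, where
    aftershocks of a previous mainshock add to the background."""
    total = r_foreshock + r_background + omori_rate(t)
    return r_foreshock / total

p_early = foreshock_probability(1.0)      # soon after the prior mainshock
p_late = foreshock_probability(1000.0)    # aftershocks mostly decayed
p_limit = 0.02 / (0.02 + 0.3)             # no aftershock contribution at all
```

The ratio recovers toward its aftershock-free limit as the Omori term decays, which is exactly the time dependence the abstract describes.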

  6. Dynamic strains for earthquake source characterization

    Science.gov (United States)

    Barbour, Andrew J.; Crowell, Brendan W

    2017-01-01

    Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.
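
A general regression of peak values against distance and magnitude, as described here, is typically log-linear. The sketch below recovers known coefficients from synthetic data; the coefficient values, units, and noise level are invented for illustration and are not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
mag = rng.uniform(4.5, 7.2, n)       # moment magnitudes (range as in the study)
log_r = rng.uniform(1.0, 3.0, n)     # log10 of hypocentral distance (km)

# Synthetic peak dynamic strain obeying log10(strain) = b0 + b1*M + b2*log10(R);
# the coefficients and scatter level are invented for this sketch.
b_true = np.array([-9.0, 0.9, -1.5])
log_strain = (b_true[0] + b_true[1] * mag + b_true[2] * log_r
              + rng.normal(0.0, 0.02, n))       # site/path scatter

# Ordinary least squares for the regression coefficients
X = np.column_stack([np.ones(n), mag, log_r])
b_fit, *_ = np.linalg.lstsq(X, log_strain, rcond=None)
```

The site-station and source-path biases mentioned in the abstract would enter as additional terms (e.g., per-station intercepts or crustal-type dummies) in the same design matrix.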

  7. Analysis and modeling of safety parameters in the selection of optimal routes for emergency evacuation after the earthquake (Case study: 13 Aban neighborhood of Tehran

    Directory of Open Access Journals (Sweden)

    Sajad Ganjehi

    2013-08-01

    Full Text Available Introduction: Earthquakes are imminent threats to urban areas of Iran, especially Tehran. They can cause extensive destruction and lead to heavy casualties. One of the most important aspects of disaster management after an earthquake is the rapid transfer of casualties to emergency shelters. To expedite the emergency evacuation process, the optimal safe-path method should be considered. To examine the safety of road networks and to determine the most optimal route in the pre-earthquake phase, a series of parameters should be taken into account. Methods: In this study, we employed a multi-criteria decision-making approach to determine and evaluate the effective safety parameters for the selection of optimal routes in emergency evacuation after an earthquake. Results: The relationship between the parameters was analyzed and the effect of each parameter was listed. A process model was described and a case study was implemented in the 13th Aban neighborhood (Tehran's 20th municipal district). Then, an optimal path to safe places in an emergency evacuation after an earthquake in the 13th Aban neighborhood was selected. Conclusion: The analytic hierarchy process (AHP) was employed as the main model. Each parameter of the model was described. Also, the capabilities of GIS software, such as layer coverage, were used. Keywords: Earthquake, emergency evacuation, Analytic Hierarchy Process (AHP), crisis management, optimization, 13th Aban neighborhood of Tehran
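
AHP, the model named in the conclusion, derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and screens the judgments with the consistency ratio CR = CI/RI. A minimal sketch with an invented, perfectly consistent 3×3 matrix (real evacuation-safety criteria would come from expert judgments):

```python
import numpy as np

def ahp_weights(A, iters=100):
    """Principal-eigenvector weights of pairwise-comparison matrix A via
    power iteration, plus the consistency ratio CR = CI / RI."""
    n = A.shape[0]
    w = np.ones(n)
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    lam_max = float((A @ w / w).mean())      # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)             # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    cr = 0.0 if ri == 0.0 else ci / ri
    return w, cr

# Perfectly consistent judgments implied by weights (0.6, 0.3, 0.1):
# A[i][j] = w_i / w_j, so lam_max = n and CR = 0.
target = np.array([0.6, 0.3, 0.1])
A = target[:, None] / target[None, :]
w, cr = ahp_weights(A)
```

In practice the comparison matrix is filled from expert judgments on Saaty's 1-9 scale and a CR above about 0.1 flags inconsistent judgments for revision.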

  8. Recurrence relations and time evolution in the three-dimensional Sawada model

    International Nuclear Information System (INIS)

    Lee, M.H.; Hong, J.

    1984-01-01

    Time-dependent behavior of the three-dimensional Sawada model is obtained by a method of recurrence relations. Exactly calculated quantities are the time evolution of the density-fluctuation operator and its random force. As an application, their linear coefficients, the relaxation and memory functions, are used to obtain certain dynamic quantities, e.g., the mobility.

  9. Statistics and Analysis of the Relations between Rainstorm Floods and Earthquakes

    Directory of Open Access Journals (Sweden)

    Baodeng Hou

    2016-01-01

    Full Text Available The frequent occurrence of geophysical disasters under climate change has drawn Chinese scholars to pay attention to disaster relations. If the occurrence sequence of disasters could be identified, long-term disaster forecasting could be realized. Based on the Earth Degassing Effect (EDE), which is valid, this paper took the magnitude, epicenter, and occurrence time of earthquakes, as well as the epicenter and occurrence time of rainstorm floods, as basic factors to establish an integrated model to study the correlation between rainstorm floods and earthquakes. The 2461 severe earthquakes that occurred in China or within 3000 km of China and the 169 heavy rainstorm floods that occurred in China over the past 200+ years served as the input data of the model. The computational results showed that although most of the rainstorm floods have nothing to do with severe earthquakes from a statistical perspective, some floods might be related to earthquakes. This is especially true when earthquakes happen in the vapor transmission zone, where abundant water vapor fuels rainstorms. In this regard, earthquakes are more likely to cause big rainstorm floods. However, many cases of rainstorm floods could be found after severe earthquakes, with a large extent of uncertainty.

  10. Clustered and transient earthquake sequences in mid-continents

    Science.gov (United States)

    Liu, M.; Stein, S. A.; Wang, H.; Luo, G.

    2012-12-01

    Earthquakes result from sudden release of strain energy on faults. On plate boundary faults, strain energy is constantly accumulating from steady and relatively rapid relative plate motion, so large earthquakes continue to occur so long as motion continues on the boundary. In contrast, such steady accumulation of strain energy does not occur on faults in mid-continents, because the far-field tectonic loading is not steadily distributed between faults, and because stress perturbations from complex fault interactions and other stress triggers can be significant relative to the slow tectonic stressing. Consequently, mid-continental earthquakes are often temporally clustered and transient, and spatially migrating. This behavior is well illustrated by large earthquakes in North China in the past two millennia, during which no large earthquakes repeated on the same fault segments, but moment release between large fault systems was complementary. Slow tectonic loading in mid-continents also causes long aftershock sequences. We show that the recent small earthquakes in the Tangshan region of North China are aftershocks of the 1976 Tangshan earthquake (M 7.5), rather than indicators of a new phase of seismic activity in North China, as many fear. Understanding the transient behavior of mid-continental earthquakes has important implications for assessing earthquake hazards. The sequence of large earthquakes in the New Madrid Seismic Zone (NMSZ) in the central US, which includes a cluster of M~7 events in 1811-1812 and perhaps a few similar ones in the past millennium, is likely a transient process, releasing previously accumulated elastic strain on recently activated faults. If so, this earthquake sequence will eventually end. Using simple analysis and numerical modeling, we show that the large NMSZ earthquakes may be ending now or in the near future.

  11. Earthquake forecasting and warning

    Energy Technology Data Exchange (ETDEWEB)

    Rikitake, T.

    1983-01-01

This review briefly describes two other books on the same subject, either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, along with scientific findings on precursory phenomena. A summary is given of the circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all with magnitudes ≥ 7.0) in China, and the 1978 Izu-Oshima earthquake in Japan. The book fails to comprehensively summarize recent advances in earthquake prediction research.

  12. Generating a robust prediction model for stage I lung adenocarcinoma recurrence after surgical resection.

    Science.gov (United States)

    Wu, Yu-Chung; Wei, Nien-Chih; Hung, Jung-Jyh; Yeh, Yi-Chen; Su, Li-Jen; Hsu, Wen-Hu; Chou, Teh-Ying

    2017-10-03

Lung cancer mortality remains high even after successful resection. Adjuvant treatment benefits stage II and III patients but not stage I patients, and most studies fail to predict recurrence in stage I patients. Our study included 211 lung adenocarcinoma patients (stages I-IIIA; 81% stage I) who received curative resections at Taipei Veterans General Hospital between January 2001 and December 2012. We generated a prediction model using 153 samples and validated it on an additional 58 clinical-outcome-blinded samples. Gene expression profiles were generated from formalin-fixed, paraffin-embedded tissue samples using microarrays, and data analysis was performed with a supervised clustering method. The prediction model, generated from mixed-stage samples, successfully separated patients at high vs. low risk for recurrence. The validation-set hazard ratio (HR = 4.38) was similar to that of the training set (HR = 4.53), indicating a robust training process. Our model successfully distinguished high- from low-risk stage IA and IB patients, with differences in 5-year disease-free survival between high- and low-risk patients of 42% for stage IA and 45% for stage IB, and thus identifies lung adenocarcinoma patients at high risk for recurrence who may benefit from adjuvant therapy. The separation in disease-free survival between the high- and low-risk groups represents a more than twofold improvement over earlier published results.
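The high/low-risk split and the disease-free-survival comparison behind such hazard-ratio claims can be sketched with a bare-bones Kaplan-Meier estimator. The patient records, risk scores, and cutoff below are hypothetical stand-ins for the microarray-derived model, not data from the study:

```python
# Hypothetical sketch: split patients into high/low recurrence-risk groups by a
# gene-expression score, then compare 5-year disease-free survival (DFS) with a
# simple Kaplan-Meier estimator. All numbers are invented for illustration.

def kaplan_meier(times, events, horizon):
    """Return S(horizon) for (time, event) pairs; event=1 means recurrence."""
    at_risk = len(times)
    surv = 1.0
    for t, e in sorted(zip(times, events)):
        if t > horizon:
            break
        if e:                       # recurrence observed at time t
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1                # leave the risk set (event or censoring)
    return surv

# (years to recurrence/censoring, event flag, hypothetical risk score)
patients = [
    (1.0, 1, 0.9), (2.5, 1, 0.8), (6.0, 0, 0.7), (0.8, 1, 0.95),
    (5.5, 0, 0.2), (7.0, 0, 0.1), (6.5, 0, 0.3), (3.0, 1, 0.85),
    (8.0, 0, 0.15), (4.0, 0, 0.25),
]
CUTOFF = 0.5                        # hypothetical score threshold
high = [(t, e) for t, e, s in patients if s >= CUTOFF]
low = [(t, e) for t, e, s in patients if s < CUTOFF]

dfs_high = kaplan_meier([t for t, _ in high], [e for _, e in high], 5.0)
dfs_low = kaplan_meier([t for t, _ in low], [e for _, e in low], 5.0)
print(f"5-yr DFS  high-risk: {dfs_high:.2f}   low-risk: {dfs_low:.2f}")
```

The gap between the two survival estimates is the quantity the paper reports per stage (42% for IA, 45% for IB).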

  13. Earthquake Loss Scenarios: Warnings about the Extent of Disasters

    Science.gov (United States)

    Wyss, M.; Tolis, S.; Rosset, P.

    2016-12-01

It is imperative that losses expected due to future earthquakes be estimated. Officials and the public need to be aware of what disaster is likely in store for them in order to reduce fatalities and help the injured efficiently. In highly active earthquake belts, scenarios for earthquake parameters can be constructed to reasonable accuracy, based on knowledge of seismotectonics and history. Because of the inherent uncertainties of loss estimates, however, it would be desirable for more than one group to calculate an estimate for the same area. By discussing these estimates, one may find a consensus on the range of potential disasters and persuade officials and residents of the reality of the earthquake threat. Modeling a scenario and estimating earthquake losses requires sufficiently accurate data sets on the number of people present, the built environment, and, if possible, the transmission of seismic waves. As examples we use loss estimates for possible repeats of historic earthquakes in Greece that occurred between 464 BCE and 700 CE. We model future large Greek earthquakes as having M6.8 and rupture lengths of 60 km. In four locations where historic earthquakes with serious losses have occurred, we estimate that 1,000 to 1,500 people might perish, with about four times as many injured. Defining the area of influence of these earthquakes as that with shaking intensities of V and larger, we estimate that 1.0 to 2.2 million people in about 2,000 settlements may be affected. We calibrate the QLARM tool for calculating intensities and losses in Greece using the M6 1999 Athens earthquake and by matching the isoseismal information for six earthquakes that occurred in Greece during the last 140 years. Comparing the fatality numbers that would theoretically occur today with the numbers reported, and correcting for the increase in population, we estimate that improvement of the building stock has reduced the mortality and injury rate in Greece.
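The scenario-loss chain described above (intensity threshold → affected radius → exposed population) can be sketched as follows. The attenuation coefficients, depth, and settlement exposure data are invented for illustration; they are not the calibrated QLARM values:

```python
import math

# Illustrative sketch of scenario-loss logic: estimate the radius within which
# shaking from an M6.8 event reaches intensity >= V, then sum the exposed
# population. Coefficients and settlements are hypothetical, not QLARM's.

C1, C2, C3, DEPTH_KM = 1.5, 1.4, 3.0, 10.0   # hypothetical IPE coefficients

def mmi(magnitude, epicentral_km):
    """Toy intensity-prediction equation (not the calibrated QLARM relation)."""
    hypo = math.sqrt(epicentral_km ** 2 + DEPTH_KM ** 2)
    return C1 + C2 * magnitude - C3 * math.log10(hypo)

def radius_for_intensity(magnitude, target, r_max=500.0):
    """Largest epicentral distance (km) where intensity is still >= target."""
    r = 1.0
    while r < r_max and mmi(magnitude, r) >= target:
        r += 1.0
    return r

# (settlement, epicentral distance km, population) -- made-up exposure data
settlements = [("A", 5, 120000), ("B", 40, 45000), ("C", 90, 300000),
               ("D", 160, 80000), ("E", 300, 500000)]

r5 = radius_for_intensity(6.8, 5.0)
affected = sum(pop for _, d, pop in settlements if d <= r5)
print(f"intensity >= V out to ~{r5:.0f} km; affected population {affected}")
```

Summing exposed settlements inside the intensity-V contour is exactly how the "1.0 to 2.2 million people in about 2,000 settlements" figure is framed, just with real exposure data and a calibrated attenuation model.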

  14. CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning

    Science.gov (United States)

    Bose, Maren; Graves, Robert; Gill, David; Callaghan, Scott; Maechling, Phillip J.

    2014-01-01

Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs), along with event magnitude and source-to-site distances, to estimate expected shaking levels. In this simplified approach, effects of finite-fault geometry, directivity, and site and basin response are often generalized, which may lead to significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions that account for 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (<0.5 Hz) simulations of more than 415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a ‘proof of concept’, aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude, and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (the CyberShake simulations are here considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (∼20 per cent) of the CyberShake simulations, but can explain the MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine the threshold magnitudes (and warning times) that earthquakes on various active faults in southern California need to exceed to cause at least ‘moderate’, ‘strong’ or ‘very strong’ shaking in the Los Angeles (LA) basin. These thresholds are used to construct a simple and robust EEW algorithm.
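The data-driven mapping from a few source parameters to MMI can be sketched with Gaussian-kernel ridge regression, used here as a stand-in for the paper's SVR models. The training data below are synthetic, generated from a toy attenuation rule rather than CyberShake waveforms:

```python
import numpy as np

# Stand-in sketch for the SVR approach: Gaussian-kernel ridge regression
# mapping (magnitude, rupture ratio, log10 distance) -> MMI. Training data are
# synthetic (a toy attenuation rule plus noise), not CyberShake simulations.

rng = np.random.default_rng(0)
n = 200
mag = rng.uniform(6.5, 8.5, n)
rr = rng.uniform(0.0, 1.0, n)              # uni-/bilateral rupture ratio
logd = rng.uniform(0.5, 2.5, n)            # log10 source-to-site distance (km)
mmi = 1.5 * mag + 0.8 * rr - 2.2 * logd + rng.normal(0, 0.2, n)  # toy "truth"

X = np.column_stack([mag, rr, logd])
Xm, Xs = X.mean(0), X.std(0)
Z = (X - Xm) / Xs                          # standardize features

def gram(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3
alpha = np.linalg.solve(gram(Z, Z) + lam * np.eye(n), mmi)  # ridge solution

def predict(mag_q, rr_q, logd_q):
    z = (np.array([[mag_q, rr_q, logd_q]]) - Xm) / Xs
    return float((gram(z, Z) @ alpha)[0])

resid = np.array([predict(*x) for x in X]) - mmi
print(f"training residual std: {resid.std():.2f} intensity units")
```

Like the paper's models, the regression imposes no functional form: the kernel machine learns whatever dependence on magnitude, rupture ratio, and distance the (here synthetic) data contain.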

  15. Searching for evidence of a preferred rupture direction in small earthquakes at Parkfield

    Science.gov (United States)

    Kane, D. L.; Shearer, P. M.; Allmann, B.; Vernon, F. L.

    2009-12-01

    Theoretical modeling of strike-slip ruptures along a bimaterial interface suggests that the interface will have a preferred rupture direction and will produce asymmetric ground motion (Shi and Ben-Zion, 2006). This could have widespread implications for earthquake source physics and for hazard analysis on mature faults because larger ground motions would be expected in the direction of rupture propagation. Studies have shown that many large global earthquakes exhibit unilateral rupture, but a consistently preferred rupture direction along faults has not been observed. Some researchers have argued that the bimaterial interface model does not apply to natural faults, noting that the rupture of the M 6 2004 Parkfield earthquake propagated in the opposite direction from previous M 6 earthquakes along that section of the San Andreas Fault (Harris and Day, 2005). We analyze earthquake spectra from the Parkfield area to look for evidence of consistent rupture directivity along the San Andreas Fault. We separate the earthquakes into spatially defined clusters and quantify the differences in high-frequency energy among earthquakes recorded at each station. Propagation path effects are minimized in this analysis because we compare earthquakes located within a small volume and recorded by the same stations. By considering a number of potential end-member models, we seek to determine if a preferred rupture direction is present among small earthquakes at Parkfield.
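The station-wise comparison of high-frequency energy between co-located events can be sketched as a spectral-energy fraction. The two waveforms below are synthetic; in the real analysis, events within a small cluster recorded at the same station share the propagation path, so differences in this fraction point to source effects such as directivity:

```python
import numpy as np

# Illustrative directivity diagnostic: compare the share of high-frequency
# spectral energy in records of two co-located events at one station.
# The waveforms are synthetic stand-ins for real seismograms.

FS = 100.0  # sampling rate, Hz

def hf_energy_fraction(trace, fs=FS, corner=10.0):
    """Fraction of spectral energy above `corner` Hz."""
    spec = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    return spec[freqs >= corner].sum() / spec.sum()

rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1.0 / FS)
# Event A: stronger high-frequency content (e.g. rupture toward the station).
ev_a = np.exp(-3 * t) * (np.sin(2 * np.pi * 2 * t) + 0.8 * rng.standard_normal(t.size))
# Event B: depleted high frequencies (e.g. rupture away from the station).
ev_b = np.exp(-3 * t) * (np.sin(2 * np.pi * 2 * t) + 0.2 * rng.standard_normal(t.size))

fa, fb = hf_energy_fraction(ev_a), hf_energy_fraction(ev_b)
print(f"HF fraction  event A: {fa:.2f}   event B: {fb:.2f}")
```

Comparing such fractions across events in one cluster, station by station, is the essence of the end-member-model test described in the abstract.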

  16. Recurrence predictive models for patients with hepatocellular carcinoma after radiofrequency ablation using support vector machines with feature selection methods.

    Science.gov (United States)

    Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming

    2014-12-01

Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments that achieve tumor eradication. Identifying patients at high risk for recurrence may enable more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence-predictive models for HCC patients who received radiofrequency ablation (RFA). From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including the genetic algorithm (GA), the simulated annealing (SA) algorithm, random forests (RF), and hybrid methods (GA+RF and SA+RF), were used to select an important subset of a total of 16 clinical features. These feature selection methods were combined with a support vector machine (SVM) to develop predictive models with better performance, and five-fold cross-validation was used to train and test the SVM models. The SVM-based predictive models with hybrid feature selection and 5-fold cross-validation achieved average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The SVM-derived predictive model can flag patients at high risk for recurrence, who should be closely followed up after complete RFA treatment.
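One of the named wrappers, simulated-annealing feature selection scored by k-fold cross-validation, can be sketched as follows. A nearest-centroid classifier stands in for the SVM, and the patient data are synthetic (two informative features out of eight), so nothing here reproduces the study's numbers:

```python
import math
import random

# Hedged sketch of wrapper feature selection: simulated annealing over feature
# subsets, scored by k-fold cross-validation. Nearest-centroid stands in for
# the SVM; the data are synthetic with signal in features 0 and 3 only.

random.seed(7)
N_FEAT, N = 8, 120

def make_patient(label):
    x = [random.gauss(0, 1) for _ in range(N_FEAT)]
    x[0] += 2.0 * label          # informative feature
    x[3] -= 2.0 * label          # informative feature
    return x, label

data = [make_patient(i % 2) for i in range(N)]

def cv_accuracy(mask, k=5):
    """k-fold CV accuracy of a nearest-centroid classifier on masked features."""
    feats = [i for i in range(N_FEAT) if mask[i]]
    if not feats:
        return 0.0
    correct = 0
    for fold in range(k):
        test = [d for i, d in enumerate(data) if i % k == fold]
        train = [d for i, d in enumerate(data) if i % k != fold]
        cent = {}
        for lab in (0, 1):
            rows = [x for x, y in train if y == lab]
            cent[lab] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        for x, y in test:
            v = [x[f] for f in feats]
            pred = min((0, 1),
                       key=lambda lab: sum((a - b) ** 2 for a, b in zip(v, cent[lab])))
            correct += pred == y
    return correct / N

mask = [random.random() < 0.5 for _ in range(N_FEAT)]
score, temp = cv_accuracy(mask), 0.3
for step in range(150):          # simulated-annealing search over subsets
    cand = mask[:]
    cand[random.randrange(N_FEAT)] ^= True       # flip one feature in/out
    s = cv_accuracy(cand)
    if s > score or random.random() < math.exp((s - score) / temp):
        mask, score = cand, s
    temp *= 0.98
print("selected features:", [i for i in range(N_FEAT) if mask[i]],
      f"CV accuracy {score:.2f}")
```

The accept-worse-moves-with-decaying-probability rule is what lets SA escape local optima that a greedy wrapper would get stuck in, which is presumably why the study paired it (and GA) with random forests in its hybrid variants.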

  17. Statistical validation of earthquake related observations

    Science.gov (United States)

    Kossobokov, V. G.

    2011-12-01

The confirmed fractal nature of earthquakes and of their distribution in space and time implies that many traditional estimations of seismic hazard (from term-less to short-term) rest on erroneous assumptions of easily tractable or, conversely, delicately designed models. The widespread practice of treating such deceptive modeling as a "reasonable proxy" for the natural seismic process leads to seismic hazard assessments of unknown quality, whose errors propagate non-linearly into the derived estimates of risk and, eventually, into unexpected societal losses of unacceptable level. Studies aimed at earthquake forecast/prediction must include validation in retrospective (at least) and, eventually, prospective tests. In the absence of such control, a suggested "precursor/signal" remains a "candidate" whose link to the target seismic event is a model assumption. Predicting in advance is the only decisive test of forecasts/predictions; therefore, the scorecard of any "established precursor/signal", represented by the empirical probabilities of alarms and failures-to-predict achieved in prospective testing, must prove statistical significance by rejecting the null hypothesis of random coincidental occurrence in advance of target earthquakes. We reiterate the so-called "Seismic Roulette" null hypothesis as the most adequate undisturbed random alternative, accounting for the empirical spatial distribution of earthquakes: (i) consider a roulette wheel with as many sectors as there are earthquake locations in a sample catalog representing the seismic locus, one sector per location; (ii) make your bet according to the prediction (i.e., determine which locations lie inside the alarm area, and put one chip in each of the corresponding sectors); (iii) Nature turns the wheel; (iv) accumulate statistics of wins and losses along with the number of chips spent. A precursor whose scorecard exposes an imperfection of Seismic Roulette thereby outperforms chance.
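The Seismic Roulette test reduces to a binomial tail probability: if a fraction q of the catalog's "sectors" (earthquake locations) is alarmed, random betting hits each target event with probability q. A sketch with made-up counts, not from any published scorecard:

```python
from math import comb

# "Seismic Roulette" significance sketch: with a fraction q of catalog
# locations inside alarms, random betting predicts each target event with
# probability q. The counts below are invented for illustration.

def binom_tail(n, k, q):
    """P(X >= k) for X ~ Binomial(n, q): chance of k or more hits by luck."""
    return sum(comb(n, i) * q ** i * (1 - q) ** (n - i) for i in range(k, n + 1))

n_targets = 20        # target earthquakes during prospective testing
n_hits = 15           # of them fell inside issued alarms
alarm_fraction = 0.3  # share of "roulette sectors" covered by alarms

p_value = binom_tail(n_targets, n_hits, alarm_fraction)
print(f"P(>= {n_hits} hits by chance) = {p_value:.2e}")
```

A tiny p-value rejects the null hypothesis of random coincidental success; a large one means the "established precursor" has not yet beaten Seismic Roulette.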

  18. Earthquake behavior of steel cushion-implemented reinforced concrete frames

    Science.gov (United States)

    Özkaynak, Hasan

    2018-04-01

The earthquake performance of vulnerable structures can be improved by implementing supplementary energy-dissipative metallic elements. The main aim of this paper is to describe the earthquake behavior of steel cushion-implemented reinforced concrete frames (SCI-RCFR) in terms of displacement demands and energy components. Several quasi-static experiments were performed on steel cushions (SCs) installed in reinforced concrete (RC) frames. The test results served as the basis for analytical models of the SCs and of a bare reinforced concrete frame (B-RCFR); these models were integrated to obtain the analytical model of the SCI-RCFR. Nonlinear time-history analyses (NTHA) were performed on the SCI-RCFR under a selected set of earthquake records. According to the NTHA, SC application is an effective technique for increasing the seismic performance of RC structures: the main portion of the earthquake input energy was dissipated through the SCs, which reduced the plastic energy demand on structural elements by almost 50% at distinct drift levels.
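The energy a metallic damper such as a steel cushion dissipates per loading cycle equals the area enclosed by its force-displacement hysteresis loop. A sketch with an idealized elastic-perfectly-plastic loop and illustrative parameters (not the tested SC values):

```python
# Sketch: dissipated energy per cycle = area enclosed by the force-displacement
# hysteresis loop. The loop below is an idealized elastic-perfectly-plastic
# cycle; FY, DY, DMAX are illustrative, not the tested steel-cushion values.

def loop_area(path):
    """Enclosed area of a closed (displacement, force) polygon (shoelace formula)."""
    area = 0.0
    for (d1, f1), (d2, f2) in zip(path, path[1:] + path[:1]):
        area += d1 * f2 - d2 * f1
    return abs(area) / 2.0

FY, DY, DMAX = 50.0, 5.0, 25.0   # yield force (kN); yield and peak disp (mm)
# Corner points of one full idealized hysteresis cycle:
cycle = [(-DMAX, -FY), (-DMAX + 2 * DY, FY), (DMAX, FY), (DMAX - 2 * DY, -FY)]
e_cycle = loop_area(cycle)
print(f"dissipated energy per cycle: {e_cycle:.0f} kN*mm "
      f"(EPP check 4*FY*(DMAX-DY) = {4 * FY * (DMAX - DY):.0f})")
```

Summing such loop areas over all cycles of the response gives the hysteretic energy component that the NTHA attributes to the SCs versus the frame.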

  19. Ionospheric earthquake precursors

    International Nuclear Information System (INIS)

    Bulachenko, A.L.; Oraevskij, V.N.; Pokhotelov, O.A.; Sorokin, V.N.; Strakhov, V.N.; Chmyrev, V.M.

    1996-01-01

Results of experimental studies on ionospheric earthquake precursors, the development of models of processes in the earthquake focus, and the physical mechanisms by which various types of precursors form are considered. The composition of an experimental space-based system for monitoring earthquake precursors is determined. 36 refs., 5 figs.

  20. A Kinesthetic Demonstration for Locating Earthquake Epicenters

    Science.gov (United States)

    Keyantash, J.; Sperber, S.

    2005-12-01

During Spring 2005, an inquiry-based curriculum for plate tectonics was developed for implementation in sixth-grade classrooms within the Los Angeles Unified School District (LAUSD). Two cohorts of LAUSD teachers received training and orientation to the plate tectonics unit during one-week workshops in July 2005. During the training workshops, however, it was observed that there was considerable confusion among the teachers as to how the traditional "textbook" explanation of the time lag between P and S waves on a seismogram could possibly be used to determine the epicenter of an earthquake. One of the State of California science content standards for sixth-grade students is that they understand how the epicenters of earthquakes are determined, so it was critical that the teachers themselves grasped the concept. In response to the adult learners' difficulties, the classroom explanation of earthquake epicenter location was supplemented with an outdoor kinesthetic activity. It was found that the hands-on model greatly cemented the teachers' understanding of the underlying theory. This paper details the steps of the kinesthetic demonstration for earthquake epicenter identification, as well as offering extended options for its classroom implementation.
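The textbook calculation the teachers struggled with can be made concrete: the S-P lag at each station fixes an epicentral distance, and three such distances intersect at the epicenter. A sketch with assumed typical crustal velocities and a hypothetical station geometry:

```python
import math

# The S-P lag converts to epicentral distance because P and S waves travel at
# different speeds; distances from three stations locate the epicenter.
# Velocities are typical crustal values, assumed for illustration.

VP, VS = 6.0, 3.5  # km/s, assumed P- and S-wave speeds

def distance_from_sp_lag(lag_s):
    """Distance (km) implied by an S-P arrival-time difference of lag_s seconds."""
    return lag_s * (VP * VS) / (VP - VS)

def locate(stations, lags, grid=range(0, 201)):
    """Brute-force grid search for the point best matching all the distances."""
    dists = [distance_from_sp_lag(l) for l in lags]
    best, best_err = None, float("inf")
    for x in grid:
        for y in grid:
            err = sum((math.hypot(x - sx, y - sy) - d) ** 2
                      for (sx, sy), d in zip(stations, dists))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Hypothetical stations (km coordinates) and lags consistent with an
# epicenter at (100, 60):
stations = [(0, 0), (150, 0), (60, 180)]
true_epi = (100, 60)
lags = [math.hypot(true_epi[0] - sx, true_epi[1] - sy) * (VP - VS) / (VP * VS)
        for sx, sy in stations]
print("estimated epicenter:", locate(stations, lags))
```

This is the same triangulation the kinesthetic activity acts out: each "station" measures a lag, converts it to a distance, and the walkers' circles meet at the epicenter.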