New geological perspectives on earthquake recurrence models
International Nuclear Information System (INIS)
Schwartz, D.P.
1997-01-01
In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured even once in the historical record, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models, and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of timing and size of past events, provide a window into the driving mechanism of the earthquake engine: the cycle of stress build-up and release.
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
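The competing models in this abstract are simple enough to compare numerically. Below is a minimal sketch (not the authors' code; the catalog, slip rate, and error metric are all invented for illustration) contrasting the time-predictable rule with a fixed-recurrence rule:

```python
import numpy as np

def predict_intervals(times, slips, slip_rate):
    """Predict each inter-event time under two of the models discussed:
    - time-predictable: interval = preceding slip / long-term slip rate
    - fixed recurrence: interval = mean of the observed intervals
    Returns (observed, time_predictable, fixed) interval arrays."""
    intervals = np.diff(times)
    time_pred = slips[:-1] / slip_rate
    fixed = np.full_like(intervals, intervals.mean())
    return intervals, time_pred, fixed

# Synthetic repeating-earthquake sequence: regular timing, variable slip
times = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # event times, yr
slips = np.array([1.0, 2.0, 0.5, 1.5, 1.0])       # slip per event, m
obs, tp, fixed = predict_intervals(times, slips, slip_rate=0.1)  # m/yr

rms_tp = np.sqrt(np.mean((obs - tp) ** 2))
rms_fixed = np.sqrt(np.mean((obs - fixed) ** 2))
# For a sequence regular in time, the fixed-recurrence model has the
# smaller misfit, mirroring the paper's conclusion for repeating events.
```

A slip-predictable variant would analogously predict each slip from the preceding interval times the slip rate.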
Earthquake recurrence models fail when earthquakes fail to reset the stress field
Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.
2012-01-01
Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, calling into question once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-dependent recurrence models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample: it did release a significant portion of the stress along its fault segment and shows a substantial change in b-values.
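The forecast step described above, extrapolating microseismicity b-values to a rate of M6 events, follows from the Gutenberg-Richter relation log10 N = a - b*M. A minimal sketch with made-up a- and b-values (illustrative only, not the Parkfield estimates):

```python
def gr_annual_rate(a, b, m):
    """Annual number of events with magnitude >= m under
    Gutenberg-Richter: N(m) = 10**(a - b*m)."""
    return 10 ** (a - b * m)

# Hypothetical fit to a microseismicity catalog (example values only)
a_value, b_value = 3.2, 0.9
rate_m6 = gr_annual_rate(a_value, b_value, 6.0)  # M >= 6 events per year
mean_recurrence_yr = 1.0 / rate_m6               # mean time between them
```

Comparing such an extrapolated rate with the observed rate of mainshocks is the consistency check the abstract describes.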
International Nuclear Information System (INIS)
Esmer, Oezcan
2006-01-01
This paper first evaluates the earthquake prediction method (1999) used by the US Geological Survey as the lead example and also reviews recent models. Secondly, it points out the ongoing debate on the predictability of earthquake recurrences and lists the main claims of both sides. The traditional methods and the 'frequentist' approach used in determining earthquake probabilities cannot end the complaints that earthquakes are unpredictable. It is argued that the prevailing 'crisis' in seismic research corresponds to the Pre-Maxent Age of the current situation. The period of Kuhnian 'crisis' should give rise to a new paradigm based on the information-theoretic framework, including the inverse problem, Maxent, and Bayesian methods. The paper aims to show that information-theoretic methods shall provide the required 'Methodica Firma' for earthquake prediction models.
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.
2017-12-01
A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporates clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would
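The two standard recurrence models contrasted above can be written down directly. A sketch using Python's statistics.NormalDist, with illustrative numbers (mean recurrence 500 yr, standard deviation 200 yr, 300 yr elapsed since the last event) rather than the Cascadia values:

```python
import math
from statistics import NormalDist

def poisson_prob(mean_interval, window):
    """Time-independent (Poisson) probability of at least one event
    in the next `window` years; the fault has no 'memory'."""
    return 1.0 - math.exp(-window / mean_interval)

def renewal_prob(mean, sigma, elapsed, window):
    """Time-dependent (Gaussian renewal) conditional probability of an
    event in the next `window` years, given `elapsed` years since the
    last one; the fault 'remembers' only the last earthquake."""
    dist = NormalDist(mean, sigma)
    survive = 1.0 - dist.cdf(elapsed)                   # P(T > elapsed)
    within = dist.cdf(elapsed + window) - dist.cdf(elapsed)
    return within / survive

p_poisson = poisson_prob(500.0, 50.0)
p_renewal = renewal_prob(500.0, 200.0, 300.0, 50.0)
# Unlike the Poisson probability, the renewal probability keeps rising
# as the elapsed time grows, until an earthquake resets it to zero.
```

The proposed LTFM differs in that the probability drops after an event but not necessarily to zero, carrying memory across multiple cycles.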
Tsunamigenic earthquakes in the Gulf of Cadiz: fault model and recurrence
Directory of Open Access Journals (Sweden)
L. M. Matias
2013-01-01
The Gulf of Cadiz, as part of the Azores-Gibraltar plate boundary, is recognized as a potential source of big earthquakes and tsunamis that may affect the bordering countries, as occurred on 1 November 1755. Preparing for the future, Portugal is establishing a national tsunami warning system in which the threat caused by any large-magnitude earthquake in the area is estimated from a comprehensive database of scenarios. In this paper we summarize the knowledge about the active tectonics in the Gulf of Cadiz and integrate the available seismological information in order to propose the generation model of destructive tsunamis to be applied in tsunami warnings. The fault model derived is then used to estimate the recurrence of large earthquakes using the fault slip rates obtained by Cunha et al. (2012) from thin-sheet neotectonic modelling. Finally we evaluate the consistency of seismicity rates derived from historical and instrumental catalogues with the convergence rates between Eurasia and Nubia given by plate kinematic models.
Geological and historical evidence of irregular recurrent earthquakes in Japan.
Satake, Kenji
2015-10-28
Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres, releasing strains accumulated over decades to centuries of plate motion. Assuming a simple 'characteristic earthquake' model in which similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes, including geological traces from giant (M∼9) earthquakes, indicate a variety of sizes and recurrence intervals of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that the average recurrence interval of great earthquakes is approximately 100 years, but tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake, with an interval of approximately 600 years, is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence intervals of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts, such as the use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone, are being conducted by government committees. © 2015 The Author(s).
Ren, Junjie; Zhang, Shimin
2013-01-01
Recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably experienced a previous event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
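The quoted recurrence interval can be sanity-checked against the quoted moment rate using the standard Hanks-Kanamori moment-magnitude relation. This back-of-envelope sketch is not the authors' calculation; it only shows the order of magnitude implied by the numbers in the abstract:

```python
def seismic_moment(mw):
    """Hanks-Kanamori relation: M0 [N m] = 10**(1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

moment_rate = 2.7e17                 # N m / yr, quoted in the abstract
m0_wenchuan = seismic_moment(7.9)    # moment of an Mw 7.9 event
recurrence_yr = m0_wenchuan / moment_rate
# Roughly 3 x 10^3 yr, the same order as the paper's 3900 +/- 400 yr;
# the exact figure depends on the moment the authors adopt for 2008.
```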
Chilean megathrust earthquake recurrence linked to frictional contrast at depth
Moreno, M.; Li, S.; Melnick, D.; Bedford, J. R.; Baez, J. C.; Motagh, M.; Metzger, S.; Vajedian, S.; Sippl, C.; Gutknecht, B. D.; Contreras-Reyes, E.; Deng, Z.; Tassara, A.; Oncken, O.
2018-04-01
Fundamental processes of the seismic cycle in subduction zones, including those controlling the recurrence and size of great earthquakes, are still poorly understood. Here, by studying the 2016 earthquake in southern Chile—the first large event within the rupture zone of the 1960 earthquake (moment magnitude (Mw) = 9.5)—we show that the frictional zonation of the plate interface fault at depth mechanically controls the timing of more frequent, moderate-size deep events (Mw < 8) and less frequent, shallow great earthquakes (Mw > 8.5). We model the evolution of stress build-up for a seismogenic zone with heterogeneous friction to examine the link between the 2016 and 1960 earthquakes. Our results suggest that the deeper segments of the seismogenic megathrust are weaker and interseismically loaded by a more strongly coupled, shallower asperity. Deeper segments fail earlier (~60 yr recurrence), producing moderate-size events that precede the failure of the shallower region, which fails in a great earthquake (recurrence >110 yr). We interpret the contrasting frictional strength and lag time between deeper and shallower earthquakes to be controlled by variations in pore fluid pressure. Our integrated analysis strengthens understanding of the mechanics and timing of great megathrust earthquakes, and therefore could aid in the seismic hazard assessment of other subduction zones.
Silica precipitation potentially controls earthquake recurrence in seismogenic zones.
Saishu, Hanae; Okamoto, Atsushi; Otsubo, Makoto
2017-10-17
Silica precipitation is assumed to play a significant role in post-earthquake recovery of the mechanical and hydrological properties of seismogenic zones. However, the relationship between the widespread quartz veins around seismogenic zones and earthquake recurrence is poorly understood. Here we propose a novel model of quartz vein formation associated with fluid advection from host rocks and silica precipitation in a crack, in order to quantify the timescale of crack sealing. When applied to sets of extensional quartz veins around the Nobeoka Thrust of SW Japan, an ancient seismogenic splay fault, our model indicates that a fluid pressure drop of 10-25 MPa facilitates the formation of typical extensional quartz veins over a period of 6.6 × 10⁰ to 5.6 × 10¹ years, and that 89%-100% of porosity is recovered within ~3 × 10² years. The former and latter sealing timescales correspond to the extensional stress period (~3 × 10¹ years) and the recurrence interval of megaearthquakes in the Nankai Trough (~3 × 10² years), respectively. We therefore suggest that silica precipitation in the accretionary wedge controls the recurrence interval of large earthquakes in subduction zones.
Earthquake Recurrence and the Resolution Potential of Tectono‐Geomorphic Records
Zielke, Olaf
2018-04-17
A long‐standing debate in active tectonics addresses how slip is accumulated through space and time along a given fault or fault section. This debate is in part still ongoing because of the lack of sufficiently long instrumental data that may constrain the recurrence characteristics of surface‐rupturing earthquakes along individual faults. Geomorphic and stratigraphic records are used instead to constrain this behavior. Although geomorphic data frequently indicate slip accumulation via quasicharacteristic same‐size offset increments, stratigraphic data indicate that earthquake timing observes a quasirandom distribution. Assuming that both observations are valid within their respective frameworks, I want to address here which recurrence model is able to reproduce this seemingly contradictory behavior. I further want to address how aleatory offset variability and epistemic measurement uncertainty affect our ability to resolve single‐earthquake surface slip and along‐fault slip‐accumulation patterns. I use a statistical model that samples probability density functions (PDFs) for geomorphic marker formation (storm events), marker displacement (surface‐rupturing earthquakes), and offset measurement, generating tectono‐geomorphic catalogs to investigate which PDF combination consistently reproduces the above‐mentioned field observations. Doing so, I find that neither a purely characteristic earthquake (CE) nor a Gutenberg–Richter (GR) earthquake recurrence model is able to consistently reproduce those field observations. A combination of both, however, with moderate‐size earthquakes following the GR model and large earthquakes following the CE model, is able to reproduce quasirandom earthquake recurrence times while simultaneously generating quasicharacteristic geomorphic offset increments. Along‐fault slip accumulation is dominated by, but not exclusively linked to, the occurrence of similar‐size large earthquakes. Further, the resolution
A minimalist model of characteristic earthquakes
DEFF Research Database (Denmark)
Vázquez-Prada, M.; González, Á.; Gómez, J.B.
2002-01-01
In a spirit akin to the sandpile model of self-organized criticality, we present a simple statistical model of the cellular-automaton type which simulates the role of an asperity in the dynamics of a one-dimensional fault. This model produces an earthquake spectrum similar to the characteristic-earthquake behaviour of some seismic faults. This model, which has no parameters, is amenable to an algebraic description as a Markov chain. This possibility illuminates some important results, obtained by Monte Carlo simulations, such as the earthquake size-frequency relation and the recurrence time of the characteristic earthquake.
International Nuclear Information System (INIS)
Rohay, A.C.
1991-01-01
Gable Mountain is a segment of the Umtanum Ridge-Gable Mountain structural trend, an east-west trending series of anticlines and one of the major geologic structures on the Hanford Site. A probabilistic seismic exposure model indicates that Gable Mountain and two adjacent segments contribute significantly to the seismic hazard at the Hanford Site. Geologic measurements of the uplift of initially horizontal (11-12 Ma) basalt flows indicate that a broad, continuous, primary anticline grew at an average rate of 0.009-0.011 mm/a, and narrow, segmented, secondary anticlines grew at rates of 0.009 mm/a at Gable Butte and 0.018 mm/a at Gable Mountain. The buried Southeast Anticline appears to have a different geometry, consisting of a single, intermediate-width anticline with an estimated growth rate of 0.007 mm/a. The recurrence rate and maximum magnitude of earthquakes for the fault models were used to estimate the fault slip rate for each of the fault models and to determine the implied structural growth rate of the segments. The current model for Gable Mountain-Gable Butte predicts 0.004 mm/a of vertical uplift due to primary faulting and 0.008 mm/a due to secondary faulting. These rates are roughly half the structurally estimated rates for Gable Mountain, but the model does not account for the smaller secondary fold at Gable Butte. The model predicted an uplift rate for the Southeast Anticline of 0.006 mm/a, caused by the low "fault capability" weighting rather than a different fault geometry. The effects of previous modifications to the fault models are examined and potential future modifications are suggested. For example, the earthquake recurrence relationship used in the current exposure model has a b-value of 1.15, compared to a previous value of 0.85. This increases the implied deformation rates due to secondary fault models, and therefore supports the use of this regionally determined b-value for this fault/fold system.
Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault
Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.
2010-01-01
It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
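One simple diagnostic behind statements like "more regular than expected from a Poisson distribution" is the coefficient of variation (CV) of inter-event times: CV near 1 is consistent with a Poisson process, CV well below 1 with quasi-periodic recurrence, and CV above 1 with clustering. A sketch with invented intervals (the paper applies more careful statistical tests to the Wrightwood record):

```python
import statistics

def coefficient_of_variation(intervals):
    """CV = sample standard deviation / mean of inter-event times."""
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Hypothetical paleoseismic inter-event times in years
# (illustrative only, not the Wrightwood data)
intervals = [80, 110, 95, 120, 100, 90]
cv = coefficient_of_variation(intervals)
# cv well below 1 here, i.e. quasi-periodic rather than Poissonian
```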
Seismic Regionalization of Michoacan, Mexico and Recurrence Periods for Earthquakes
Magaña García, N.; Figueroa-Soto, Á.; Garduño-Monroy, V. H.; Zúñiga, R.
2017-12-01
Michoacán is one of the states with the highest occurrence of earthquakes in Mexico; it lies on a convergent margin where the Cocos plate subducts beneath the North American plate along the Pacific coast, and it also hosts active faults within the state, such as the Morelia-Acambay Fault System (MAFS). Combining seismic, paleoseismological, and geological studies is important for sound planning and development of urban areas and for mitigating disasters from destructive earthquakes. With statistical seismology it is possible to characterize the degree of seismic activity as well as to estimate recurrence periods for earthquakes. For this work, a seismicity catalog of Michoacán was compiled and homogenized in time and magnitude. This information was obtained from global and national agencies (SSN, CMT, etc.), from data published by Mendoza and Martínez-López (2016), and from the seismic catalog homogenized by F. R. Zúñiga (personal communication). From the analysis of the different focal mechanisms reported in the literature and from geological studies, the seismic regionalization of the state of Michoacán complements the one presented by Vázquez-Rosas (2012), and recurrence periods were estimated for earthquakes within four seismotectonic regions. In addition, stable windows were determined for the b-value of the Gutenberg-Richter (1944) relation using the Maximum Curvature (MAXC) and EMR (Entire Magnitude Range, 2005) techniques, which allowed recurrence periods to be determined with both techniques for earthquakes above magnitude 7.5 in the subduction zone (zone A), above magnitude 5 in zone B1, above magnitude 7.0 in zone B2, and in the Morelia-Acambay Fault System zone (zone C).
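The b-value examined in such studies is usually estimated by maximum likelihood (Aki, 1965) after fixing the completeness magnitude Mc with a method such as MAXC or EMR. A sketch with synthetic magnitudes; the dm/2 binning correction assumes magnitudes reported in 0.1-unit bins:

```python
import math

def b_value_ml(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) with the standard binning
    correction: b = log10(e) / (mean(M) - (Mc - dm/2)), for M >= Mc."""
    above = [m for m in mags if m >= mc]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (mc - dm / 2))

# Synthetic catalog magnitudes (illustrative, not the Michoacan catalog)
mags = [3.0, 3.1, 3.0, 3.4, 3.2, 3.7, 3.0, 3.3, 3.1, 3.5]
b = b_value_ml(mags, mc=3.0)
```

Once a and b are fixed, the recurrence period for events above a magnitude threshold follows from the Gutenberg-Richter rate, as in the abstracts above.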
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
Recurrent slow slip event likely hastened by the 2011 Tohoku earthquake
Hirose, Hitoshi; Kimura, Hisanori; Enescu, Bogdan; Aoi, Shin
2012-01-01
Slow slip events (SSEs) are a mode of fault deformation distinct from the fast faulting of regular earthquakes. Such transient episodes have been observed at plate boundaries in a number of subduction zones around the globe. The SSEs near the Boso Peninsula, central Japan, are among the best documented SSEs, with the longest repeating history, of almost 30 y, and have a recurrence interval of 5 to 7 y. A remarkable characteristic of the slow slip episodes is the accompanying earthquake swarm activity. Our stable, long-term seismic observations enable us to detect SSEs using the recorded earthquake catalog, by considering an earthquake swarm as a proxy for a slow slip episode. Six recurrent episodes are identified in this way since 1982. The average duration of the SSE interoccurrence interval is 68 mo; however, there are significant fluctuations from this mean. While a regular cycle can be explained using a simple physical model, the mechanisms that are responsible for the observed fluctuations are poorly known. Here we show that the latest SSE in the Boso Peninsula was likely hastened by the stress transfer from the March 11, 2011 great Tohoku earthquake. Moreover, a similar mechanism accounts for the delay of an SSE in 1990 by a nearby earthquake. The low stress buildups and drops during the SSE cycle can explain the strong sensitivity of these SSEs to stress transfer from external sources. PMID:22949688
Wrightwood and the earthquake cycle: What a long recurrence record tells us about how faults work
Weldon, R.; Scharer, K.; Fumal, T.; Biasi, G.
2004-01-01
The concept of the earthquake cycle is so well established that one often hears statements in the popular media like, "the Big One is overdue" and "the longer it waits, the bigger it will be." Surprisingly, data to critically test the variability in recurrence intervals, rupture displacements, and relationships between the two are almost nonexistent. To generate a long series of earthquake intervals and offsets, we have conducted paleoseismic investigations across the San Andreas fault near the town of Wrightwood, California, excavating 45 trenches over 18 years, and can now provide some answers to basic questions about recurrence behavior of large earthquakes. To date, we have characterized at least 30 prehistoric earthquakes in a 6000-yr-long record, complete for the past 1500 yr and for the interval 3000-1500 B.C. For the past 1500 yr, the mean recurrence interval is 105 yr (31-165 yr for individual intervals) and the mean slip is 3.2 m (0.7-7 m per event). The series is slightly more ordered than random and has a notable cluster of events, during which strain was released at 3 times the long-term average rate. Slip associated with an earthquake is not well predicted by the interval preceding it, and only the largest two earthquakes appear to affect the time interval to the next earthquake. Generally, short intervals tend to coincide with large displacements and long intervals with small displacements. The most significant correlation we find is that earthquakes are more frequent following periods of net strain accumulation spanning multiple seismic cycles. The extent of paleoearthquake ruptures may be inferred by correlating event ages between different sites along the San Andreas fault. Wrightwood and other nearby sites experience rupture that could be attributed to overlap of relatively independent segments that each behave in a more regular manner. However, the data are equally consistent with a model in which the irregular behavior seen at Wrightwood
Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas fault.
Shelly, David R
2010-06-11
Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between approximately 3 and approximately 6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes.
Howle, J.; Bawden, G. W.; Schweickert, R. A.; Hunter, L. E.; Rose, R.
2012-12-01
Utilizing high-resolution bare-earth LiDAR topography, field observations, and earlier results of Howle et al. (2012), we estimate latest Pleistocene/Holocene earthquake-recurrence intervals, propose scenarios for earthquake-rupture segmentation, and estimate potential earthquake moment magnitudes for the Tahoe-Sierra frontal fault zone (TSFFZ), west of Lake Tahoe, California. We have developed a new technique to estimate the vertical separation for the most recent and the previous ground-rupturing earthquakes at five sites along the Echo Peak and Mt. Tallac segments of the TSFFZ. At these sites are fault scarps with two bevels separated by an inflection point (compound fault scarps), indicating that the cumulative vertical separation (VS) across the scarp resulted from two events. This technique, modified from the modeling methods of Howle et al. (2012), uses the far-field plunge of the best-fit footwall vector and the fault-scarp morphology from high-resolution LiDAR profiles to estimate the per-event VS. From these data, we conclude that the adjacent and overlapping Echo Peak and Mt. Tallac segments have ruptured coseismically twice during the Holocene. The right-stepping, en echelon range-front segments of the TSFFZ show progressively greater VS rates and shorter earthquake-recurrence intervals from southeast to northwest. Our preliminary estimates suggest latest Pleistocene/Holocene earthquake-recurrence intervals of 4.8±0.9×10³ years for a coseismic rupture of the Echo Peak and Mt. Tallac segments, located at the southeastern end of the TSFFZ. For the Rubicon Peak segment, northwest of the Echo Peak and Mt. Tallac segments, our preliminary estimate of the maximum earthquake-recurrence interval is 2.8±1.0×10³ years, based on data from two sites. The correspondence between high VS rates and short recurrence intervals suggests that earthquake sequences along the TSFFZ may initiate in the northwest part of the zone and then occur to the southeast with a lower
Zoeller, G.
2017-12-01
Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events cause additional problems. Given these shortcomings, estimates of long-term recurrence intervals are usually unstable unless additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences with high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.
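The Brownian Passage Time distribution invoked here is the inverse Gaussian, parameterized by a mean recurrence interval T and an aperiodicity alpha (the CV of the intervals). A minimal sketch of its density with an illustrative alpha (the abstract's mapping from b-value to aperiodicity is not reproduced here), numerically checking normalization and mean:

```python
import math

def bpt_pdf(t, T, alpha):
    """Brownian Passage Time (inverse Gaussian) density with mean
    recurrence T and aperiodicity alpha (the CV of the intervals)."""
    if t <= 0:
        return 0.0
    return math.sqrt(T / (2 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - T)**2 / (2 * alpha**2 * T * t))

# Illustrative parameters only, not values from the study.
T, alpha, dt = 150.0, 0.5, 0.05
grid = [i * dt for i in range(1, 60001)]          # 0 < t <= 3000 yr
total = sum(bpt_pdf(t, T, alpha) * dt for t in grid)
mean = sum(t * bpt_pdf(t, T, alpha) * dt for t in grid)
print(f"normalization = {total:.3f}, mean = {mean:.1f} yr")
```

The numerical integral recovers a unit mass and a mean of T, confirming the parameterization; a smaller alpha concentrates the density around T, which is how a tighter b-value constraint stabilizes the recurrence estimate.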
Modeling, Forecasting and Mitigating Extreme Earthquakes
Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.
2012-12-01
Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude, rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow the study of extreme events and of the influence of fault-network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for mitigating earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).
Sanders, C O
1993-05-14
Two lines of evidence suggest that large earthquakes that occur on either the San Jacinto fault zone (SJFZ) or the San Andreas fault zone (SAFZ) may be triggered by large earthquakes that occur on the other. First, the great 1857 Fort Tejon earthquake in the SAFZ seems to have triggered a progressive sequence of earthquakes in the SJFZ. These earthquakes occurred at times and locations that are consistent with triggering by a strain pulse that propagated southeastward at a rate of 1.7 kilometers per year along the SJFZ after the 1857 earthquake. Second, the similarity in average recurrence intervals in the SJFZ (about 150 years) and in the Mojave segment of the SAFZ (132 years) suggests that large earthquakes in the northern SJFZ may stimulate the relatively frequent major earthquakes on the Mojave segment. Analysis of historic earthquake occurrence in the SJFZ suggests little likelihood of extended quiescence between earthquake sequences.
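The strain-pulse hypothesis implies a simple arrival-time calculation: an event a distance d along the SJFZ from the 1857 rupture would be triggered near year 1857 + d/1.7. A sketch with illustrative distances (not the paper's actual epicenter positions):

```python
def predicted_trigger_year(distance_km, origin_year=1857, speed_km_per_yr=1.7):
    """Arrival year of a strain pulse propagating along the San Jacinto
    fault zone from the 1857 rupture at the abstract's 1.7 km/yr rate."""
    return origin_year + distance_km / speed_km_per_yr

# Illustrative distances only (not the paper's actual event locations):
for d in (60, 120, 180):
    print(f"{d:3d} km -> year {predicted_trigger_year(d):.0f}")
```

Under this model, a propagation delay of roughly 60 years per 100 km separates successive triggered events along strike.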
GEM - The Global Earthquake Model
Smolka, A.
2009-04-01
Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at-large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a
Prevent recurrence of nuclear disaster (3). Agenda on nuclear safety from earthquake engineering
International Nuclear Information System (INIS)
Kameda, Hiroyuki; Takada, Tsuyoshi; Ebisawa, Katsumi; Nakamura, Susumu
2012-01-01
Based on the results of the activities of the committee on seismic safety of nuclear power plants (NPPs) of the Japan Association for Earthquake Engineering, which began its work after the Chuetsu-oki earthquake and then experienced the Great East Japan Earthquake (in close collaboration with a committee of the Atomic Energy Society of Japan that started its activities at the same time), and taking account of the further development of the concept, an agenda on nuclear safety was proposed from the standpoint of earthquake engineering. To prevent the recurrence of nuclear disaster, the individual technical issues of earthquake engineering and the comprehensive issues of integration technology, multidisciplinary collaboration, and the establishment of technology governance based on them are of prime importance. This article describes the important problems to be solved: (1) technical issues and the mission of seismic safety of NPPs, (2) decision making based on risk assessment as the basis of technology governance, (3) a framework of risk, design and regulation as the framework of the required technology governance, (4) technical issues of earthquake engineering for nuclear safety, (5) the role of earthquake engineering in nuclear power risk communication, and (6) the importance of multidisciplinary collaboration. The responsibility of engineering lies in establishing technology governance, cultivating individual and integration technologies, and maintaining social communication. (T. Tanaka)
Geological evidence of recurrent great Kanto earthquakes at the Miura Peninsula, Japan
Shimazaki, K.; Kim, H. Y.; Chiba, T.; Satake, K.
2011-12-01
The Tokyo metropolitan area's well-documented earthquake history is dominated by the 1703 and 1923 great Kanto earthquakes produced by slip on the boundary between the subducting Philippine Sea plate and the overlying plate. Both earthquakes caused ~1.5 m of uplift at the Miura Peninsula directly above the inferred fault rupture, and both were followed by tsunamis with heights of ~5 m. We examined cores ~2 m long from 8 tidal flat sites at the head of a small bay on the peninsula. The cores penetrated two to four layers of shelly gravel, as much as 0.5 m thick, with abundant shell fragments and mud clasts. The presence of gravel indicates strong tractive currents. Muddy bay deposits that bound the gravel layers show vertical changes in grain size and diatom assemblages consistent with abrupt shoaling at the times of the currents. The changes may further suggest gradual deepening of the bay during the intervals between the strong currents. We infer, based on 137Cs, 14C, and 210Pb dating, that the top two shelly gravel layers represent tsunamis associated with the 1703 and 1923 great Kanto earthquakes, and that the third layer was deposited by a tsunami during an earlier earthquake. The age range of this layer, AD 1060-1400, includes the time of an earthquake that occurred in 1293 according to a historical document. If so, the recurrence interval before the 1703 earthquake was almost twice as long as the interval between the 1703 and 1923 earthquakes.
Directory of Open Access Journals (Sweden)
Evgeny G. Bugaev
2011-01-01
Geological, geophysical and seismogeological studies are now conducted in more detail and thus allow seismic sources to be determined with higher accuracy, from the first meters to the first dozens of meters [Waldhauser, Schaff, 2008]. It is now possible to consider the uncertainty ellipses of earthquake hypocenters recorded in the updated Earthquake Catalogue as surfaces of earthquake focus generators. In our article, it is accepted that the maximum horizontal size of an uncertainty ellipse corresponds to the area of a focus generator, and seismic events are thus classified into two groups: earthquakes with nonstiff and stiff foci. The criteria for this classification are the two limits of elastic strain and brittle strain under uniaxial (3·10⁻⁵) or omnidirectional (10⁻⁶) compression. The criteria are established from analyses of the parameters of seismic dislocations and earthquake foci, with regard to studies of the surface and deformation parameters of fault zones. We recommend applying the uniaxial-compression criterion to zones of interaction between tectonic plates, and the omnidirectional-compression criterion to low-activity (intraplate) areas. Sample cases demonstrate the use of data sets on nonstiff and stiff foci for the separate evaluation of magnitude recurrence curves, analyses of structured and dissipated seismicity, and a review of the physical nature of the nonlinearity of recurrence curves and of the conditions of preparation of strong earthquakes. Changes in the parameters of the recurrence curves with changes in the data-collection area are considered, as are changes in these parameters before and after the main shock of the major Japan earthquake of 11 March 2011. It is emphasized that it is important to conduct even more detailed geological and geophysical studies and to improve the precision and sensitivity of local seismological monitoring networks
Parallelization of the Coupled Earthquake Model
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, no system for predicting tsunamis over the Internet had existed. This new code directly couples the earthquake model and the ocean model on parallel computers, improving simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
Biasi, G. P.; Clark, K.; Berryman, K. R.; Cochran, U. A.; Prior, C.
2010-12-01
-correlate sections at the site. Within a series of dates from a section, the ordering, with the intrinsic precision of the dates, indicates an uncertainty at event horizons on the order of 50 years, while the peat-to-silt transitions indicating an earthquake are separated by several times this amount. The effect is to create a stair-stepping date sequence that often allows us to link sections and improve dating resolution in both. The combined section provides clear evidence for at least 18 earthquake-induced cycles, giving a simple average recurrence of about 390 years. Internal evidence and close examination of the date sequences give preliminary indications that as many as 22 earthquakes could be represented at Hokuri Creek, implying a recurrence interval of ~320 years. Both sequences indicate a middle interval, from 3800 to 1000 BC, in which recurrence intervals are resolvably longer than average. Variability in recurrence is relatively small: few intervals exceed 1.5× the average. This indicates that large earthquakes on the Alpine Fault of South Island, New Zealand are best fit by a time-predictable model.
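For reference, the time-predictable model named here holds that the interval after an event is the time needed to re-accumulate that event's slip at the loading rate, whereas the slip-predictable model holds that the next event releases the slip accumulated over the preceding interval. A sketch with an assumed slip rate (not a value from the abstract):

```python
def time_predictable_interval(slip_m, slip_rate_mm_yr):
    """Time-predictable model: the interval AFTER an event is the time
    needed to re-accumulate that event's slip at the loading rate."""
    return slip_m * 1000.0 / slip_rate_mm_yr

def slip_predictable_slip(interval_yr, slip_rate_mm_yr):
    """Slip-predictable model: the NEXT event releases the slip
    accumulated over the preceding interval."""
    return interval_yr * slip_rate_mm_yr / 1000.0

# Illustrative: an 8 m event with an assumed 25 mm/yr loading rate
# implies a ~320 yr wait under the time-predictable model.
print(time_predictable_interval(8.0, 25.0))  # 320.0
```

The two models make different forecasts: the first predicts when, given how much slipped last time; the second predicts how much, given how long it has been.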
Patton, J. R.; Leroy, T. H.
2009-12-01
Earthquake and tsunami hazard for northwestern California and southern Oregon is predominantly based on estimates of recurrence for earthquakes on the Cascadia subduction zone and upper-plate thrust faults, each with unique deformation and recurrence histories. Coastal northern California is uniquely located to enable us to distinguish these different sources of seismic hazard, as the accretionary prism extends on land in this region. The region experiences ground deformation from rupture of upper-plate thrust faults like the Little Salmon fault. Most of the region is thought to be above the locked zone of the megathrust, so it is subject to vertical deformation during the earthquake cycle. Secondary evidence of earthquake history is found here in the form of marsh soils that coseismically subside and commonly are overlain by estuarine mud and, rarely, tsunami sand. The source of the subsidence in this region is not currently known; it may be due to upper-plate rupture, megathrust rupture, or a combination of the two. Given that many earlier investigations utilized bulk peat for 14C age determinations and were largely reconnaissance work, these studies need to be reevaluated. Recurrence interval estimates are inconsistent between terrestrial (~500 years) and marine (~220 years) data sets. This inconsistency may be due to 1) different sources of archival bias in marine and terrestrial data sets and/or 2) different sources of deformation. Factors controlling the successful archiving of paleoseismic data are considered as this relates to geologic setting and how that might change through time. We compile, evaluate, and rank existing paleoseismic data in order to prioritize future paleoseismic investigations. 14C ages are recalibrated and quality assessments are made for each age determination. We then evaluate geologic setting and prioritize important research locations and goals based on these existing data. Terrestrial core
Frankel, Arthur D.
2011-01-01
This report summarizes a meeting of geologists, marine sedimentologists, geophysicists, and seismologists that was held on November 18–19, 2010 at Oregon State University in Corvallis, Oregon. The overall goal of the meeting was to evaluate observations of turbidite deposits to provide constraints on the recurrence time and rupture extent of great Cascadia subduction zone (CSZ) earthquakes for the next update of the U.S. national seismic hazard maps (NSHM). The meeting was convened at Oregon State University because this is the major center for collecting and evaluating turbidite evidence of great Cascadia earthquakes by Chris Goldfinger and his colleagues. We especially wanted the participants to see some of the numerous deep sea cores this group has collected that contain the turbidite deposits. Great earthquakes on the CSZ pose a major tsunami, ground-shaking, and ground-failure hazard to the Pacific Northwest. Figure 1 shows a map of the Pacific Northwest with a model for the rupture zone of a moment magnitude Mw 9.0 earthquake on the CSZ and the ground shaking intensity (in ShakeMap format) expected from such an earthquake, based on empirical ground-motion prediction equations. The damaging effects of such an earthquake would occur over a wide swath of the Pacific Northwest and an accompanying tsunami would likely cause devastation along the Pacific Northwest coast and possibly cause damage and loss of life in other areas of the Pacific. A magnitude 8 earthquake on the CSZ would cause damaging ground shaking and ground failure over a substantial area and could also generate a destructive tsunami. The recent tragic occurrence of the 2011 Mw 9.0 Tohoku-Oki, Japan, earthquake highlights the importance of having accurate estimates of the recurrence times and magnitudes of great earthquakes on subduction zones. For the U.S. national seismic hazard maps, estimating the hazard from the Cascadia subduction zone has been based on coastal paleoseismic evidence of great
Hindle, D.; Mackey, K.
2011-02-01
Recorded seismicity from the northwestern Okhotsk plate, northeast Asia, is currently insufficient to account for the predicted slip rates along its boundaries due to plate tectonics. However, the magnitude-frequency relationship for earthquakes from the region suggests that larger earthquakes are possible in the future and that events of ~Mw 7.5, which should occur every ~100-350 years, would account for almost all the slip of the plate along its boundaries due to Eurasia-North America convergence. We use models for seismic slip distribution along the bounding faults of Okhotsk to conclude that relatively little aseismic strain release is occurring and that larger future earthquakes are likely in the region. Our models broadly support the idea of a single Okhotsk plate, with the large majority of tectonic strain released along its boundaries.
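The slip-budget reasoning here can be checked with the standard moment relations M0 = mu*A*D and Mw = (2/3)(log10 M0 - 9.1). A sketch with illustrative fault dimensions and loading rate (assumptions, not the paper's values):

```python
MU = 3.0e10  # shear modulus (Pa), a standard crustal value

def slip_per_event(mw, length_km, width_km):
    """Average slip D = M0 / (MU * A) for a moment-magnitude mw event
    on a rectangular fault patch (Hanks-Kanamori moment in N*m)."""
    m0 = 10 ** (1.5 * mw + 9.1)          # seismic moment, N*m
    area = length_km * width_km * 1.0e6  # fault area, m^2
    return m0 / (MU * area)

# Assumed, illustrative numbers: a Mw 7.5 rupture 100 km x 20 km,
# loaded at 15 mm/yr (not values taken from the paper).
d = slip_per_event(7.5, 100.0, 20.0)
print(f"slip ~ {d:.2f} m, recurrence ~ {d / 0.015:.0f} yr")
```

With these assumed numbers the implied recurrence falls inside the ~100-350 yr window quoted above, illustrating how a magnitude-frequency extrapolation can close a plate-boundary slip budget.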
Earthquake source model using strong motion displacement
Indian Academy of Sciences (India)
The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...
Institute of Scientific and Technical Information of China (English)
ZHANG Peizhen; MIN Wei; DENG Qidong; MAO Fengying
2005-01-01
The Haiyuan fault is a major seismogenic fault in north-central China where the 1920 Haiyuan earthquake of magnitude 8.5 occurred, resulting in more than 220,000 deaths. The fault zone can be divided into three segments based on their geometric patterns and associated geomorphology. To study the paleoseismology and recurrence history of devastating earthquakes along the fault, we dug 17 trenches along different segments of the fault zone. Although only 10 of them allow the paleoearthquake events to be dated, together with the 8 trenches dug previously they still provide adequate information to capture the major paleoearthquakes along the fault in the geological past. We discovered 3 events along the eastern segment during the past 14,000 a, 7 events along the middle segment during the past 9000 a, and 6 events along the western segment during the past 10,000 a. These events clearly depict two temporal clusters. The first cluster occurred from approximately 4600 to 6400 a, and the second from approximately 1000 to 2800 a. Each cluster lasted about 2000 a, and the time period between the two clusters is also about 2000 a. Based on the fault geometry, segmentation pattern, and paleoearthquake events along the Haiyuan fault, we can identify three scales of earthquake rupture: rupture of one segment, cascade rupture of two segments, and cascade rupture of the entire fault (three segments). Interactions of slip patches on the fault surface may cause rupture of one patch or of two to three patches, forming the complex patterns of cascade rupture events.
An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...
Personius, Stephen; Crone, Anthony J.; Burns, Patricia A.; Reitman, Nadine G.
2017-01-01
We conducted a trench investigation and analyzed pre‐ and postearthquake topography to determine the timing and size of prehistoric surface ruptures on the Susitna Glacier fault (SGF), the thrust fault that initiated the 2002 Mw 7.9 Denali fault earthquake sequence in central Alaska. In two of our three hand‐excavated trenches, we found clear evidence for a single pre‐2002 earthquake (penultimate earthquake [PE]) and determined an age of 2210±420 cal. B.P. (2σ) for this event. We used structure‐from‐motion software to create a pre‐2002‐earthquake digital surface model (DSM) from 1:62,800‐scale aerial photography taken in 1980 and compared this DSM with postearthquake 5‐m/pixel Interferometric Synthetic Aperture Radar topography taken in 2010. Topographic profiles measured from the pre‐earthquake DSM show features that we interpret as fault and fold scarps. These landforms were about the same size as those formed in 2002, so we infer that the PE was similar in size to the initial (Mw 7.2) subevent of the 2002 sequence. A recurrence interval of 2270 yr and dip slip of ∼4.8 m yield a single‐interval slip rate of ∼1.8 mm/yr. The lack of evidence for pre‐PE deformation indicates probable episodic (clustering) behavior on the SGF that may be related to strain migration among other similarly oriented thrust faults that together accommodate shortening south of the Denali fault. We suspect that slip‐partitioned thrust‐triggered earthquakes may be a common occurrence on the Denali fault system, but documenting the frequency of such events will be very difficult, given the lack of long‐term paleoseismic records, the number of potential thrust‐earthquake sources, and the pervasive glacial erosion in the region.
Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F
2011-10-04
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region, in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
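RELM-style forecast evaluation treats each cell's expected number of earthquakes as a Poisson rate and scores a forecast by its joint log-likelihood given the observed counts. A minimal sketch on a made-up four-cell grid (not the actual RELM data):

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed cell counts given a
    gridded forecast of expected earthquake numbers (higher is better,
    as in RELM-style likelihood tests)."""
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll

# Toy 4-cell grid with made-up rates and counts:
rates = [0.5, 1.0, 0.1, 2.0]
obs = [0, 1, 0, 3]
print(round(poisson_log_likelihood(rates, obs), 3))
```

Comparing this score across competing forecasts, on the same cells and the same observed catalog, is the essence of ranking which forecast was most "successful".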
Relaxation creep model of impending earthquake
Energy Technology Data Exchange (ETDEWEB)
Morgounov, V. A. [Russian Academy of Sciences, Institute of Physics of the Earth, Moscow (Russian Federation)
2001-04-01
An alternative view of the current status and prospects of seismic prediction studies is discussed. Within the uncertainty relation between the cognoscibility and the unpredictability of earthquakes, priority is given to work on short-term earthquake prediction, which has the advantage that the final stage of earthquake nucleation is characterized by substantial activation of the process: its strain rate increases by orders of magnitude and the signal-to-noise ratio rises considerably. Based on the phenomenon of creep under stress-relaxation conditions, a model is proposed to explain the different signatures of precursors of impending tectonic earthquakes. The onset of tertiary creep appears to correspond to the onset of instability, and the system inevitably fails unless it is unloaded. At this stage the process becomes largely self-regulating and acquires irreversibility, one of the important components of prediction reliability. In situ data suggest that it is possible in principle to diagnose the preparation process through ground measurements of acoustic and electromagnetic emission; results for rocks held under constant strain in a state of self-relaxing stress up to the moment of fracture are discussed in this context. It was found that electromagnetic emission precedes, but does not accompany, the phase of macrocrack development.
Human casualties in earthquakes: Modelling and mitigation
Spence, R.J.S.; So, E.K.M.
2011-01-01
Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.
Foreshock and aftershocks in simple earthquake models.
Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R
2015-02-27
Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
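A stripped-down version of the model class described, an Olami-Feder-Christensen-style lattice seeded with a fixed fraction of stronger asperity cells, can be sketched as follows. This toy uses nearest-neighbor interactions only (the paper uses long-range interactions) and is an illustration of the mechanism, not the authors' code:

```python
import random

def ofc_avalanche_sizes(n=20, steps=500, alpha=0.2, asperity_frac=0.05,
                        thresh=1.0, asperity_thresh=3.0, seed=1):
    """Minimal OFC-style cellular automaton: sites load uniformly, fail
    at a threshold, and pass a fraction alpha of their stress to each
    neighbor.  A random asperity_frac of sites gets a higher threshold,
    mimicking the stronger asperity cells in the abstract."""
    random.seed(seed)
    stress = [[random.uniform(0.0, thresh) for _ in range(n)] for _ in range(n)]
    strong = [[random.random() < asperity_frac for _ in range(n)] for _ in range(n)]

    def limit(i, j):
        return asperity_thresh if strong[i][j] else thresh

    sizes = []
    for _ in range(steps):
        # Uniform tectonic loading: raise every site just past the point
        # where the weakest site reaches its failure threshold.
        gap = min(limit(i, j) - stress[i][j]
                  for i in range(n) for j in range(n)) + 1e-9
        for i in range(n):
            for j in range(n):
                stress[i][j] += gap
        # Relaxation cascade: alpha < 0.25 keeps the model dissipative,
        # so every avalanche terminates.
        size = 0
        queue = [(i, j) for i in range(n) for j in range(n)
                 if stress[i][j] >= limit(i, j)]
        while queue:
            i, j = queue.pop()
            if stress[i][j] < limit(i, j):
                continue  # stale queue entry, already relaxed
            s, stress[i][j] = stress[i][j], 0.0
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    stress[ni][nj] += alpha * s
                    if stress[ni][nj] >= limit(ni, nj):
                        queue.append((ni, nj))
        sizes.append(size)
    return sizes

sizes = ofc_avalanche_sizes()
print(f"events: {len(sizes)}, largest avalanche: {max(sizes)}")
```

When an asperity cell finally fails it dumps several units of stress at once, which is the mechanism by which heterogeneity produces occasional large, clustered avalanches on top of the background scaling.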
Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.
2015-12-01
Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of the earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined to a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment of intermediate rheology between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of stress loading concentration rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed with deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude at such close distance to the rupture, as attested by strong-motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity
Zielke, Olaf; Klinger, Yann; Arrowsmith, J. Ramon
2015-01-01
Understanding earthquake (EQ) recurrence relies on information about the timing and size of past EQ ruptures along a given fault. Knowledge of a fault's rupture history provides valuable information on its potential future behavior, enabling seismic hazard estimates and loss mitigation. Stratigraphic and geomorphic evidence of faulting is used to constrain the recurrence of surface-rupturing EQs. Analysis of the latter data sets culminated during the mid-1980s in the formulation of now-classical EQ recurrence models that are routinely used to assess seismic hazard. Within the last decade, Light Detection and Ranging (lidar) surveying technology and other high-resolution data sets became increasingly available to tectono-geomorphic studies, promising to contribute to better-informed models of EQ recurrence and slip-accumulation patterns. After reviewing motivation and background, we outline requirements to successfully reconstruct a fault's offset accumulation pattern from geomorphic evidence. We address sources of uncertainty affecting offset measurement and advocate approaches to minimize them. A number of recent studies focus on single-EQ slip distributions and along-fault slip accumulation patterns. We put them in context with paleoseismic studies along the respective faults by comparing coefficients of variation (CV) for EQ inter-event time and slip-per-event and find that (a) single-event offsets vary over a wide range of length scales and the sources of offset variability differ with length scale, (b) at fault-segment length scales, single-event offsets are essentially constant, (c) along-fault offset accumulation as resolved in the geomorphic record is dominated by essentially same-size, large offset increments, and (d) there is generally no one-to-one correlation between the offset accumulation pattern constrained in the geomorphic record and EQ occurrence as identified in the stratigraphic record, revealing the higher resolution and preservation potential of
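The coefficient of variation (CV) used in the comparison above is simply the standard deviation of a series divided by its mean. A minimal sketch with hypothetical inter-event times (the numbers below are illustrative, not from the study): a CV well below 1 indicates quasi-periodic recurrence, while a CV near 1 is consistent with a random (Poisson) process.

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean. CV << 1 suggests
    quasi-periodic recurrence; CV ~ 1 is Poisson-like."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical inter-event times (years) for two fault segments
quasi_periodic = [180, 200, 190, 210, 195]
irregular = [44, 310, 120, 560, 95]

print(round(coefficient_of_variation(quasi_periodic), 2))  # → 0.06
print(round(coefficient_of_variation(irregular), 2))       # → 0.94
```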
The failure of earthquake failure models
Gomberg, J.
2001-01-01
In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate-state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations, such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps the assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.
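The defining feature of this model class, that a failure variable grows at an accelerating rate under constant loading until a critical threshold is crossed, can be sketched numerically. The linear feedback law and all parameter values below are illustrative assumptions, not any specific rate-state friction or crack-growth law; the sketch also shows the static-triggering behavior the abstract mentions, where a step perturbation advances the failure time.

```python
def time_to_failure(x0=1e-3, load_rate=1.0, gain=50.0, rate_crit=1e3, dt=1e-4):
    """Integrate dx/dt = load_rate + gain * x (growth rate accelerates
    as x grows) until the rate exceeds rate_crit, i.e., failure."""
    x, t = x0, 0.0
    while load_rate + gain * x < rate_crit:
        x += (load_rate + gain * x) * dt
        t += dt
    return t

t_fail = time_to_failure()
# A step increase in x (a static stress perturbation) advances failure:
t_perturbed = time_to_failure(x0=1e-2)
print(t_fail, t_perturbed)
```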
Modeling fast and slow earthquakes at various scales.
Ide, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.
Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.
2012-04-01
Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, needed to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could occur in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The ground-motion prediction relationship of Benouar (1994) proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquakes in Algeria: the M6.8 2003 Boumerdes, the M7.3 1980 El-Asnam and the M7.3 1856 Djidjelli events. The calculated return periods of the losses for client market portfolio align with the
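The damage-grade cost factors quoted above translate into a mean damage ratio once a damage-grade distribution is assumed. A minimal sketch: the cost factors are those stated in the abstract, while the damage-grade probabilities are hypothetical placeholders, not values from the model.

```python
# EMS-98 damage grades with the rebuilding-cost factors quoted above
COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def mean_damage_ratio(grade_probs):
    """Expected repair cost as a fraction of replacement value,
    given probabilities of reaching each damage grade."""
    return sum(COST_FACTOR[g] * p for g, p in grade_probs.items())

# Hypothetical distribution for one building class at a given intensity
# (remaining probability mass corresponds to no damage)
probs = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.12, 5: 0.05}
loss_ratio = mean_damage_ratio(probs)
print(round(loss_ratio, 2))  # → 0.29
```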
Seismic quiescence in a frictional earthquake model
Braun, Oleg M.; Peyrard, Michel
2018-04-01
We investigate the origin of seismic quiescence with a generalized version of the Burridge-Knopoff model for earthquakes and show that it can be generated by a multipeaked probability distribution of the thresholds at which contacts break. Such a distribution is not assumed a priori but naturally results from the aging of the contacts. We show that the model can exhibit quiescence as well as enhanced foreshock activity, depending on parameter values. This provides a generic understanding of seismic quiescence, one that encompasses earlier specific explanations and could provide a pathway toward a classification of faults.
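The mechanism can be illustrated with a minimal quasi-static slider-block automaton in the Burridge-Knopoff spirit: uniform driving, nearest-neighbour stress transfer, and breaking thresholds redrawn from a two-peaked distribution as a crude stand-in for contact aging. This sketch is far simpler than the generalized model of the paper, and every parameter value is an illustrative assumption.

```python
import random

def bk_automaton(n_blocks=200, n_events=500, seed=1):
    """Quasi-static slider-block chain; returns avalanche (event) sizes."""
    random.seed(seed)
    def draw():
        # Two-peaked threshold distribution (peaks near 1.0 and 2.0)
        return random.choice((1.0, 2.0)) + 0.1 * random.random()
    stress = [random.random() for _ in range(n_blocks)]
    thresh = [draw() for _ in range(n_blocks)]
    sizes = []
    for _ in range(n_events):
        # Drive: raise all stresses until the weakest block reaches threshold
        k = min(range(n_blocks), key=lambda i: thresh[i] - stress[i])
        gap = thresh[k] - stress[k]
        stress = [s + gap for s in stress]
        stress[k] = thresh[k]  # guard against floating-point rounding
        active = [i for i in range(n_blocks) if stress[i] >= thresh[i]]
        size = 0
        while active:
            i = active.pop()
            if stress[i] < thresh[i]:
                continue
            size += 1
            released = stress[i]
            stress[i] = 0.0
            thresh[i] = draw()  # contact renewed with a new threshold
            # Pass 40% of the released stress to each neighbour (20% lost)
            for j in (i - 1, i + 1):
                if 0 <= j < n_blocks:
                    stress[j] += 0.4 * released
                    if stress[j] >= thresh[j]:
                        active.append(j)
        sizes.append(size)
    return sizes

sizes = bk_automaton()
print(max(sizes), sum(sizes) / len(sizes))
```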
Wechsler, N.; Rockwell, T. K.; Klinger, Y.; Agnon, A.; Marco, S.
2012-12-01
Models used to forecast future seismicity make fundamental assumptions about the behavior of faults and fault systems in the long term, but in many cases this long-term behavior is inferred from short-term and perhaps non-representative observations. The question arises: how long a record is long enough to represent actual fault behavior, both in terms of recurrence of earthquakes and of moment release (i.e., slip rate)? We test earthquake recurrence and slip models via high-resolution three-dimensional trenching of the Beteiha (Bet-Zayda) site on the Dead Sea Transform (DST) in northern Israel. We extend the earthquake history of this simple plate-boundary fault to establish the slip rate for the past 3-4 kyr, to determine the amount of slip per event, and to study the fundamental behavior, thereby testing competing rupture models (characteristic, slip-patch, slip-loading, and Gutenberg-Richter type distribution). To this end we opened more than 900 m of trenches, mapped 8 buried channels, and dated more than 80 radiocarbon samples. By mapping buried channels, offset by the DST, on both sides of the fault, we obtained an estimate of displacement for each. Coupled with fault-crossing trenches to determine event history, we construct an earthquake and slip history for the fault for the past 2 kyr. We observe evidence for a total of 9-10 surface-rupturing earthquakes with varying offset amounts: 6-7 events occurred in the 1st millennium CE, compared to just 2-3 in the 2nd millennium CE. From our observations it is clear that the fault is not behaving in a periodic fashion. A 4 kyr old buried channel yields a slip rate of 3.5-4 mm/yr, consistent with GPS rates for this segment. Yet in spite of the apparent agreement between GPS, Pleistocene-to-present slip rate, and the lifetime rate of the DST, the past 800-1000 year period appears deficient in strain release. Thus, in terms of moment release, most of the fault has remained locked and is accumulating elastic strain. In contrast, the
Neural Machine Translation with Recurrent Attention Modeling
Yang, Zichao; Hu, Zhiting; Deng, Yuntian; Dyer, Chris; Smola, Alex
2016-01-01
Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relat...
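The idea of keeping one recurrent state per source word, updated from that word's past attention weights, can be sketched in a few lines of NumPy. This is a simplified caricature with random weights and a plain additive update, not the gated architecture of the paper; all dimensions and parameter matrices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
src_len, d = 5, 8                                # source length, hidden size
enc = rng.standard_normal((src_len, d))          # fixed encoder states
W_h = rng.standard_normal((d, d))                # score projection
W_a = rng.standard_normal((d, 1))                # score output vector
U = rng.standard_normal((d, 1))                  # maps attention into memory

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One recurrent memory row per source word, updated each decoding step
# from that word's attention weight (the core idea in the abstract).
mem = np.zeros((src_len, d))
for t in range(3):                               # three decoding steps
    scores = np.tanh(enc @ W_h + mem) @ W_a      # (src_len, 1)
    alpha = softmax(scores.ravel())              # attention weights, sum to 1
    context = alpha @ enc                        # context vector for step t
    mem = np.tanh(mem + alpha[:, None] * U.T)    # per-word recurrent update
    print(t, np.round(alpha, 3))
```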
Sun, Y.; Luo, G.
2017-12-01
Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and of the viscosity of the middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty
Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon
2006-01-01
Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WGCEP). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined by the WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions
Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike
2011-01-01
Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
Modelling end-glacial earthquakes at Olkiluoto
International Nuclear Information System (INIS)
Faelth, B.; Hoekmark, H.
2011-02-01
The objective of this study is to obtain estimates of the possible effects that post-glacial seismic events in three verified deformation zones (BFZ100, BFZ021/099 and BFZ214) at the Olkiluoto site may have on nearby fractures, in terms of induced fracture shear displacement. The study is carried out by use of large-scale models analysed dynamically with the three-dimensional distinct element code 3DEC. Earthquakes are simulated in a schematic way; large planar discontinuities representing earthquake faults are surrounded by a number of smaller discontinuities which represent rock fractures in which shear displacements could potentially be induced by the effects of the slipping fault. Initial stresses, based on best estimates of the present-day in situ stresses and on state-of-the-art calculations of glacially induced stresses, are applied. The fault rupture is then initiated at a pre-defined hypocentre and programmed to propagate outward along the fault plane with a specified rupture velocity until it is arrested at the boundary of the prescribed rupture area. Fault geometries, fracture orientations, the in situ stress model and material property parameter values are based on data obtained from the Olkiluoto site investigations. Glacially induced stresses are obtained from state-of-the-art ice-crust/mantle finite element analyses. The response of the surrounding smaller discontinuities, i.e. the induced fracture shear displacement, is the main output from the simulations.
Modelling the elements of country vulnerability to earthquake disasters.
Asef, M R
2008-09-01
Earthquakes have probably been the deadliest form of natural disaster in the past century. The diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at the regional scale. Additionally, the diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often cause an earthquake with given characteristics to have different impacts on the affected region. This research focuses on appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for the identification and relative quantification of the severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of the countries concerned, and successfully explained which countries are more vulnerable to major disasters.
Modelling earth current precursors in earthquake prediction
Directory of Open Access Journals (Sweden)
R. Di Maio
1997-06-01
This paper deals with the theory of earth current precursors of earthquakes. A dilatancy-diffusion-polarization model is proposed to explain the anomalies of the electric potential that are observed on the ground surface prior to some earthquakes. The electric polarization is believed to be an electrokinetic effect due to the invasion of fluids into new pores opened inside a stressed, dilated rock body. The time and space variation of the distribution of the electric potential in a layered earth, as well as in a faulted half-space, is studied in detail. The results show that the surface response depends on the underground conductivity distribution and on the relative disposition of the measuring dipole with respect to the buried bipole source. A field procedure based on an areal layout of the recording sites is proposed, in order to obtain the most complete information on the time and space evolution of the precursory phenomena in any given seismic region.
Strong motion modeling at the Paducah Diffusion Facility for a large New Madrid earthquake
International Nuclear Information System (INIS)
Herrmann, R.B.
1991-01-01
The Paducah Diffusion Facility is within 80 kilometers of the location of the very large New Madrid earthquakes which occurred during the winter of 1811-1812. Because of their size, with a seismic moment of 2.0 × 10^27 dyne-cm or moment magnitude Mw = 7.5, the possible recurrence of these earthquakes is a major element in the assessment of seismic hazard at the facility. Probabilistic hazard analysis can provide uniform hazard response spectra estimates for structure evaluation, but deterministic modeling of such a large earthquake can provide strong constraints on the expected duration of motion. The large earthquake is modeled by specifying the earthquake fault and its orientation with respect to the site, and by specifying the rupture process. Synthetic time histories from each subelement, based on forward modeling of the wavefield, are combined to yield a three-component time history at the site. Various simulations are performed to sufficiently exercise possible spatial and temporal distributions of energy release on the fault. Preliminary results demonstrate the sensitivity of the method to various assumptions, and also indicate strongly that the total duration of ground motion at the site is controlled primarily by the length of the rupture process on the fault.
Singular limit analysis of a model for earthquake faulting
DEFF Research Database (Denmark)
Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall
2017-01-01
In this paper we consider the one-dimensional spring-block model describing earthquake faulting. By using geometric singular perturbation theory and the blow-up method we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from...
About Block Dynamic Model of Earthquake Source.
Gusev, G. A.; Gufeld, I. L.
One may state that little progress has been made in earthquake prediction research. Short-term prediction (on a diurnal time scale, with location also predicted) is what has practical meaning. The failure is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in faults. Geological and geophysical monitoring supports the notion of the geological medium as an open, dissipative block system with limit energy saturation. Variations of the volume stressed state close to critical states are associated with the interaction of an inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, an interaction more pronounced in faults. In the background state, small blocks of the fault medium allow the sliding of large blocks along the faults, but under considerable variations of the ascending gas streams, bound chains of small blocks can form, so that a bound state of large blocks may result (an earthquake source). Using these notions, we recently proposed a dynamical earthquake source model based on a generalized chain of nonlinearly coupled oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns the chain's inhomogeneity and various external actions, imitating physical processes in the real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered; it permitted study of the FPU recurrence (return to the initial state), and probabilistic properties of the quasi-periodic motion were found. The problem of chain decay due to nonlinearity and external perturbations was posed, and the thresholds and lifetime of the chain were studied; large fluctuations of the lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strongly dissipative case, when oscillatory motion is suppressed, specific effects are discovered. For noise action and constantly arising
Vaca, Sandro; Vallée, Martin; Nocquet, Jean-Mathieu; Battaglia, Jean; Régnier, Marc
2018-01-01
The northern Ecuador segment of the Nazca/South America subduction zone shows spatially heterogeneous interseismic coupling. Two highly coupled zones (0.4° S-0.35° N and 0.8° N-4.0° N) are separated by a weakly coupled area, hereafter referred to as the Punta Galera-Mompiche Zone (PGMZ). Large interplate earthquakes repeatedly occurred within the coupled zones, in 1958 (Mw 7.7) and 1979 (Mw 8.1) for the northern patch and in 1942 (Mw 7.8) and 2016 (Mw 7.8) for the southern patch, while the whole segment is thought to have ruptured during the 1906 Mw 8.4-8.8 great earthquake. We find that during the last decade the PGMZ has experienced regular and frequent seismic swarms. For the best-documented sequence (December 2013-January 2014), a joint seismological and geodetic analysis reveals a six-week-long Slow Slip Event (SSE) associated with a seismic swarm. During this period, the microseismicity is organized into families of similar earthquakes spatially and temporally correlated with the evolution of the aseismic slip. The moment release (3.4 × 10^18 N·m, Mw 6.3), over a 60 × 40 km area, is considerably larger than the moment released by earthquakes (5.8 × 10^15 N·m, Mw 4.4) during the same time period. In 2007-2008 a similar seismic-aseismic episode occurred, with higher magnitudes for both the seismic and aseismic processes. Cross-correlation analyses of the seismic waveforms over a 15-year-long period further suggest a 2-year repeat time for the seismic swarms, which also implies that SSEs recurrently affect this area. Such SSEs contribute to releasing the accumulated stress, likely explaining why the 2016 Pedernales earthquake did not propagate northward into the PGMZ.
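The moment magnitudes quoted above follow from the standard moment-to-magnitude conversion, Mw = (2/3)(log10 M0 − 9.1) with M0 in N·m. A quick check against the values in the abstract:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Mw = (2/3) * (log10 M0 - 9.1), with M0 in N·m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(round(moment_magnitude(3.4e18), 1))  # → 6.3 (total, mostly aseismic)
print(round(moment_magnitude(5.8e15), 1))  # → 4.4 (seismic part only)
```

Both values reproduce the magnitudes stated in the abstract, confirming that the aseismic slip dwarfs the seismic release by roughly three orders of magnitude in moment.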
Earthquake Source Spectral Study beyond the Omega-Square Model
Uchide, T.; Imanishi, K.
2017-12-01
Earthquake source spectra have been used to characterize earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra of many earthquakes, especially small earthquakes, at once and compare them with each other. A standard model for the source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, the two regimes bordered by a corner frequency. The corner frequency has often been converted to stress drop under the assumption of circular crack models. However, recent studies claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016], thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the area studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectra we now focus on is the falloff rate at high frequencies, which affects seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where shallow crustal seismicity, mainly with normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at short distances from the sources, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovery and confirmation of the deviation from the standard omega-square model, updating the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
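The omega-square model, and its generalization with a variable high-frequency falloff rate n, can be written compactly. A single-corner Brune-type form is assumed here purely for illustration; the parameter values are arbitrary.

```python
def source_spectrum(f, omega0=1.0, fc=1.0, n=2.0):
    """Generalized single-corner source spectrum: flat (~omega0) for
    f << fc, falling off as f**-n above the corner frequency fc.
    n = 2 recovers the classical omega-square model."""
    return omega0 / (1.0 + (f / fc) ** n)

# Omega-square model: one decade above fc the amplitude drops ~100x
low = source_spectrum(0.01)   # flat low-frequency plateau, ~omega0
high = source_spectrum(10.0)  # ~omega0 / 101
print(low, high)
```

A steeper falloff (n = 3) at the same frequency gives a smaller amplitude, which is why the high-frequency falloff rate matters for seismic energy estimates.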
Earthquake cycles and physical modeling of the process leading up to a large earthquake
Ohnaka, Mitiyasu
2004-08-01
A thorough discussion is made of what the rational constitutive law for earthquake ruptures ought to be, from the standpoint of the physics of rock friction and fracture and on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle for a typical large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in the physical modeling of intermediate-term forecasting, and phase III in the physical modeling of short-term (immediate) forecasting. The seismogenic layer and the individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step in establishing the methodology for forecasting large earthquakes.
Butterfly, Recurrence, and Predictability in Lorenz Models
Shen, B. W.
2017-12-01
Over the span of 50 years, the original three-dimensional Lorenz model (3DLM; Lorenz, 1963) and its high-dimensional versions (e.g., Shen 2014a and references therein) have been used to improve our understanding of the predictability of weather and climate, with a focus on chaotic responses. Although the Lorenz studies focus on nonlinear processes and chaotic dynamics, people often apply a "linear" conceptual model to understand the nonlinear processes in the 3DLM. In this talk, we present examples to illustrate common misunderstandings regarding the butterfly effect and discuss the importance of the recurrence and boundedness of solutions in the 3DLM and high-dimensional LMs. The first example concerns the following folklore, widely used as an analogy of the butterfly effect: "For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail." However, in 2008, Prof. Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability, and that the verse implicitly suggests that subsequent small events will not reverse the outcome (Lorenz, 2008). Lorenz's comments suggest that the verse neither describes negative (nonlinear) feedback nor indicates recurrence, the latter of which is required for the appearance of a butterfly pattern. The second example illustrates that the divergence of two nearby trajectories should be bounded and recurrent, as shown in Figure 1. Furthermore, we will discuss how high-dimensional LMs were derived to illustrate (1) negative nonlinear feedback that stabilizes the system within the five- and seven-dimensional LMs (5D and 7D LMs; Shen 2014a; 2015a; 2016); (2) positive nonlinear feedback that destabilizes the system within the 6D and 8D LMs (Shen 2015b; 2017); and (3
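The bounded divergence of nearby trajectories in the 3DLM can be seen with a few lines of forward-Euler integration using the classical 1963 parameters. The step size and initial perturbation below are arbitrary choices, and a production integrator would use a higher-order scheme such as RK4; this is only a sketch.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the 1963 Lorenz system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

# Two nearby initial states: the trajectories diverge (sensitive
# dependence) yet both remain bounded on the attractor, the point
# emphasized in the abstract.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)
for _ in range(2000):                      # integrate to t = 20
    a, b = lorenz_step(a), lorenz_step(b)
separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
print(separation)   # grown by many orders of magnitude, yet bounded
```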
Irregular recurrence of large earthquakes along the san andreas fault: evidence from trees.
Jacoby, G C; Sheppard, P R; Sieh, K E
1988-07-08
Old trees growing along the San Andreas fault near Wrightwood, California, record in their annual ring-width patterns the effects of a major earthquake in the fall or winter of 1812 to 1813. Paleoseismic data and historical information indicate that this event was the "San Juan Capistrano" earthquake of 8 December 1812, with a magnitude of 7.5. The discovery that at least 12 kilometers of the Mojave segment of the San Andreas fault ruptured in 1812, only 44 years before the great January 1857 rupture, demonstrates that intervals between large earthquakes on this part of the fault are highly variable. This variability increases the uncertainty of forecasting destructive earthquakes on the basis of past behavior and accentuates the need for a more fundamental knowledge of San Andreas fault dynamics.
Time series modelling of the Kobe-Osaka earthquake recordings
Directory of Open Access Journals (Sweden)
N. Singh
2002-01-01
generated by an earthquake. With a view to comparing these two types of waveforms, Singh (1992) developed a technique for identifying a model in the time domain. Fortunately, this technique has been found useful in modelling the recordings of the killer earthquake that occurred in the Kobe-Osaka region of Japan at 5.46 am on 17 January 1995. The aim of the present study is to show how well the model-identification method developed by Singh (1992) can describe the vibrations of the above-mentioned earthquake recorded at Charters Towers in Queensland, Australia.
Zielke, Olaf; Klinger, Yann; Arrowsmith, J. Ramon
2015-01-01
to contribute to better-informed models of EQ recurrence and slip-accumulation patterns. After reviewing motivation and background, we outline requirements to successfully reconstruct a fault's offset accumulation pattern from geomorphic evidence. We address
An interdisciplinary approach for earthquake modelling and forecasting
Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.
2016-12-01
Earthquakes are among the most serious natural disasters and may cause heavy casualties and economic losses. In the past two decades especially, huge/mega earthquakes have hit many countries. Effective earthquake forecasting (including time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology in the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the present state of the process is controlled by the events themselves (self-exciting) and by external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson rate λ(t), composed of the background rate, the self-exciting term (the information from past seismic events), and the external excitation term (the information from past non-seismic observations). This model shows us a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model that combines catalog-based and non-catalog-based approaches.
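The combined self-exciting/mutually exciting idea can be sketched as a conditional intensity of Hawkes type. This is a minimal illustration, not the Ogata-Utsu formulation itself; the exponential kernels and all parameter values below are assumptions for demonstration only.

```python
import math

def conditional_intensity(t, quakes, externals, mu=0.1,
                          a_self=0.5, c_self=1.0,
                          a_ext=0.3, c_ext=2.0):
    """Toy Hawkes-type rate lambda(t): background rate plus a
    self-exciting term over past earthquakes and a mutually exciting
    term over past non-seismic observations. All parameter values
    are illustrative, not fitted."""
    rate = mu
    for ti in quakes:
        if ti < t:
            rate += a_self * math.exp(-(t - ti) / c_self)
    for tj in externals:
        if tj < t:
            rate += a_ext * math.exp(-(t - tj) / c_ext)
    return rate

# with no past events the rate is just the background rate;
# just after an earthquake it is elevated
base = conditional_intensity(1.0, quakes=[], externals=[])
after = conditional_intensity(5.01, quakes=[5.0], externals=[])
```

The decaying kernels encode the idea that both past seismicity and past external (non-catalog) signals raise the current event rate, and that their influence fades with elapsed time.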
Biopsychosocial model of chronic recurrent pain
Directory of Open Access Journals (Sweden)
Zlatka Rakovec-Felser
2009-07-01
Full Text Available Pain is not merely a symptom of disease but a complex independent phenomenon in which psychological factors are always present (Sternberg, 1973). For chronic, recurrent pain in particular, it is more constructive to think of chronic pain as a syndrome that evolves over time, involving a complex interaction of physiological/organic, psychological, and behavioural processes. This study of chronic recurrent functional pain covers the tension form of headache. Fifty sufferers were randomly chosen from among those who had been seeking medical help for more than a year. We assessed their pain intensity and duration, the extent of their subjective experience of accommodation efforts, temperament characteristics, coping strategies, personality traits, and the role of pain in intra- and interpersonal communication. Finally, we compared this group with a control group (without any manifest physical disorders) using multivariate analysis of variance (MANOVA). The typical person who suffers and seeks medical help is mostly a woman, married, with elementary or secondary education, about 40 years old. Pain seems to appear in a phase of stress-induced psychophysical fatigue, in persons with lower constitutional resistance to various influences, greater irritability, and more physiological correlates of emotional tension. Because of their ineffective coping style, they also seem to exhaust their adaptation potential quickly. Given their higher level of social-field dependence, the reactions of other persons (doctor, spouse) could be important factors in reinforcement and social learning processes. In managing chronic pain, especially tension headache, it is very important to apply the biopsychosocial model of pain and an integrative model of treatment. The intra- and inter-subjective psychological functions of pain must be recognised as early as possible.
Toward a comprehensive areal model of earthquake-induced landslides
Miles, S.B.; Keefer, D.K.
2009-01-01
This paper provides a review of regional-scale modeling of earthquake-induced landslide hazard with respect to the needs for disaster risk reduction and sustainable development. Based on this review, it sets out important research themes and suggests computing with words (CW), a methodology that includes fuzzy logic systems, as a fruitful modeling methodology for addressing many of these research themes. A range of research, reviewed here, has been conducted applying CW to various aspects of earthquake-induced landslide hazard zonation, but none facilitate comprehensive modeling of all types of earthquake-induced landslides. A new comprehensive areal model of earthquake-induced landslides (CAMEL) is introduced here that was developed using fuzzy logic systems. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL is highly modifiable and adaptable; new knowledge can be easily added, while existing knowledge can be changed to better match local knowledge and conditions. As such, CAMEL should not be viewed as a complete alternative to other earthquake-induced landslide models. CAMEL provides an open framework for incorporating other models, such as Newmark's displacement method, together with previously incompatible empirical and local knowledge. © 2009 ASCE.
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) is calculated as the probability of a word sequence and provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it to continuous speech recognition. We aim to penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance when applying the rapid BRNN-LM under different conditions.
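The MAP objective described above, a cross-entropy error plus a Gaussian-prior penalty, can be sketched in a few lines. This is a generic illustration of the regularized objective, not the paper's BRNN-LM implementation; the fixed `alpha` stands in for the prior precision that the paper instead estimates by marginal-likelihood maximization.

```python
import numpy as np

def regularized_cross_entropy(logits, target, weights, alpha=0.01):
    """MAP-style objective: negative log-probability of the target word
    (cross-entropy term) plus a Gaussian-prior penalty
    (alpha/2) * ||w||^2 over all weight matrices. alpha is an assumed,
    fixed hyperparameter here."""
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    nll = -np.log(p[target])
    penalty = 0.5 * alpha * sum(np.sum(w**2) for w in weights)
    return nll + penalty

logits = np.array([2.0, 0.5, -1.0])     # toy output-layer scores
weights = [np.ones((2, 2))]             # toy weight matrix
obj = regularized_cross_entropy(logits, 0, weights)
```

With `alpha=0` the objective reduces to the plain cross-entropy, which is the maximum-likelihood limit of the MAP criterion.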
Rizza, M.; Dubois, C.; Fleury, J.; Abdrakhmatov, K.; Pousse, L.; Baikulov, S.; Vezinet, A.
2017-12-01
In the western Tien-Shan Range, the largest intracontinental strike-slip fault is the Karatau-Talas Fergana Fault system. This dextral fault system is subdivided into two main segments: the Karatau fault to the north and the Talas-Fergana fault (TFF) to the south. Kinematics and rates of deformation for the TFF during the Quaternary period are still debated and poorly constrained. Only a few paleoseismological investigations are available along the TFF (Burtman et al., 1996; Korjenkov et al., 2010), and no systematic quantification of the dextral displacements along the TFF has been undertaken. As such, the appraisal of the TFF behavior demands new tectonic information. In this study, we present the first detailed analysis of the morphology and segmentation of the TFF and an offset inventory of morphological markers along it. To discuss temporal and spatial recurrence patterns of slip accumulated over multiple seismic events, our study focused on a 60 km-long section of the TFF (Chatkal segment). Using tri-stereo Pleiades satellite images, high-resolution DEMs (1 x 1 m pixel size) have been generated in order to (i) analyze the fine-scale fault geometry and (ii) thoroughly measure geomorphic offsets. Photogrammetry data obtained from our drone survey of high-interest sites provide higher-resolution DEMs of 0.5 x 0.5 m pixel size. Our remote sensing mapping allows an unprecedented subdivision - into five distinct segments - of the study area. About 215 geomorphic markers have been measured, with offsets ranging from 4.5 m to 180 m. More than 80% of these offsets are smaller than 60 m, suggesting landscape reset during the glacial maximum. Calculations of Cumulative Offset Probability Density (COPD) for the whole 60 km-long section as well as for each segment indicate distinct behavior from one segment to another and thus variability in slip-accumulation patterns. Our data argue for uniform-slip model behavior along this section of the TFF. Moreover, we excavated a
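A Cumulative Offset Probability Density of the kind computed above is, in its simplest form, a sum of Gaussian kernels, one per offset measurement, whose peaks mark preferred offset values. A minimal sketch with made-up measurements (the real study uses 215 markers):

```python
import numpy as np

def copd(offsets, sigmas, x):
    """Cumulative Offset Probability Density: each geomorphic offset
    measurement contributes a Gaussian of width sigma; summing the
    Gaussians highlights offset values recorded by many markers."""
    x = np.asarray(x, dtype=float)
    density = np.zeros_like(x)
    for mu, s in zip(offsets, sigmas):
        density += np.exp(-0.5 * ((x - mu) / s)**2) / (s * np.sqrt(2 * np.pi))
    return density

# three hypothetical offsets (metres) with measurement uncertainties
x = np.linspace(0.0, 20.0, 401)
d = copd([4.5, 5.0, 9.5], [0.5, 0.5, 1.0], x)
peak = x[np.argmax(d)]   # falls where the two ~5 m measurements overlap
```

Comparing COPD curves computed per segment, as the abstract describes, is then a matter of evaluating this density over each segment's subset of markers.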
Modeling landslide recurrence in Seattle, Washington, USA
Salciarini, Diana; Godt, Jonathan W.; Savage, William Z.; Baum, Rex L.; Conversini, Pietro
2008-01-01
To manage the hazard associated with shallow landslides, decision makers need an understanding of where and when landslides may occur. A variety of approaches have been used to estimate the hazard from shallow, rainfall-triggered landslides, such as empirical rainfall threshold methods or probabilistic methods based on historical records. The wide availability of Geographic Information Systems (GIS) and digital topographic data has led to the development of analytic methods for landslide hazard estimation that couple steady-state hydrological models with slope stability calculations. Because these methods typically neglect the transient effects of infiltration on slope stability, results cannot be linked with historical or forecasted rainfall sequences. Estimates of the frequency of conditions likely to cause landslides are critical for quantitative risk and hazard assessments. We present results to demonstrate how a transient infiltration model coupled with an infinite slope stability calculation may be used to assess shallow landslide frequency in the City of Seattle, Washington, USA. A module called CRF (Critical RainFall) for estimating deterministic rainfall thresholds has been integrated in the TRIGRS (Transient Rainfall Infiltration and Grid-based Slope-Stability) model that combines a transient, one-dimensional analytic solution for pore-pressure response to rainfall infiltration with an infinite slope stability calculation. Input data for the extended model include topographic slope, colluvial thickness, initial water-table depth, material properties, and rainfall durations. This approach is combined with a statistical treatment of rainfall using a GEV (Generalized Extreme Value) probability distribution to produce maps showing the shallow landslide recurrence induced, on a spatially distributed basis, as a function of rainfall duration and hillslope characteristics.
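The GEV step above turns a deterministic rainfall threshold into a recurrence estimate: the return period of rainfall exceeding the critical value is the reciprocal of its annual exceedance probability. A minimal sketch with illustrative (not Seattle-calibrated) GEV parameters:

```python
import math

def gev_cdf(x, mu, sigma, xi):
    """GEV cumulative distribution function (xi != 0 branch)."""
    t = 1.0 + xi * (x - mu) / sigma
    if t <= 0:
        return 0.0 if xi > 0 else 1.0   # outside the distribution support
    return math.exp(-t ** (-1.0 / xi))

def return_period(threshold, mu, sigma, xi):
    """Mean recurrence interval (in years, for annual rainfall maxima)
    of rainfall exceeding a critical threshold, e.g. one produced by
    the CRF module. Parameter values below are illustrative."""
    p_exceed = 1.0 - gev_cdf(threshold, mu, sigma, xi)
    return 1.0 / p_exceed

# hypothetical annual-maximum rainfall (mm) distribution
rp = return_period(120.0, mu=60.0, sigma=20.0, xi=0.1)
```

Mapping this return period cell by cell over the grid, with thresholds that vary with slope and soil properties, yields the spatially distributed recurrence maps the abstract describes.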
A mathematical model for predicting earthquake occurrence ...
African Journals Online (AJOL)
We consider the continental crust under damage. We use observed microseism results from many seismic stations around the world, established to study the time series of the activities of the continental crust with a view to predicting possible times of earthquake occurrence. We consider microseism time series ...
Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data
Funning, G. J.; Cockett, R.
2012-12-01
InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median
Razafindrakoto, Hoby
2015-04-22
Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
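The MDS step above embeds each rupture model as a point so that inter-point distances approximate the pairwise metric values. Classical MDS can be sketched with a double-centred Gram matrix and an eigendecomposition; the distance matrix below is a toy stand-in for the normalized-squared or grey-scale metrics, not data from the study.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical multidimensional scaling: embed n objects in k
    dimensions so Euclidean distances approximate the pairwise
    dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J                 # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# three mutually similar rupture models and one outlier
D = np.array([[0, 1, 1, 5],
              [1, 0, 1, 5],
              [1, 1, 0, 5],
              [5, 5, 5, 0]], dtype=float)
X = classical_mds(D)   # the outlier plots far from the tight cluster
```

The spread of the resulting point cloud is what the authors read as model variability; clusters (such as the tsunami-derived Tohoku models) appear as groups of nearby points.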
Yu, Hongyu; Liu, Yajing; Yang, Hongfeng; Ning, Jieyuan
2018-05-01
To assess the potential for catastrophic megathrust earthquakes (MW > 8) along the Manila Trench, the eastern boundary of the South China Sea, we incorporate a 3D non-planar fault geometry in the framework of rate-state friction to simulate earthquake rupture sequences along the fault segment between 15°N and 19°N of northern Luzon. Our simulation results demonstrate that the first-order fault geometry heterogeneity, the transitional segment (possibly related to the subducting Scarborough seamount chain) connecting the steeper south segment and the flatter north segment, controls earthquake rupture behavior. The strong along-strike curvature at the transitional segment typically leads to partial ruptures of MW 8.3 and MW 7.8 along the southern and northern segments, respectively. The entire fault occasionally ruptures in MW 8.8 events when the cumulative stress in the transitional segment is sufficiently high to overcome the geometrical inhibition. Fault shear stress evolution, represented by the S-ratio, is clearly modulated by the width of the seismogenic zone (W). At a constant plate convergence rate, a larger W indicates, on average, a lower interseismic stress loading rate and a longer rupture recurrence period, and could slow down or sometimes stop ruptures that initiated in a narrower portion. Moreover, the modeled interseismic slip rate before whole-fault rupture events is comparable with the coupling state inferred from the interplate seismicity distribution, suggesting the Manila Trench could potentially rupture in an M8+ earthquake.
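The rate-state friction framework used in these simulations reduces, at steady state, to a logarithmic dependence of friction on slip velocity; a velocity-weakening fault (a - b < 0) is the seismogenic case assumed on the locked segments. A minimal sketch with illustrative laboratory-scale parameters, not the values used in the study:

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction coefficient,
    mu_ss = mu0 + (a - b) * ln(V / V0). With a - b < 0 the fault is
    velocity-weakening: friction drops as slip accelerates, allowing
    ruptures to nucleate. All parameter values are illustrative."""
    return mu0 + (a - b) * math.log(v / v0)

slow = steady_state_friction(1e-9)   # interseismic creep rate (m/s)
fast = steady_state_friction(1e-2)   # near-coseismic slip rate (m/s)
# fast < slow: strength loss at high slip speed drives instability
```

Full simulations evolve a state variable alongside this velocity dependence over the 3D fault mesh; the sketch only captures the steady-state limit that sets weakening versus strengthening behavior.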
Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region
Chaudhary, Chhavi; Sharma, Mukat Lal
2017-12-01
Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue, these larger events follow a different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of extreme values, i.e., the tail of the distribution, in addition to the main distribution. The generalised Pareto distribution family is widely used for modelling events that cross a specified threshold value. The Pareto, Truncated Pareto, and Tapered Pareto are special cases of the generalised Pareto family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study we consider the Himalaya, one of the most seismically active zones of the world, whose orogeny generates large earthquakes. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. Estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution describes seismicity for the seismic source zones better than the other distributions considered in the present study.
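The tapering idea can be made concrete with the Tapered Pareto survival function for seismic moment, in which an exponential corner softens the pure power-law tail. The parameter values below are illustrative assumptions, not the fitted Himalayan values.

```python
import math

def tapered_pareto_survival(m, m_t, beta, m_c):
    """Tapered Pareto survival function for seismic moment:
    S(m) = (m_t / m)**beta * exp((m_t - m) / m_c), where m_t is the
    observational threshold, beta the power-law index, and m_c the
    corner moment that tapers the tail at large sizes."""
    return (m_t / m)**beta * math.exp((m_t - m) / m_c)

m_t, beta, m_c = 1e17, 0.65, 1e21     # threshold, index, corner (N*m)
s_pareto = (m_t / 1e21)**beta                          # untapered tail
s_tapered = tapered_pareto_survival(1e21, m_t, beta, m_c)
# s_tapered < s_pareto: the taper lowers the probability of huge events
```

This is precisely why the tapered form tends to describe the largest events better than a pure Pareto: it keeps the Gutenberg-Richter-like behavior at moderate sizes while preventing unphysically heavy tails.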
Analysing earthquake slip models with the spatial prediction comparison test
Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.
2014-01-01
Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking slip models with respect to a reference model.
Characteristics of broadband slow earthquakes explained by a Brownian model
Ide, S.; Takeo, A.
2017-12-01
The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered a broadband phenomenon encompassing tectonic tremors, low-frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of broadband slow earthquakes may not have been widely accepted, most recent observations are consistent with it. Here, we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For shorter durations, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic
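The core of the BSE model, a source-size variable driven by Gaussian increments but confined by the finite extent of the tremor/SSE zone, can be sketched as a bounded random walk. This toy version uses clipping at the boundaries as a crude stand-in for the model's boundary behavior, and all parameter values are illustrative, not the published ones.

```python
import random

def brownian_source_size(steps, dt=0.01, sigma=1.0, upper=1.0, seed=1):
    """Toy BSE-style trajectory: the source-size variable takes a
    Gaussian increment sigma*sqrt(dt)*N(0,1) each step and is kept
    within [0, upper], the bound standing in for the finite spatial
    limit of the tremor/SSE zone."""
    rng = random.Random(seed)
    x, path = 0.5, []
    for _ in range(steps):
        x += sigma * dt**0.5 * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), upper)   # crude boundary handling (clipping)
        path.append(x)
    return path

path = brownian_source_size(10_000)
# the trajectory wanders but never escapes [0, 1]: bounded and recurrent
```

Boundedness is what produces the long-time regime of constant moment rate; without the spatial limit, the walk would diffuse without the observed scaling behavior.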
Self-exciting point process in modeling earthquake occurrences
International Nuclear Information System (INIS)
Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.
2017-01-01
In this paper, we present a procedure for modeling earthquakes based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity. The earthquakes can be regarded as point patterns with a temporal clustering feature, so we use a self-exciting point process to model the conditional intensity function. The choice of main shocks is conducted via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)
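The truncated exponential magnitude law named above can be sampled directly by inverting its CDF; this is a generic sketch of that distribution (with beta = b ln 10 for an assumed b-value of 1), not the paper's fitted parameters.

```python
import math
import random

def sample_truncated_exponential(beta, m_min, m_max, rng):
    """Inverse-CDF draw from the truncated exponential magnitude law
    F(m) = (1 - exp(-beta*(m - m_min))) / (1 - exp(-beta*(m_max - m_min))),
    which confines magnitudes to [m_min, m_max]."""
    u = rng.random()
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - u * c) / beta

rng = random.Random(42)
beta = 1.0 * math.log(10)   # beta = b * ln(10), assuming b = 1
mags = [sample_truncated_exponential(beta, 4.0, 8.0, rng)
        for _ in range(1000)]
# all samples lie in [4, 8]; small magnitudes dominate, as in
# Gutenberg-Richter statistics
```

In a full simulation these magnitudes would be paired with event times drawn from the self-exciting conditional intensity.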
An Overview of Soil Models for Earthquake Response Analysis
Directory of Open Access Journals (Sweden)
Halida Yunita
2015-01-01
Full Text Available Earthquakes can damage thousands of buildings and infrastructure as well as cause the loss of thousands of lives. During an earthquake, the damage to buildings is mostly caused by the effect of local soil conditions. Depending on the soil type, the earthquake waves propagating from the epicenter to the ground surface will produce various soil behaviors. Several studies have been conducted to obtain the soil response during an earthquake accurately. The soil model used must be able to characterize the stress-strain behavior of the soil during the earthquake. This paper compares equivalent-linear and nonlinear soil model responses. Analysis was performed on two soil types, Site Class D and Site Class E. An equivalent-linear soil model leads to a constant value of shear modulus, while in a nonlinear soil model the shear modulus changes continuously, depending on the stress level, and shows inelastic behavior. The results from a comparison of both soil models are displayed in the form of maximum acceleration profiles and stress-strain curves.
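The strain dependence that separates the two approaches is often expressed through a modulus-reduction curve; a common hyperbolic form is sketched below. The parameter values are illustrative assumptions, not the Site Class D/E properties from the paper.

```python
def secant_shear_modulus(gamma, g_max=60e6, gamma_ref=1e-3):
    """Hyperbolic modulus-reduction curve,
    G(gamma) = G_max / (1 + gamma / gamma_ref): the secant shear
    modulus of a nonlinear soil model degrades with shear strain.
    An equivalent-linear analysis instead fixes G at a single
    strain-compatible value. Parameter values are illustrative."""
    return g_max / (1.0 + gamma / gamma_ref)

g_small = secant_shear_modulus(1e-5)   # nearly G_max at small strain
g_large = secant_shear_modulus(1e-2)   # strongly degraded at large strain
# stiffness loss of this kind is what the equivalent-linear model,
# with its single constant G, cannot track within one record
```

Equivalent-linear codes iterate on curves like this to pick one strain-compatible modulus per layer, whereas nonlinear codes evaluate it (plus hysteresis) at every time step.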
Wedmore, L. N. J.; Faure Walker, J. P.; Roberts, G. P.; Sammonds, P. R.; McCaffrey, K. J. W.; Cowie, P. A.
2017-07-01
Current studies of fault interaction lack sufficiently long earthquake records and measurements of fault slip rates over multiple seismic cycles to fully investigate the effects of interseismic loading and coseismic stress changes on the surrounding fault network. We model elastic interactions between 97 faults from 30 earthquakes since 1349 A.D. in central Italy to investigate the relative importance of co-seismic stress changes versus interseismic stress accumulation for earthquake occurrence and fault interaction. This region has an exceptionally long, 667 year record of historical earthquakes and detailed constraints on the locations and slip rates of its active normal faults. Of 21 earthquakes since 1654, 20 events occurred on faults where combined coseismic and interseismic loading stresses were positive even though 20% of all faults are in "stress shadows" at any one time. Furthermore, the Coulomb stress on the faults that experience earthquakes is statistically different from a random sequence of earthquakes in the region. We show how coseismic Coulomb stress changes can alter earthquake interevent times by 10³ years, and fault length controls the intensity of this effect. Static Coulomb stress changes cause greater interevent perturbations on shorter faults in areas characterized by lower strain (or slip) rates. The exceptional duration and number of earthquakes we model enable us to demonstrate the importance of combining long earthquake records with detailed knowledge of fault geometries, slip rates, and kinematics to understand the impact of stress changes in complex networks of active faults.
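The Coulomb stress metric used throughout this analysis combines the shear and normal stress changes resolved on a receiver fault. A minimal sketch; the effective friction value and the example stress changes are illustrative assumptions, not the study's values.

```python
def coulomb_stress_change(delta_tau, delta_sigma_n, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault:
    dCFS = d(shear stress) + mu' * d(normal stress), with the sign
    convention that positive normal-stress change (unclamping)
    promotes failure. mu' = 0.4 is a commonly assumed effective
    friction coefficient, used here for illustration."""
    return delta_tau + mu_eff * delta_sigma_n

# a receiver fault loaded in shear (0.5 MPa) but slightly clamped
# (-0.2 MPa normal-stress change): the net effect is still positive
dcfs = coulomb_stress_change(delta_tau=0.5e6, delta_sigma_n=-0.2e6)
# dcfs = 0.42 MPa > 0: earthquake occurrence on this fault is promoted
```

Faults with dCFS < 0 after nearby ruptures sit in the "stress shadows" mentioned above; in this catalogue nearly all events nonetheless occurred where combined coseismic plus interseismic stress was positive.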
Tensile earthquakes: Theory, modeling and inversion
Czech Academy of Sciences Publication Activity Database
Vavryčuk, Václav
2011-01-01
Roč. 116, B12 (2011), B12320/1-B12320/14 ISSN 0148-0227 R&D Projects: GA AV ČR IAA300120801; GA ČR GAP210/10/2063; GA MŠk LM2010008 Institutional research plan: CEZ:AV0Z30120515 Keywords: earthquake * focal mechanism * moment tensor * non-double-couple component Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 3.021, year: 2011
Modeling of earthquake ground motion in the frequency domain
Thrainsson, Hjortur
In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation
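The simulation idea in topic (i) can be sketched in a few lines: choose a Fourier amplitude spectrum, simulate phase differences between adjacent frequency components, cumulatively sum them into phase angles, and apply the inverse FFT. The spectral shape and the Gaussian phase-difference distribution below are illustrative stand-ins, not the calibrated, amplitude-conditional models developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1024                       # number of time samples
dt = 0.02                      # sample interval (s)
freqs = np.fft.rfftfreq(n, dt)

# Hypothetical smooth Fourier amplitude spectrum (not a fitted model).
amp = freqs * np.exp(-freqs / 5.0)

# Simulate phase DIFFERENCES between adjacent frequency components and
# cumulatively sum them into phase angles. The thesis conditions the
# phase-difference distribution on the Fourier amplitude; plain Gaussian
# differences are used here purely for illustration.
dphase = rng.normal(loc=-0.5, scale=0.8, size=len(freqs))
phase = np.cumsum(dphase)

# Assemble the complex spectrum and invert to an acceleration history.
spectrum = amp * np.exp(1j * phase)
accel = np.fft.irfft(spectrum, n=n)

print(accel.shape)
```

The mean of the phase-difference distribution controls where the strong-motion energy arrives in the time window, which is why simulating phase differences (rather than independent random phases) produces realistic nonstationary envelopes.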
Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro
2017-08-01
Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, first, fault parameters were estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Our method should therefore be suitable for tsunami early warning, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua for large earthquakes in the subduction zone.
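The role of depth-dependent rigidity can be made concrete with the standard moment relation D = M0 / (μA): for a fixed Mw and fault area, a lower shallow rigidity implies larger slip and hence a larger tsunami. The rigidity profile and fault dimensions below are illustrative assumptions, not the study's calibrated values:

```python
def moment_from_mw(mw):
    """Seismic moment (N·m) from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def rigidity(depth_km):
    """Hypothetical depth-dependent rigidity: low in the shallow
    subduction zone, increasing toward ~40 GPa at depth (illustrative
    numbers, not the profile used in the study)."""
    return min(40e9, 10e9 + depth_km * 1.5e9)

def average_slip(mw, length_km, width_km, depth_km):
    """Average slip (m) from D = M0 / (mu * A)."""
    area = length_km * 1e3 * width_km * 1e3
    return moment_from_mw(mw) / (rigidity(depth_km) * area)

# A Nicaragua-style shallow "tsunami earthquake" versus the same Mw at
# depth: the low shallow rigidity more than doubles the average slip.
shallow = average_slip(7.7, 100, 40, 5)
deep = average_slip(7.7, 100, 40, 30)
print(shallow > deep)
```

This is why tsunami earthquakes like the 1992 Nicaragua event produce anomalously large tsunamis for their magnitude, and why a depth-dependent rigidity is needed when converting the W-phase solution into a fault model for tsunami simulation.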
Standards for Documenting Finite‐Fault Earthquake Rupture Models
Mai, Paul Martin
2016-04-06
In this article, we propose standards for documenting and disseminating finite‐fault earthquake rupture models, and related data and metadata. A comprehensive documentation of the rupture models, a detailed description of the data processing steps, and facilitating the access to the actual data that went into the earthquake source inversion are required to promote follow‐up research and to ensure interoperability, transparency, and reproducibility of the published slip‐inversion solutions. We suggest a formatting scheme that describes the kinematic rupture process in an unambiguous way to support subsequent research. We also provide guidelines on how to document the data, metadata, and data processing. The proposed standards and formats represent a first step to establishing best practices for comprehensively documenting input and output of finite‐fault earthquake source studies.
Statistical modelling for recurrent events: an application to sports injuries.
Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F
2014-09-01
Injuries are often recurrent, with subsequent injuries influenced by previous occurrences, and hence correlation between events needs to be taken into account when analysing such data. This paper compares five different survival models (the Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, and Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Empirical evaluation and comparison of the different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) had more than one injury, and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to the WLW-TT and PWP-GT models. Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury.
Combining multiple earthquake models in real time for earthquake early warning
Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.
2017-01-01
The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
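The simplest Bayesian fusion of this kind treats each algorithm's estimate of (log) shaking at a site as an independent Gaussian and combines them by inverse-variance weighting. The algorithm names and numbers below are hypothetical; the published method is more general, but this captures the core idea of combining diverse predictions into one:

```python
def combine_gaussian_predictions(means, sigmas):
    """Fuse independent Gaussian predictions of (log) shaking intensity
    at one site by inverse-variance weighting -- the simplest version of
    treating forecast shaking as the common quantity that every EEW
    algorithm predicts. Input values are illustrative only."""
    weights = [1.0 / s**2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / total
    sigma = (1.0 / total) ** 0.5   # fused uncertainty
    return mean, sigma

# Hypothetical point-source, finite-fault, and direct ground-motion
# estimates of log10(PGA) at one site, each with its own uncertainty:
mean, sigma = combine_gaussian_predictions(
    means=[-1.2, -1.0, -1.4], sigmas=[0.3, 0.2, 0.4])
print(round(mean, 3), round(sigma, 3))
```

Note that the fused uncertainty is smaller than any individual algorithm's, which is exactly what lets a downstream user trade off false-alarm tolerance against warning time.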
Optimization of recurrent neural networks for time series modeling
DEFF Research Database (Denmark)
Pedersen, Morten With
1997-01-01
The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, it considers fully recurrent networks working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time... series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical... of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal...
Attention-based Memory Selection Recurrent Network for Language Modeling
Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi
2016-01-01
Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since RNNs have a fixed memory size, they cannot store all the information about the words they have seen before in the sentence, and thus useful long-term information may be ignored when predicting the next words. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...
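The memory-selection mechanism the AMSRN builds on is ordinary attention over stored hidden states: score each past state against the current query, softmax the scores, and return the weighted sum. The shapes and values below are illustrative, not the paper's model:

```python
import numpy as np

def attention_select(memory, query):
    """Attention-based memory read: score every stored hidden state
    against the current query, softmax the scores, and return the
    weighted combination. Generic mechanism only; the AMSRN itself
    adds trained projections around this."""
    scores = memory @ query                    # one score per past state
    scores = scores - scores.max()             # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ memory, weights           # context vector, weights

rng = np.random.default_rng(1)
memory = rng.normal(size=(5, 8))   # hidden states for 5 previous words
query = rng.normal(size=8)         # current hidden state

context, weights = attention_select(memory, query)
print(context.shape, round(float(weights.sum()), 6))
```

Because the context vector is recomputed at every step, long-term information survives regardless of how far back it was stored, which is the limitation of a fixed-size recurrent memory that attention addresses.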
Recurrence relations in the three-dimensional Ising model
International Nuclear Information System (INIS)
Yukhnovskij, I.R.; Kozlovskij, M.P.
1977-01-01
Recurrence relations between the coefficients a_2^(i), a_4^(i) and P_2^(i), P_4^(i), which characterize the probability distributions for the three-dimensional Ising model, are studied. It is shown that for large arguments z of the Macdonald functions K_ν(z) the recurrence relations correspond to the known Wilson relations. But near the critical point, for small values of the transfer momentum k, this limit case does not hold. In this region the argument z tends to zero, and new recurrence relations apply.
Cascadia Onshore-Offshore Site Response, Submarine Sediment Mobilization, and Earthquake Recurrence
Gomberg, J.
2018-02-01
Local geologic structure and topography may modify arriving seismic waves. This inherent variation in shaking, or "site response," may affect the distribution of slope failures and redistribution of submarine sediments. I used seafloor seismic data from the 2011 to 2015 Cascadia Initiative and permanent onshore seismic networks to derive estimates of site response, denoted Sn, in low- and high-frequency (0.02-1 and 1-10 Hz) passbands. For three shaking metrics (peak velocity and acceleration and energy density) Sn varies similarly throughout Cascadia and changes primarily in the direction of convergence, roughly east-west. In the two passbands, Sn patterns offshore are nearly opposite and range over an order of magnitude or more across Cascadia. Sn patterns broadly may be attributed to sediment resonance and attenuation. This and an abrupt step in the east-west trend of Sn suggest that changes in topography and structure at the edge of the continental margin significantly impact shaking. These patterns also correlate with gravity lows diagnostic of marginal basins and methane plumes channeled within shelf-bounding faults. Offshore Sn exceeds that onshore in both passbands, and the steepest slopes and shelf coincide with the relatively greatest and smallest Sn estimates at low and high frequencies, respectively; these results should be considered in submarine shaking-triggered slope stability failure studies. Significant north-south Sn variations are not apparent, but sparse sampling does not permit rejection of the hypothesis that the southerly decrease in intervals between shaking-triggered turbidites and great earthquakes inferred by Goldfinger et al. (2012, 2013, 2016) and Priest et al. (2017) is due to inherently stronger shaking southward.
Directory of Open Access Journals (Sweden)
Francesco Visini
2010-11-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) selected Italy as a testing region for probabilistic earthquake forecast models in October 2008. The model we have submitted for the two medium-term forecast periods of 5 and 10 years (from 2009) is a time-dependent, geologically based earthquake rupture forecast that is defined for central Italy only (11-15˚ E, 41-45˚ N). The model took into account three separate layers of seismogenic sources: background seismicity; seismotectonic provinces; and individual faults that can produce major earthquakes (seismogenic boxes). For CSEP testing purposes, the background seismicity layer covered a range of magnitudes from 5.0 to 5.3 and the seismicity rates were obtained by truncated Gutenberg-Richter relationships for cells centered on the CSEP grid. The seismotectonic provinces layer then returned the expected rates of medium-to-large earthquakes following a traditional Cornell-type approach. Finally, for the seismogenic boxes layer, the rates were based on the geometry and kinematics of the faults, to which different earthquake recurrence models were assigned, ranging from pure Gutenberg-Richter behavior to characteristic events, with the intermediate behavior named the hybrid model. The results for different magnitude ranges highlight the contribution of each of the three layers to the total computation. The expected rates for M >6.0 on April 1, 2009 (thus computed before the 2009 MW 6.3 L'Aquila earthquake) are of particular interest. They showed local maxima in the two seismogenic-box sources of Paganica and Sulmona, one of which was activated by the L'Aquila earthquake of April 6, 2009. Earthquake rates as of August 1, 2009 (now under test) also showed a maximum close to the Sulmona source for MW ~6.5; significant seismicity rates (10^-4 to 10^-3 in 5 years) for destructive events (magnitude up to 7.0) were located in other individual sources identified as being capable of such
García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón
2016-05-01
Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
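A minimal version of the quantities the MRT algorithm tracks can be computed from a swarm catalog: fit the Gutenberg-Richter b-value by maximum likelihood (Aki's estimator) and extrapolate to a mean recurrence time for a target magnitude. The catalog below is synthetic; the published method then monitors how this quantity fluctuates in time to open forecast windows:

```python
import math

def aki_b_value(mags, m_c):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki estimator):
    b = log10(e) / (mean(M) - Mc)."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_c)

def mean_recurrence_time(mags, m_c, window_days, m_target):
    """Mean recurrence time (days) of events >= m_target, extrapolated
    from the G-R fit to a swarm catalog above completeness magnitude
    m_c. Values here are synthetic, not El Hierro data."""
    b = aki_b_value(mags, m_c)
    # Expected count above m_target over the window, via G-R scaling.
    n_target = len(mags) * 10 ** (-b * (m_target - m_c))
    return window_days / n_target

# Synthetic 30-day swarm catalog, completeness magnitude 1.5:
mags = [1.6, 1.8, 1.7, 2.1, 1.9, 2.4, 1.6, 2.0, 1.7, 3.1]
mrt = mean_recurrence_time(mags, 1.5, window_days=30, m_target=4.0)
print(mrt > 30)
```

A drop in the b-value (relatively more large events) shortens the extrapolated recurrence time for major magnitudes, which is the kind of fluctuation the algorithm translates into a 10-day window of increased probability.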
McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.
2017-09-01
The Gorkha earthquake on April 25th, 2015 was a long-anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha, Nepal, earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include: (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest (Mw 7.3) aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near-surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered high hazard for future damaging earthquakes.
Prediction of strong earthquake motions on rock surface using evolutionary process models
International Nuclear Information System (INIS)
Kameda, H.; Sugito, M.
1984-01-01
Stochastic process models are developed for the prediction of strong earthquake motions for engineering design purposes. Earthquake motions with nonstationary frequency content are modeled by using the concept of evolutionary processes. Discussion is focused on the earthquake motions on bedrock, which are important for the construction of nuclear power plants in seismic regions. On this basis, two earthquake motion prediction models are developed: one (EMP-IB Model) for prediction with given magnitude and epicentral distance, and the other (EMP-IIB Model) to account for successive fault ruptures and the site location relative to the fault of great earthquakes. (Author)
Zschau, J.
2009-04-01
Earthquake risk, like natural risks in general, has become a highly dynamic and globally interdependent phenomenon. Due to the "urban explosion" in the Third World, an increasingly complex cross-linking of critical infrastructure and lifelines in the industrial nations, and a growing globalisation of the world's economies, we are presently facing a dramatic increase in our society's vulnerability to earthquakes in practically all seismic regions on our globe. Such fast and global changes cannot be captured with conventional earthquake risk models anymore. The sciences in this field are, therefore, asked to come up with new solutions that are no longer exclusively aiming at the best possible quantification of the present risks, but also keep an eye on their changes with time and allow these to be projected into the future. This applies not only to the vulnerability component of earthquake risk, but also to its hazard component, which has been realized to be time-dependent, too. The challenges of earthquake risk dynamics and globalisation have recently been accepted by the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD-GSF), which initiated the "Global Earthquake Model (GEM)", a public-private partnership for establishing an independent standard to calculate, monitor and communicate earthquake risk globally, raise awareness and promote mitigation.
Recursive Bayesian recurrent neural networks for time-series modeling.
Mirikitani, Derrick T; Nikolaev, Nikolay
2010-02-01
This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
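The sequential update at the heart of this approach is easiest to see in its linear-Gaussian special case, where the recursive Bayesian step reduces to a Kalman-style update of a Gaussian posterior over the weights. The RNN version linearizes the network at each step; the sketch below uses a plain linear model with synthetic data:

```python
import numpy as np

def recursive_bayes_update(mean, cov, x, y, noise_var):
    """One recursive Bayesian (Kalman-style) update of a Gaussian
    posterior over model weights after observing input x and target y.
    Linear-Gaussian special case of sequential second-order training;
    the paper's method linearizes an RNN instead."""
    s = x @ cov @ x + noise_var          # innovation variance
    k = cov @ x / s                      # gain
    mean = mean + k * (y - x @ mean)     # posterior mean update
    cov = cov - np.outer(k, x @ cov)     # posterior covariance update
    return mean, cov

rng = np.random.default_rng(2)
true_w = np.array([0.5, -1.0])           # synthetic "true" weights
mean, cov = np.zeros(2), np.eye(2) * 10.0   # broad Gaussian prior
for _ in range(200):
    x = rng.normal(size=2)
    y = x @ true_w + rng.normal(scale=0.1)
    mean, cov = recursive_bayes_update(mean, cov, x, y, noise_var=0.01)
print(np.round(mean, 1))
```

The covariance matrix plays the role of the (inverse) Hessian in the Levenberg-Marquardt formulation, and the noise variance corresponds to the noise hyperparameter the paper adapts online.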
Wang, Lifeng
2015-11-11
The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter time scales (e.g., between the large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit, and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements, Lin et al., 2013), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment has been almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to the postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes in Parkfield. For the Tohoku-Oki earthquake, we estimate the recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be generally applied to any earthquake for which dense geodetic measurements are available.
Burkhard, L. M.; Smith-Konter, B. R.
2017-12-01
4D simulations of stress evolution provide a rare insight into earthquake cycle crustal stress variations at seismogenic depths where earthquake ruptures nucleate. Paleoseismic estimates of earthquake offset and chronology, spanning multiple earthquakes cycles, are available for many well-studied segments of the San Andreas Fault System (SAFS). Here we construct new 4D earthquake cycle time-series simulations to further study the temporally and spatially varying stress threshold conditions of the SAFS throughout the paleoseismic record. Interseismic strain accumulation, co-seismic stress drop, and postseismic viscoelastic relaxation processes are evaluated as a function of variable slip and locking depths along 42 major fault segments. Paleoseismic earthquake rupture histories provide a slip chronology dating back over 1000 years. Using GAGE Facility GPS and new Sentinel-1A InSAR data, we tune model locking depths and slip rates to compute the 4D stress accumulation within the seismogenic crust. Revised estimates of stress accumulation rate are most significant along the Imperial (2.8 MPa/100yr) and Coachella (1.2 MPa/100yr) faults, with a maximum change in stress rate along some segments of 11-17% in comparison with our previous estimates. Revised estimates of earthquake cycle stress accumulation are most significant along the Imperial (2.25 MPa), Coachella (2.9 MPa), and Carrizo (3.2 MPa) segments, with a 15-29% decrease in stress due to locking depth and slip rate updates, and also postseismic relaxation from the El Mayor-Cucapah earthquake. Because stress drops of major strike-slip earthquakes rarely exceed 10 MPa, these models may provide a lower bound on estimates of stress evolution throughout the historical era, and perhaps an upper bound on the expected recurrence interval of a particular fault segment. Furthermore, time-series stress models reveal temporally varying stress concentrations at 5-10 km depths, due to the interaction of neighboring fault
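Stripped of the viscoelastic and geometric detail, an earthquake-cycle stress time series for one segment is interseismic loading at a constant rate punctuated by coseismic drops at paleoseismic event years. The rate and drop values below are illustrative only (the loading rate is loosely patterned on the Imperial-fault figure quoted above), and the stress is relative to an arbitrary reference level:

```python
def stress_history(rate_mpa_per_100yr, event_years, stress_drop_mpa,
                   start_year, end_year):
    """Crude earthquake-cycle stress time series: stress rises linearly
    at the interseismic accumulation rate and drops by a fixed amount
    at each paleoseismic event year. Viscoelastic relaxation and fault
    interaction, central to the actual 4D models, are ignored here."""
    rate = rate_mpa_per_100yr / 100.0
    history, stress = [], 0.0
    for year in range(start_year, end_year + 1):
        stress += rate
        if year in event_years:
            stress -= stress_drop_mpa
        history.append((year, stress))
    return history

# Imperial-fault-like loading (~2.8 MPa/100 yr) with hypothetical
# event years and ~4 MPa coseismic drops:
hist = stress_history(2.8, {1100, 1250, 1400, 1550, 1700, 1850, 1980},
                      4.0, 1000, 2017)
final = hist[-1][1]
print(round(final, 2))
```

The final value, compared against typical strike-slip stress drops (rarely above 10 MPa), is the kind of bound on accumulated stress, and hence on expected recurrence, that the abstract describes.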
Major earthquakes occur regularly on an isolated plate boundary fault.
Berryman, Kelvin R; Cochran, Ursula A; Clark, Kate J; Biasi, Glenn P; Langridge, Robert M; Villamor, Pilar
2012-06-29
The scarcity of long geological records of major earthquakes, on different types of faults, makes testing hypotheses of regular versus random or clustered earthquake recurrence behavior difficult. We provide a fault-proximal major earthquake record spanning 8000 years on the strike-slip Alpine Fault in New Zealand. Cyclic stratigraphy at Hokuri Creek suggests that the fault ruptured to the surface 24 times, and event ages yield a 0.33 coefficient of variation in recurrence interval. We associate this near-regular earthquake recurrence with a geometrically simple strike-slip fault, with high slip rate, accommodating a high proportion of plate boundary motion that works in isolation from other faults. We propose that it is valid to apply time-dependent earthquake recurrence models for seismic hazard estimation to similar faults worldwide.
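The regularity statistic quoted above, the coefficient of variation (CV) of recurrence intervals, is straightforward to compute from a dated event series: CV near 0 indicates quasi-periodic rupture, while CV = 1 matches a Poisson (memoryless) process. The event ages below are synthetic, not the Hokuri Creek record:

```python
import statistics

def recurrence_cv(event_ages):
    """Coefficient of variation of earthquake recurrence intervals:
    population standard deviation of the inter-event intervals divided
    by their mean."""
    ages = sorted(event_ages)
    intervals = [b - a for a, b in zip(ages, ages[1:])]
    mean = statistics.mean(intervals)
    return statistics.pstdev(intervals) / mean

# Near-regular synthetic record (ages in years before present):
ages = [0, 310, 640, 930, 1270, 1580]
print(round(recurrence_cv(ages), 2))
```

A CV of 0.33, as found for the Alpine Fault, sits well toward the periodic end of this spectrum, which is what justifies applying time-dependent recurrence models to such faults.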
Stochastic dynamic modeling of regular and slow earthquakes
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal
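The effect of adding Gaussian perturbations to an otherwise deterministic governing equation can be illustrated with a zero-dimensional toy: a stick-slip element loaded at a constant rate plus Gaussian noise, with an "event" whenever stress reaches a fixed strength. This is only a stand-in for the stochastic terms in the BIEM formulation, and all parameter values are hypothetical:

```python
import random

def stochastic_slider(n_steps, loading, strength, noise, seed=3):
    """Minimal stochastic stick-slip model: stress accumulates at a
    constant rate plus a Gaussian perturbation (a stand-in for the
    paper's stochastic external force / friction terms); an 'event'
    resets stress to zero when it exceeds the strength. Returns the
    event times. Purely illustrative, not the paper's model."""
    random.seed(seed)
    stress, events = 0.0, []
    for t in range(n_steps):
        stress += loading + random.gauss(0.0, noise)
        if stress >= strength:
            events.append(t)
            stress = 0.0
    return events

events = stochastic_slider(10000, loading=0.01, strength=1.0, noise=0.05)
intervals = [b - a for a, b in zip(events, events[1:])]
print(len(events), len(set(intervals)) > 1)
```

Without noise the model is perfectly periodic (an event every strength/loading steps); the Gaussian term alone is enough to irregularize recurrence, which is the qualitative point of introducing stochastic perturbations.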
Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps
Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.
2014-12-01
Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds of warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, the University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real time and are providing feedback on their experiences of performance and potential uses within their organizations. Beta user interactions allow the ShakeAlert team to discern: which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and most importantly, what actions users plan to take for various scenarios. Actions could include: personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator or fire station doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, and the funding requirements to implement their automated controls. The use of models and mobile apps is beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.
Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan
Nomura, S.; Ogata, Y.
2015-12-01
Renewal processes, point processes in which intervals between consecutive events are independently and identically distributed, are frequently used to describe this repeating earthquake mechanism and to forecast the next earthquakes. However, one of the difficulties in applying recurrent earthquake models is the scarcity of historical data. Most studied fault segments have few, or only one, observed earthquakes, which often have poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, and tends to yield a high probability for the next event in a very short time span when the recurrence intervals have similar lengths. On the other hand, recurrence intervals at a fault depend, on average, on the long-term slip rate caused by tectonic motion. In addition, recurrence times also fluctuate because of nearby earthquakes or fault activities which encourage or discourage surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus, this paper introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it using spatial statistics. Spatial variations of the mean and variance parameters of recurrence times are estimated in a Bayesian framework, and the next earthquakes are forecasted by Bayesian predictive distributions. The proposed model is applied to the recurrent earthquake catalog of Japan, and its results are compared with the current forecast adopted by the Earthquake Research Committee of Japan.
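A minimal sketch of the renewal-process forecast described above: fit a recurrence-interval distribution to past intervals, then compute the conditional probability of the next event given the time already elapsed. The lognormal distribution, the toy paleoseismic intervals, and the plain maximum-likelihood fit are illustrative assumptions; the paper's Bayesian spatial model is far more elaborate.

```python
import math

def lognorm_cdf(x, mu, sigma):
    # CDF of a lognormal distribution evaluated at x > 0
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def conditional_prob(intervals, elapsed, horizon):
    """P(next event within `horizon` years | `elapsed` years since the last
    event), under a lognormal renewal model fit by maximum likelihood."""
    logs = [math.log(t) for t in intervals]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
    f = lognorm_cdf(elapsed, mu, sigma)
    return (lognorm_cdf(elapsed + horizon, mu, sigma) - f) / (1.0 - f)

# Four hypothetical paleoseismic intervals (years); 150 years since the last event:
p30 = conditional_prob([210.0, 180.0, 240.0, 200.0], elapsed=150.0, horizon=30.0)
```

With only four intervals the fitted sigma is tiny and the hazard climbs steeply, which is exactly the small-sample bias the abstract warns about.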
Forecasting characteristic earthquakes in a minimalist model
DEFF Research Database (Denmark)
Vázquez-Prada, M.; Pacheco, A.; González, Á.
2003-01-01
-dimensional numerical exploration of the loss function. This first strategy is then refined by considering a classification of the seismic cycles of the model according to the presence, or not, of some factors related to the seismicity observed in the cycle. These factors, statistically speaking, enlarge or shorten...
A way to synchronize models with seismic faults for earthquake forecasting
DEFF Research Database (Denmark)
González, Á.; Gómez, J.B.; Vázquez-Prada, M.
2006-01-01
Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual....... Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the synchronization of the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault...... models between themselves and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture area of the synthetic earthquakes of this model on other models, the latter become partially synchronized...
Katsube, Aya; Kondo, Hisao; Kurosawa, Hideki
2017-06-01
Surface rupturing earthquakes produced by intraplate active faults generally have long recurrence intervals of a few thousand to tens of thousands of years. We here report the first evidence for an extremely short recurrence interval of 300 years for surface rupturing earthquakes on an intraplate system in Japan. The Kamishiro fault of the Itoigawa-Shizuoka Tectonic Line (ISTL) active fault system generated a Mw 6.2 earthquake in 2014. A paleoseismic trench excavation across the 2014 surface rupture showed evidence for the 2014 event and two prior paleoearthquakes. The slip of the penultimate earthquake was similar to that of the 2014 earthquake, and its timing was constrained to after A.D. 1645. Judging from the timing, the damaged area, and the amount of slip, the penultimate earthquake most probably corresponds to a historical earthquake in A.D. 1714. The recurrence interval of the two most recent earthquakes is thus extremely short compared with intervals on other active faults known globally. Furthermore, the slip repetition during the last three earthquakes accords with the time-predictable recurrence model rather than the characteristic earthquake model. In addition, the spatial extent of the 2014 surface rupture coincides with the distribution of a serpentinite block, suggesting that a relatively low coefficient of friction may account for the unusually frequent earthquakes. These findings affect long-term forecasts of earthquake probability and seismic hazard assessments on active faults.
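The time-predictable recurrence model invoked above can be stated in one line: the next event occurs once tectonic loading has restored the slip released by the previous one. A toy sketch (the slip and slip-rate values are hypothetical round numbers, not the Kamishiro fault's measured parameters):

```python
def time_predictable_interval(coseismic_slip_m, slip_rate_mm_per_yr):
    """Under the time-predictable model, the waiting time to the next
    earthquake is the time tectonic loading needs to restore the slip
    just released; slip varies per event, but the occurrence time is fixed."""
    return coseismic_slip_m * 1000.0 / slip_rate_mm_per_yr

# Hypothetical values: 0.9 m of coseismic slip on a fault loading at 3 mm/yr
t_next = time_predictable_interval(0.9, 3.0)  # 300 years
```

The contrast with the characteristic-earthquake model is that the latter fixes the slip per event rather than the inter-event time.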
International Nuclear Information System (INIS)
Ward, P.L.
1978-01-01
The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures
International Nuclear Information System (INIS)
Panza, G.F.; Romanelli, F.; Vaccari, F. (E-mails: Luis.Decanini@uniroma1.it; Fabrizio.Mollaioli@uniroma1.it)
2002-07-01
The input for seismic risk analysis can be expressed as a description of 'ground shaking scenarios' or as probabilistic maps of the relevant parameters. The probabilistic approach, unavoidably based upon rough assumptions and models (e.g., recurrence and attenuation laws), can be misleading, as it cannot take into account with satisfactory accuracy some of the most important aspects, such as the rupture process, directivity, and site effects. This is evidenced by the comparison of recent recordings with the values predicted by probabilistic methods. We prefer a scenario-based, deterministic approach in view of the limited seismological data, the local irregularity of the occurrence of strong earthquakes, and the multiscale seismicity model, which is capable of reconciling two apparently conflicting ideas: the Characteristic Earthquake concept and the Self-Organized Criticality paradigm. Where the numerical modeling is successfully compared with records, the synthetic seismograms permit microzoning based upon a set of possible scenario earthquakes. Where no recordings are available, the synthetic signals can be used to estimate the ground motion without having to wait for a strong earthquake to occur (pre-disaster microzonation). In both cases the use of modeling is necessary, since the so-called local site effects can be strongly dependent upon the properties of the seismic source and can be properly defined only by means of envelopes. The joint use of reliable synthetic signals and observations permits the computation of advanced hazard indicators (e.g., damaging potential) that take into account local soil properties. The envelope of synthetic elastic energy spectra reproduces the distribution of the energy demand in the frequency range most relevant for seismic engineering. The synthetic accelerograms can be fruitfully used for the design and strengthening of structures, including when innovative techniques, such as seismic isolation, are employed. For these ...
Self-Organized Criticality in an Anisotropic Earthquake Model
Li, Bin-Quan; Wang, Sheng-Jun
2018-03-01
We have made an extensive numerical study of a modified model proposed by Olami, Feder, and Christensen to describe earthquake behavior. Two situations were considered in this paper. In the first, the energy of an unstable site is redistributed to its nearest neighbors randomly, rather than equally, and the site itself is reset to zero. In the second, the energy of an unstable site is redistributed to its nearest neighbors randomly and the site keeps some energy for itself instead of being reset to zero. Different boundary conditions were considered as well. By analyzing the distribution of earthquake sizes, we found that in these situations self-organized criticality can be excited only in the conservative or approximately conservative case. Some evidence indicates that the critical exponents of both situations and of the original OFC model tend to the same value in the conservative case; the only difference is that the avalanche sizes in the original model are larger. This may be closer to the real world; after all, every crustal plate has a different size. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5
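A minimal sketch of the first situation described above, in the spirit of (not identical to) the authors' modified OFC model: an unstable site splits its energy among its neighbors with random rather than equal weights and resets to zero. The lattice size, the dissipation parameter alpha, and the "drive straight to the most loaded site" shortcut are illustrative choices; for simplicity the full 4*alpha share is split among however many neighbors a boundary site has.

```python
import random

def simulate(L=16, steps=500, alpha=0.2, seed=1):
    """OFC-style cellular model with random redistribution: the unstable
    site passes a fraction 4*alpha*e*w_k/sum(w) of its energy e to each
    neighbor k (random weights w_k) and resets to zero. alpha = 0.25 is
    the conservative case for four neighbors. Returns avalanche sizes."""
    rng = random.Random(seed)
    E = [[rng.random() for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(steps):
        # uniform drive until the most loaded site reaches the threshold 1.0
        mi, mj = max(((i, j) for i in range(L) for j in range(L)),
                     key=lambda p: E[p[0]][p[1]])
        dE = 1.0 - E[mi][mj]
        for row in E:
            for j in range(L):
                row[j] += dE
        E[mi][mj] = 1.0
        size, unstable = 0, [(mi, mj)]
        while unstable:                      # relax the avalanche
            i, j = unstable.pop()
            if E[i][j] < 1.0:
                continue
            e, E[i][j] = E[i][j], 0.0
            size += 1
            nbrs = [(i + a, j + b) for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + a < L and 0 <= j + b < L]
            w = [rng.random() for _ in nbrs]
            for (ni, nj), wk in zip(nbrs, w):
                E[ni][nj] += 4 * alpha * e * wk / sum(w)
                if E[ni][nj] >= 1.0:
                    unstable.append((ni, nj))
        sizes.append(size)
    return sizes
```

Histogramming the returned sizes is how one checks for the power-law (self-organized critical) regime as alpha approaches 0.25.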
A recurrent dynamic model for correspondence-based face recognition.
Wolfrum, Philipp; Wolff, Christian; Lücke, Jörg; von der Malsburg, Christoph
2008-12-29
Our aim here is to create a fully neural, functionally competitive, and correspondence-based model for invariant face recognition. By recurrently integrating information about feature similarities, spatial feature relations, and facial structure stored in memory, the system evaluates face identity ("what"-information) and face position ("where"-information) using explicit representations for both. The network consists of three functional layers of processing, (1) an input layer for image representation, (2) a middle layer for recurrent information integration, and (3) a gallery layer for memory storage. Each layer consists of cortical columns as functional building blocks that are modeled in accordance with recent experimental findings. In numerical simulations we apply the system to standard benchmark databases for face recognition. We find that recognition rates of our biologically inspired approach lie in the same range as recognition rates of recent and purely functionally motivated systems.
International Nuclear Information System (INIS)
Urrutia, J D; Bautista, L A; Baccay, E B
2014-01-01
The aim of this study was to develop mathematical models for estimating earthquake casualties such as deaths, number of injured persons, affected families, and total cost of damage. To quantify the direct damage from earthquakes to human beings and property, given the magnitude, intensity, focal depth, epicentre location, and time duration, regression models were constructed. The researchers formulated the models through regression analysis using matrices, with α = 0.01. The study considered thirty destructive earthquakes that hit the Philippines from 1968 to 2012, inclusive. Relevant data about these earthquakes were obtained from the Philippine Institute of Volcanology and Seismology. Data on damages and casualties were gathered from the records of the National Disaster Risk Reduction and Management Council. This study will be of great value in emergency planning and in initiating and updating programs for earthquake hazard reduction in the Philippines, which is an earthquake-prone country.
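The regression-via-matrices step can be sketched as ordinary least squares solved through the normal equations. The predictor set and the data below are hypothetical stand-ins, not the study's actual catalog:

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting (pure Python,
    illustrative only; real work would use a numerical library)."""
    n = len(X)
    X = [[1.0] + list(row) for row in X]     # prepend intercept column
    p = len(X[0])
    A = [[sum(X[i][r] * X[i][s] for i in range(n)) for s in range(p)]
         for r in range(p)]                  # X'X
    c = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]  # X'y
    for k in range(p):                       # forward elimination
        piv = max(range(k, p), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        c[k], c[piv] = c[piv], c[k]
        for r in range(k + 1, p):
            f = A[r][k] / A[k][k]
            for s in range(k, p):
                A[r][s] -= f * A[k][s]
            c[r] -= f * c[k]
    beta = [0.0] * p                         # back substitution
    for k in range(p - 1, -1, -1):
        beta[k] = (c[k] - sum(A[k][s] * beta[s] for s in range(k + 1, p))) / A[k][k]
    return beta                              # [intercept, coefficients...]

# hypothetical predictors: [magnitude, focal depth (km)]; response: log10 casualties
X = [[7.8, 25.0], [6.5, 10.0], [7.0, 33.0], [7.6, 15.0]]
y = [3.2, 1.1, 1.9, 2.8]
coeffs = fit_ols(X, y)
```

The study's actual models use more predictors (intensity, epicentre location, duration) and a significance level of α = 0.01 for variable selection.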
The Global Earthquake Model and Disaster Risk Reduction
Smolka, A. J.
2015-12-01
Advanced, reliable and transparent tools and data to assess earthquake risk are inaccessible to most, especially in less developed regions of the world, while few, if any, globally accepted standards currently allow a meaningful comparison of risk between places. The Global Earthquake Model (GEM) is a collaborative effort that aims to provide models, datasets and state-of-the-art tools for transparent assessment of earthquake hazard and risk. As part of this goal, GEM and its global network of collaborators have developed the OpenQuake engine (an open-source software for hazard and risk calculations), the OpenQuake platform (a web-based portal making GEM's resources and datasets freely available to all potential users), and a suite of tools to support modelers and other experts in the development of hazard, exposure and vulnerability models. These resources are being used extensively across the world in hazard and risk assessment, from individual practitioners to local and national institutions, and in regional projects to inform disaster risk reduction. Practical examples of how GEM is bridging the gap between science and disaster risk reduction are: - Several countries including Switzerland, Turkey, Italy, Ecuador, Papua New Guinea and Taiwan (with more to follow) are computing national seismic hazard using the OpenQuake engine. In some cases these results are used for the definition of actions in building codes. - Technical support, tools and data for the development of hazard, exposure, vulnerability and risk models for regional projects in South America and Sub-Saharan Africa. - Going beyond physical risk, GEM's scorecard approach evaluates local resilience by bringing together neighborhood/community leaders and the risk reduction community as a basis for designing risk reduction programs at various levels of geography. Actual case studies are Lalitpur in the Kathmandu Valley in Nepal and Quito, Ecuador. In agreement with GEM's collaborative approach, all ...
Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes
Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.
2014-01-01
Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
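The Gutenberg-Richter law mentioned above is routinely summarized by its b-value. A common estimator is Aki's maximum-likelihood formula; it is standard practice in catalog analysis, though not necessarily the method this paper uses:

```python
import math

def b_value(mags, mc, dm=0.1):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value:
    b = log10(e) / (mean(M) - (Mc - dM/2)), using only magnitudes at or
    above the completeness threshold mc, with binning width dm."""
    m = [x for x in mags if x >= mc]
    mean = sum(m) / len(m)
    return math.log10(math.e) / (mean - (mc - dm / 2.0))

# a tiny hypothetical catalog with completeness magnitude 2.0
b = b_value([2.1, 2.3, 2.0, 2.8, 2.2, 2.5, 3.1, 2.0], mc=2.0)
```

Globally b is close to 1; departures from it in space or time are one of the out-of-equilibrium signatures that forecasting schemes like the one in this paper try to exploit.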
A new model on the cause of Tangshan earthquakes in 1976
Wang, Jian
2001-09-01
In this paper the shortcomings of existing explanations of the cause of the 1976 Tangshan earthquakes are pointed out. Seismic phenomena around the Tangshan earthquakes are analyzed synthetically, and it is noted that the most prominent one, occurring from 1973 to 1975, is a dense concentration of M_L = 4 events while M_L = 3 and M_L = 2 activity was low in the same space-time interval. We interpret this phenomenon as reflecting the relative integrity of the crustal medium under high regional stress. Assuming that seismicity in the circumjacent region reflects the extent to which the surrounding plates press against the Chinese mainland, it is inferred that multiple dynamical processes operated in the North China region in the 1970s, supplying the basic dynamical source of the Tangshan earthquakes. A model of multiple dynamical processes and local weakening of the crust is proposed to explain the cause of the Tangshan earthquakes. This model can account for many seismic phenomena related to them.
Phase response curves for models of earthquake fault dynamics
Energy Technology Data Exchange (ETDEWEB)
Franović, Igor, E-mail: franovic@ipb.ac.rs [Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Kostić, Srdjan [Institute for the Development of Water Resources “Jaroslav Černi,” Jaroslava Černog 80, 11226 Belgrade (Serbia); Perc, Matjaž [Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, SI-2000 Maribor (Slovenia); CAMTP—Center for Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor (Slovenia); Klinshov, Vladimir [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); Nekorkin, Vladimir [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); University of Nizhny Novgorod, 23 Prospekt Gagarina, 603950 Nizhny Novgorod (Russian Federation); Kurths, Jürgen [Institute of Applied Physics of the Russian Academy of Sciences, 46 Ulyanov Street, 603950 Nizhny Novgorod (Russian Federation); Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)
2016-06-15
We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.
Phase response curves for models of earthquake fault dynamics
International Nuclear Information System (INIS)
Franović, Igor; Kostić, Srdjan; Perc, Matjaž; Klinshov, Vladimir; Nekorkin, Vladimir; Kurths, Jürgen
2016-01-01
We systematically study effects of external perturbations on models describing earthquake fault dynamics. The latter are based on the framework of the Burridge-Knopoff spring-block system, including the cases of a simple mono-block fault, as well as the paradigmatic complex faults made up of two identical or distinct blocks. The blocks exhibit relaxation oscillations, which are representative for the stick-slip behavior typical for earthquake dynamics. Our analysis is carried out by determining the phase response curves of first and second order. For a mono-block fault, we consider the impact of a single and two successive pulse perturbations, further demonstrating how the profile of phase response curves depends on the fault parameters. For a homogeneous two-block fault, our focus is on the scenario where each of the blocks is influenced by a single pulse, whereas for heterogeneous faults, we analyze how the response of the system depends on whether the stimulus is applied to the block having a shorter or a longer oscillation period.
DEFF Research Database (Denmark)
Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.
The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.
Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.
2008-01-01
We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half of a modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth of an MMI unit (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
Models and detection of spontaneous recurrent seizures in laboratory rodents
Directory of Open Access Journals (Sweden)
Bin Gu
2017-07-01
Epilepsy, characterized by spontaneous recurrent seizures (SRS), is a serious and common neurological disorder afflicting an estimated 1% of the population worldwide. Animal experiments, especially those utilizing small laboratory rodents, remain essential for understanding the fundamental mechanisms underlying epilepsy and for preventing, diagnosing, and treating this disease. While much attention has been focused on epileptogenesis in animal models of epilepsy, there is little discussion of SRS, the hallmark of epilepsy. This is in part due to the technical difficulties of rigorous SRS detection. In this review, we comprehensively summarize both genetic and acquired models of SRS and discuss the methodology used to monitor and detect SRS in mice and rats.
Optimizing Markovian modeling of chaotic systems with recurrent neural networks
International Nuclear Information System (INIS)
Cechin, Adelmo L.; Pechmann, Denise R.; Oliveira, Luiz P.L. de
2008-01-01
In this paper, we propose a methodology for optimizing the modeling of a one-dimensional chaotic time series with a Markov chain. The model is extracted from a recurrent neural network trained on the attractor reconstructed from the data set. Each state of the obtained Markov chain is a region of the reconstructed state space where the dynamics is approximated by a specific piecewise-linear map obtained from the network. The Markov chain represents the dynamics of the time series in its statistical essence. An application to a time series obtained from the Lorenz system is included.
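The pipeline can be illustrated without the neural network: partition the state space into regions and count transitions between them to obtain the Markov chain. In this sketch, equal-width bins stand in for the RNN-derived piecewise-linear regions, and the logistic map stands in for the Lorenz series:

```python
def transition_matrix(series, n_states=4):
    """Empirical Markov transition matrix for a scalar time series, using
    equal-width bins over [min, max] as states (a simple stand-in for the
    piecewise-linear regions extracted from the trained network)."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_states or 1.0
    state = [min(int((x - lo) / width), n_states - 1) for x in series]
    P = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(state, state[1:]):       # count observed transitions
        P[a][b] += 1.0
    for row in P:                            # normalize rows to probabilities
        s = sum(row)
        if s:
            for j in range(n_states):
                row[j] /= s
    return P

# chaotic series from the fully chaotic logistic map x -> 4x(1-x)
xs, x = [], 0.3
for _ in range(2000):
    xs.append(x)
    x = 4.0 * x * (1.0 - x)
P = transition_matrix(xs)
```

The resulting stochastic matrix captures the series "in its statistical essence": its stationary distribution approximates the invariant measure of the map.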
Rockwell, T. K.
2010-12-01
A long paleoseismic record at Hog Lake on the central San Jacinto fault (SJF) in southern California documents evidence for 18 surface ruptures in the past 3.8-4 ka. This yields a long-term recurrence interval of about 210 years, consistent with its slip rate of ~16 mm/yr and field observations of 3-4 m of displacement per event. However, during the past 3800 years, the fault has switched between a quasi-periodic mode of earthquake production, during which the recurrence interval is similar to the long-term average, and clustered behavior with inter-event periods as short as a few decades. There are also some periods as long as 450 years during which there were no surface ruptures, and these periods are commonly followed by one to several closely timed ruptures. The coefficient of variation (CV) for the timing of these earthquakes is about 0.6 for the past 4000 years (17 intervals). Similar behavior has been observed on the San Andreas Fault (SAF) south of the Transverse Ranges, where clusters of earthquakes have been followed by periods of lower seismic production, and the CV is as high as 0.7 for some portions of the fault. In contrast, the central North Anatolian Fault (NAF) in Turkey, which ruptured in 1944, appears to have produced ruptures with similar displacement at fairly regular intervals for the past 1600 years. With a CV of 0.16 for timing, and close to 0.1 for displacement, the 1944 rupture segment near Gerede appears to have been both periodic and characteristic. The SJF and SAF are part of a broad plate boundary system with multiple parallel strands with significant slip rates. Additional faults lie to the east (Eastern California shear zone) and west (faults of the LA basin and the southern California Borderland), which makes the southern SAF system a complex and broad plate boundary zone. In comparison, the 1944 rupture section of the NAF is simple, straight, and highly localized, which contrasts with the complex system of parallel faults in southern ...
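The coefficient of variation quoted above is simply the standard deviation of the inter-event times divided by their mean. A sketch with hypothetical event dates:

```python
def coeff_of_variation(event_years):
    """CV of inter-event times: ~0 means periodic recurrence, 1 matches a
    Poisson process, and values near or above 1 indicate clustering."""
    dt = [b - a for a, b in zip(event_years, event_years[1:])]
    mean = sum(dt) / len(dt)
    var = sum((t - mean) ** 2 for t in dt) / len(dt)
    return var ** 0.5 / mean

# hypothetical clustered record: two bursts separated by a long quiet period
cv = coeff_of_variation([0.0, 10.0, 20.0, 400.0, 410.0])
```

By this yardstick the NAF segment near Gerede (CV ~ 0.16) is nearly periodic, while Hog Lake (CV ~ 0.6) sits between periodic and Poissonian behavior.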
Earthquake correlations and networks: A comparative study
Krishna Mohan, T. R.; Revathi, P. G.
2011-04-01
We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.
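The Baiesi-Paczuski measure referenced above estimates, for a pair of events, how many earthquakes would be expected by chance in the intervening space-time-magnitude window; a small value marks a strongly correlated (likely causally connected) pair. A sketch of the original form (the b-value, fractal dimension d_f, and constant c are illustrative defaults; the paper uses a variation of this metric):

```python
def bp_metric(t_days, r_km, m_prior, b=1.0, d_f=1.6, c=1.0):
    """Baiesi-Paczuski correlation measure n_ij = c * t * r**d_f * 10**(-b*m):
    the expected number of earthquakes of magnitude >= m_prior (the earlier
    event's magnitude) within time t and distance r. Smaller n_ij means the
    pair is less likely to occur by chance, i.e. more strongly correlated."""
    return c * t_days * r_km ** d_f * 10.0 ** (-b * m_prior)

# candidate parents for an event 10 days later and 10 km away:
n_small = bp_metric(10.0, 10.0, m_prior=5.0)  # after a M5: strongly correlated
n_large = bp_metric(10.0, 10.0, m_prior=3.0)  # after a M3: weaker link
```

Linking each event to the predecessor minimizing this measure, subject to a threshold, is what yields the directed earthquake network analyzed in the abstract.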
Earthquake correlations and networks: A comparative study
International Nuclear Information System (INIS)
Krishna Mohan, T. R.; Revathi, P. G.
2011-01-01
We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.
Sobolev, Stephan V.; Muldashev, Iskander A.
2017-12-01
Subduction is a substantially multiscale process in which stresses are built up by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by multiple subsequent relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process on time scales from one minute to a million years. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time step algorithm, recreates the deformation process as observed naturally during single and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or a day after the earthquake are controlled by viscoelastic relaxation in the several hundred kilometers of mantle landward of the trench, and not by afterslip localized at the fault as is currently believed. Our model replicates the centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the occurrence of the largest earthquakes, like the Great 1960 Chile Earthquake.
Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes
Cheong, S.A.; Tan, T.L.; Chen, C.-C.; Chang, W.-L.; Liu, Z.; Chew, L.Y.; Sloot, P.M.A.; Johnson, N.F.
2014-01-01
Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study come from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes, some of them also recorded by the Incorporated Research Institutions for Seismology (IRIS) and widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations, close to the oil/gas fields in Kuwait. The earthquakes are generally small, and the triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that most likely these local earthquakes occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to cause damage to local structures built without seismic design criteria.
Energy Technology Data Exchange (ETDEWEB)
Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L
2007-02-09
We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.
Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.
2011-09-01
We present a method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.
Embedding recurrent neural networks into predator-prey models.
Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon
1999-03-01
We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models-also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
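The target format of this embedding, a Lotka-Volterra (predator-prey) system, can be integrated numerically. The sketch below is not the paper's quasi-monomial transformation itself; it simply integrates the classic two-species system x' = x(a − by), y' = y(−c + dx) with an explicit Euler scheme, with all parameter values chosen purely for illustration.

```python
def simulate_lv(a=1.0, b=1.0, c=1.0, d=1.0,
                x0=2.0, y0=1.0, dt=1e-3, steps=20000):
    """Euler integration of a two-species Lotka-Volterra system."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = x * (a - b * y)   # prey: growth minus predation
        dy = y * (-c + d * x)  # predator: starvation plus feeding
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate_lv()
# Trajectories stay positive and oscillate around the fixed point (c/d, a/b).
```

With these parameters the fixed point is (1, 1), and a trajectory started away from it cycles around it indefinitely, which is the oscillatory behavior the embedded recurrent networks inherit.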
Kobayashi, R.; Koketsu, K.
2008-12-01
Great earthquakes along the Sagami trough, where the Philippine Sea slab is subducting, have occurred repeatedly. The 1703 Genroku and 1923 (Taisho) Kanto earthquakes (M 8.2 and M 7.9, respectively) are typical examples, and they caused severe damage in the metropolitan area. The recurrence periods of Taisho- and Genroku-type earthquakes inferred from studies of wave-cut terraces are about 200-400 and 2000 years, respectively (e.g., Earthquake Research Committee, 2004). We have inferred the source process of the 1923 Kanto earthquake from geodetic, teleseismic, and strong-motion data (Kobayashi and Koketsu, 2005). Two asperities of the 1923 Kanto earthquake are located around the western part of Kanagawa prefecture (the base of the Izu peninsula) and around the Miura peninsula. After we adopted an updated fault plane model, based on a recent model of the Philippine Sea slab, the asperity around the Miura peninsula moved to the north (Sato et al., 2005). We have also investigated the slip distribution of the 1703 Genroku earthquake. We used crustal uplift and subsidence data investigated by Shishikura (2003) and inferred the slip distribution using the same fault geometry as for the 1923 Kanto earthquake. The peak slip of 16 m is located in the southern part of the Boso peninsula. The shape of the upper surface of the Philippine Sea slab is important for constraining the extent of the asperities. Sato et al. (2005) presented the shape in the inland part, but there is less information in the oceanic part except for Tokyo Bay. Kimura (2006) and Takeda et al. (2007) presented the shape in the oceanic part. In this study, we compiled these slab models and planned to reanalyze the slip distributions of the 1703 and 1923 earthquakes. We developed a new curved fault plane on the plate boundary between the Philippine Sea slab and the inland plate. The curved fault plane was divided into 56 triangular subfaults. Point sources for the Green's function calculations are located at centroids
Research and development of earthquake-resistant structure model for nuclear fuel facility
Energy Technology Data Exchange (ETDEWEB)
Uryu, Mitsuru; Terada, S.; Shioya, I. [and others]
1999-05-01
It is important for a nuclear fuel facility to reduce the earthquake input intensity on the upper part of the building. To study the response of the building to earthquakes, an earthquake-resistant structure model was constructed. The structure model weighs 90 tons and is supported by multiple layers of natural rubber and steel. A weight-support device called a 'soft landing' device is also installed to prevent the structure model from losing its function at excess deformation. The soft-landing device is made of Teflon. Dynamic response characteristics of the structure model under sine waves and simulated seismic waves were measured and analyzed. Soil tests of the fourth geologic stratum on which the structure model is sited were made to confirm the safety of soil-structure interactions under earthquakes. (M. Suetake)
A model of return intervals between earthquake events
Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger
2016-06-01
Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical of anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting-time distribution and the structure of clusters of aftershocks, not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting-time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
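The two ingredients of the simple model described above can be sketched directly: mainshocks drawn from a renewal process with power-law (Pareto) waiting times, and aftershocks from a nonhomogeneous Poisson process whose delays follow an Omori-law density. All parameter values below are illustrative, not those fitted in the study.

```python
import math
import random

def poisson_sample(lam):
    """Knuth's method; adequate for the small rates used here."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

def omori_delay(c, p, u):
    """Inverse-CDF sample of an Omori-law delay, pdf ~ (t + c)^(-p), p > 1."""
    return c * ((1.0 - u) ** (-1.0 / (p - 1.0)) - 1.0)

def simulate_catalog(n_main=50, tail=1.5, t_min=1.0,
                     mean_aftershocks=5.0, c=0.1, p=1.8, seed=1):
    """Mainshocks: renewal process with Pareto waiting times.
    Aftershocks: Poisson count per mainshock, Omori-law delays."""
    random.seed(seed)
    t, catalog = 0.0, []
    for _ in range(n_main):
        t += t_min * random.random() ** (-1.0 / tail)  # power-law waiting time
        catalog.append((t, "main"))
        for _ in range(poisson_sample(mean_aftershocks)):
            catalog.append((t + omori_delay(c, p, random.random()), "after"))
    catalog.sort()
    return catalog
```

Reshuffling experiments of the kind described in the abstract amount to permuting one of these two ingredients (e.g., the mainshock waiting times) while holding the other fixed, and re-examining the scaling of the merged catalog.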
Depths of Intraplate Indian Ocean Earthquakes from Waveform Modeling
Baca, A. J.; Polet, J.
2014-12-01
The Indian Ocean is a region of complex tectonics and anomalous seismicity. The ocean floor in this region exhibits many bathymetric features, most notably the multiple inactive fracture zones within the Wharton Basin and the Ninetyeast Ridge. The 11 April 2012 Mw 8.7 and 8.2 strike-slip events that took place in this area are unique because their rupture appears to have extended to a depth where brittle failure, and thus seismic activity, was considered to be impossible. We analyze multiple intraplate earthquakes that have occurred throughout the Indian Ocean to better constrain their focal depths, in order to enhance our understanding of how deep intraplate events occur and, more importantly, to determine whether the ruptures originate within a ductile regime. Selected events are located within the Indian Ocean away from major plate boundaries. A majority are within the deforming Indo-Australian tectonic plate. Events primarily display thrust mechanisms, with some strike-slip or a combination of the two. All events are between Mw 5.5 and 6.5. Events were selected this way in order to facilitate the analysis of teleseismic waveforms using a point-source approximation. From these criteria we gathered a suite of 15 intraplate events. Synthetic seismograms of direct P waves and depth phases are computed using a 1-D propagator-matrix approach and compared with global teleseismic waveform data to determine a best depth for each event. To generate our synthetic seismograms we utilized CRUST1.0, a global crustal model, to obtain velocity values at the hypocenters of our events. Our waveform analysis reveals that our depths diverge from the Global Centroid Moment Tensor (GCMT) depths, which underestimate the depths of our deep lithospheric events and overestimate our shallow depths by as much as 17 km. We determined a depth of 45 km for our deepest event. We will show a comparison of our final earthquake depths with the lithospheric thickness based on
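The depth sensitivity of the depth phases used here comes from a simple relation: the pP phase leaves the source upward, reflects at the free surface, and trails the direct P by roughly Δt ≈ 2h·cos(i)/v, where h is focal depth, i the takeoff angle, and v the near-source P velocity. A minimal sketch with illustrative values (a real analysis would use a layered velocity model, as in the abstract):

```python
import math

def depth_from_pP(delta_t, v_p=6.5, takeoff_deg=20.0):
    """Focal depth (km) from a pP-P differential time (s).

    Assumes a homogeneous near-source velocity v_p (km/s) and a straight
    upgoing ray at the given takeoff angle; both values are illustrative.
    """
    return delta_t * v_p / (2.0 * math.cos(math.radians(takeoff_deg)))

# A larger pP-P delay implies a deeper source.
```

For example, a pP-P delay of roughly 13 s under these assumptions maps to a depth in the mid-40 km range, comparable to the deepest event discussed above.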
Initial Earthquake Centrifuge Model Experiments for the Study of Liquefaction
National Research Council Canada - National Science Library
Steedman, R
1998-01-01
.... These are intended to gather data suitable for the development of improved design approaches for the prediction of liquefaction under earthquake loading using the new centrifuge facility at the WES...
Stochastic finite-fault modelling of strong earthquakes in Narmada ...
Indian Academy of Sciences (India)
The prevailing hazard evidenced by the earthquake-related fatalities in the region imparts significance to the investigations .... tures and sudden fault movement due to stress concentration (Kayal 2008). ..... nificantly improved the present work.
Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion
DEFF Research Database (Denmark)
Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.
1997-01-01
This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modeling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results.
Earthquake occurrence as stochastic event: (1) theoretical models
Energy Technology Data Exchange (ETDEWEB)
Basili, A.; Basili, M.; Cagnetti, V.; Colombino, A.; Jorio, V.M.; Mosiello, R.; Norelli, F.; Pacilio, N.; Polinari, D.
1977-01-01
The present article aims to link the stochastic approach to the description of earthquake processes suggested by Lomnitz with the experimental evidence, reached by Schenkova, that the time distribution of some earthquake occurrences is better described by a Negative Binomial distribution than by a Poisson distribution. The final purpose of the stochastic approach might be a new way of labeling a given area in terms of seismic risk.
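The distinction the article builds on is dispersion: a Poisson count has variance equal to its mean, while a negative binomial count (equivalently, a Poisson count whose rate is itself gamma-distributed) is overdispersed. A small numerical sketch, with illustrative parameters:

```python
import math
import random

def poisson_sample(lam):
    """Knuth's method; fine for small rates."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

def dispersion(samples):
    """Variance-to-mean ratio: ~1 for Poisson, > 1 for negative binomial."""
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return v / m

random.seed(0)
pois = [poisson_sample(5.0) for _ in range(5000)]
# Gamma-mixed Poisson = negative binomial: same mean (5), heavier tail.
negb = [poisson_sample(random.gammavariate(2.0, 2.5)) for _ in range(5000)]
```

An observed variance-to-mean ratio well above 1 in earthquake counts per time window is exactly the kind of evidence that favors the Negative Binomial over the Poisson description.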
Short-term and long-term earthquake occurrence models for Italy: ETES, ERS and LTST
Directory of Open Access Journals (Sweden)
Maura Murru
2010-11-01
This study describes three earthquake occurrence models as applied to the whole Italian territory, to assess the occurrence probabilities of future (M ≥ 5.0) earthquakes: two short-term (24 hour) models, and one long-term (5 and 10 year) model. The first model for short-term forecasts is a purely stochastic epidemic-type earthquake sequence (ETES) model. The second short-term model is an epidemic rate-state (ERS) forecast based on a model that is physically constrained by applying the Dieterich rate-state constitutive law to earthquake clustering. The third forecast is based on a long-term stress transfer (LTST) model that considers the perturbations of earthquake probability for interacting faults by static Coulomb stress changes. These models have been submitted to the Collaboratory for the Study of Earthquake Predictability (CSEP) for forecast testing for Italy (ETH Zurich), and they were locked down to test their validity on real data in a future setting starting from August 1, 2009.
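The epidemic-type models above share one core object, the conditional intensity λ(t) = μ + Σ_{t_i < t} K·10^(α(m_i − m0)) / (t − t_i + c)^p: a constant background rate plus an Omori-decaying, magnitude-scaled contribution from each past event. A minimal sketch of evaluating it, with illustrative parameter values rather than the fitted Italian ones:

```python
def etas_rate(t, events, mu=0.2, K=0.05, alpha=1.0, m0=5.0, c=0.01, p=1.1):
    """ETAS-style conditional intensity at time t (days).

    events: list of (t_i, m_i) pairs for past earthquakes.
    All parameter values are illustrative placeholders.
    """
    rate = mu  # background seismicity rate
    for t_i, m_i in events:
        if t_i < t:
            # magnitude-scaled productivity with Omori-law temporal decay
            rate += K * 10.0 ** (alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

events = [(0.0, 6.3), (2.0, 5.1)]
# The rate is highest just after a large event and decays back toward mu.
```

A 24-hour forecast of the kind described above is, in essence, the integral of this intensity over the next day, converted to an occurrence probability.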
The finite-difference and finite-element modeling of seismic wave propagation and earthquake motion
International Nuclear Information System (INIS)
Moszo, P.; Kristek, J.; Galis, M.; Pazak, P.; Balazovijech, M.
2006-01-01
Numerical modeling of seismic wave propagation and earthquake motion is an irreplaceable tool in the investigation of the Earth's structure, processes in the Earth, and particularly earthquake phenomena. Among various numerical methods, the finite-difference method is dominant in the modeling of earthquake motion. Moreover, it is becoming increasingly important in seismic exploration and structural modeling. At the same time, we are convinced that the best time of the finite-difference method in seismology is yet to come. This monograph provides a tutorial and detailed introduction to the application of the finite-difference, finite-element, and hybrid finite-difference-finite-element methods to the modeling of seismic wave propagation and earthquake motion. The text does not cover all topics and aspects of the methods. We focus on those to which we have contributed. (Author)
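As a flavor of the finite-difference approach, the sketch below advances the 1-D scalar wave equation u_tt = c²·u_xx with second-order central differences, initialized as a rightward-traveling Gaussian pulse. This is the textbook explicit scheme, not the monograph's staggered-grid or hybrid formulations; grid and pulse parameters are illustrative.

```python
import math

def fd_wave(nx=200, nt=100, c=1.0, dx=1.0, dt=0.5):
    """Explicit 2nd-order FD for u_tt = c^2 u_xx (fixed ends, CFL = c*dt/dx)."""
    r2 = (c * dt / dx) ** 2
    gauss = lambda x: math.exp(-((x - 50.0) ** 2) / 50.0)
    # Two initial time levels encode a rightward-moving pulse u(x, t) = f(x - c t)
    u_prev = [gauss(i * dx) for i in range(nx)]
    u = [gauss(i * dx - c * dt) for i in range(nx)]
    for _ in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            u_next[i] = (2 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

# After nt steps the pulse center has advanced about c*dt*nt grid units.
```

With the Courant number c·dt/dx kept at or below 1, the scheme is stable and the pulse travels with little distortion; this stability condition is the 1-D analogue of the constraints governing the 3-D earthquake-motion simulations the monograph treats.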
Variability of dynamic source parameters inferred from kinematic models of past earthquakes
Causse, M.; Dalguer, L. A.; Mai, Paul Martin
2013-01-01
We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving
Recurrent Neural Network Model for Constructive Peptide Design.
Müller, Alex T; Hiss, Jan A; Schneider, Gisbert
2018-02-26
We present a generative long short-term memory (LSTM) recurrent neural network (RNN) for combinatorial de novo peptide design. RNN models capture patterns in sequential data and generate new data instances from the learned context. Amino acid sequences represent a suitable input for these machine-learning models. Generative models trained on peptide sequences could therefore facilitate the design of bespoke peptide libraries. We trained RNNs with LSTM units on pattern recognition of helical antimicrobial peptides and used the resulting model for de novo sequence generation. Of these sequences, 82% were predicted to be active antimicrobial peptides compared to 65% of randomly sampled sequences with the same amino acid distribution as the training set. The generated sequences also lie closer to the training data than manually designed amphipathic helices. The results of this study showcase the ability of LSTM RNNs to construct new amino acid sequences within the applicability domain of the model and motivate their prospective application to peptide and protein design without the need for the exhaustive enumeration of sequence libraries.
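Generation from such a model is an autoregressive sampling loop: at each step the network emits a probability distribution over the next residue given the sequence so far, and one residue is drawn from it. The sketch below replaces the trained LSTM with a fixed toy next-residue table (purely illustrative, restricted to three amino acids) so that the loop itself is visible.

```python
import random

# Toy conditional distributions standing in for the LSTM's output layer.
NEXT = {
    "start": (("K", 0.4), ("L", 0.4), ("A", 0.2)),
    "K": (("L", 0.6), ("A", 0.4)),
    "L": (("K", 0.5), ("L", 0.3), ("A", 0.2)),
    "A": (("K", 0.7), ("L", 0.3)),
}

def sample_peptide(length=12, seed=None):
    """Autoregressively sample one peptide sequence from NEXT."""
    rng = random.Random(seed)
    seq, prev = [], "start"
    for _ in range(length):
        residues, weights = zip(*NEXT[prev])
        prev = rng.choices(residues, weights=weights)[0]
        seq.append(prev)
    return "".join(seq)
```

In the real model the table lookup is replaced by a forward pass through the LSTM conditioned on the full generated prefix, which is what lets it capture the longer-range patterns of helical antimicrobial peptides.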
So, Emily; Spence, Robin
2013-01-01
Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapid casualty estimation after an event for humanitarian response. Both of these events resulted in surprisingly high numbers of deaths, casualties and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, with a further 11,000 people with serious or moderate injuries and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
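The semi-empirical chain described above can be sketched as: occupants per building class → damage rate at the given intensity → fatality rate given damage. All rates and stock numbers below are hypothetical placeholders, not CEQID-calibrated values.

```python
def estimate_fatalities(building_stock, damage_rate, fatality_rate, intensity):
    """Semi-empirical casualty estimate.

    building_stock: {class: occupants exposed}
    damage_rate:    {class: {intensity: P(heavy damage)}}
    fatality_rate:  {class: P(death | heavy damage)}
    """
    total = 0.0
    for cls, occupants in building_stock.items():
        total += occupants * damage_rate[cls][intensity] * fatality_rate[cls]
    return total

# Hypothetical inputs for two building classes at MMI VIII and IX.
stock = {"masonry": 50000, "rc_frame": 30000}
dmg = {"masonry": {8: 0.10, 9: 0.30}, "rc_frame": {8: 0.02, 9: 0.08}}
fat = {"masonry": 0.15, "rc_frame": 0.08}
```

The structure makes the paper's point concrete: with the same exposed population, the vulnerable masonry class dominates the estimate, and the total rises steeply with intensity.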
A Model for Low-Frequency Earthquake Slip
Chestler, S. R.; Creager, K. C.
2017-12-01
Using high-resolution relative low-frequency earthquake (LFE) locations, we calculate the patch areas (Ap) of LFE families. During episodic tremor and slip (ETS) events, we define At as the area that slips during LFEs and St as the total amount of summed LFE slip. Using observed and calculated values for Ap, At, and St, we evaluate two end-member models for LFE slip within an LFE family patch. In the ductile matrix model, LFEs produce 100% of the observed ETS slip (Sets) in distinct subpatches (i.e., At ≪ Ap). In the connected patch model, At = Ap, but St ≪ Sets. LFEs cluster into 45 LFE families. Spatial gaps (~10 to 20 km) between LFE family clusters and smaller gaps within LFE family clusters serve as evidence that LFE slip is heterogeneous on multiple spatial scales. We find that LFE slip accounts for only ~0.2% of the slip within the slow slip zone. There are depth-dependent trends in the characteristic (mean) moment and in the number of LFEs, both during ETS events only (Mc,ETS and NT,ETS) and over the entire ETS cycle (Mc,all and NT,all). During ETS, Mc decreases with downdip distance but NT does not change. Over the entire ETS cycle, Mc decreases with downdip distance, but NT increases. These observations indicate that deeper LFE slip occurs through a larger number (800-1,200) of small LFEs, while updip LFE slip occurs primarily during ETS events through a smaller number (200-600) of larger LFEs. This could indicate that the plate interface is stronger and has a higher stress threshold updip.
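The slip bookkeeping above rests on the standard seismic-moment relation M0 = μ·A·s, with M0 obtained from moment magnitude via Mw = (2/3)(log10 M0 − 9.1). A sketch with illustrative LFE-scale numbers (rigidity and patch area are assumptions, not values from the study):

```python
def moment_from_mw(mw):
    """Seismic moment in N·m from moment magnitude."""
    return 10.0 ** (1.5 * mw + 9.1)

def slip_from_moment(m0, area_m2, mu=3.0e10):
    """Average slip s = M0 / (mu * A); rigidity mu in Pa."""
    return m0 / (mu * area_m2)

# e.g. an Mw 1.5 LFE on a hypothetical 0.1 km^2 patch
m0 = moment_from_mw(1.5)
slip_mm = 1e3 * slip_from_moment(m0, 0.1e6)  # slip in millimeters
```

The result is a slip of a small fraction of a millimeter per event, which is why summed LFE slip can plausibly account for only a tiny percentage of the total slow-slip budget, as the abstract reports.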
One-Dimensional Dynamical Modeling of Earthquakes: A Review
Directory of Open Access Journals (Sweden)
Jeen-Hwa Wang
2008-01-01
Studies of the power-law relations of seismicity and earthquake source parameters based on the one-dimensional (1-D) Burridge-Knopoff (BK) dynamical lattice model, especially those conducted by Taiwan's scientists, are reviewed in this article. In general, velocity- and/or state-dependent friction is considered to control faulting. A uniform distribution of breaking strengths (i.e., the static friction strengths) is taken into account in some studies, and inhomogeneous distributions in others. The scaling relations in these studies include: Omori's law, the magnitude-frequency or energy-frequency relation, the relation between source duration time and seismic moment, the relation between rupture length and seismic moment, the frequency-length relation, and the source power spectra. The main parameters of the 1-D BK dynamical lattice model include: the decreasing rate (r) of dynamic friction strength with sliding velocity; the type and degree of heterogeneity of the distribution of breaking strengths; the stiffness ratio (i.e., the ratio between the stiffness of the coil spring connecting two mass elements and that of the leaf spring linking a mass element to the moving plate); the frictional drop ratio of the minimum dynamic friction strength to the breaking strength; and the maximum breaking strength. For some authors, the distribution of the breaking strengths was considered to be a fractal function; hence, the fractal dimension of such a distribution is also a significant parameter. Comparison between observed scaling laws and simulation results shows that the 1-D BK dynamical lattice model acceptably approximates fault dynamics.
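A minimal quasi-static cousin of the BK lattice (closer in spirit to Olami-Feder-Christensen force redistribution than to the full velocity-dependent-friction dynamics reviewed above) can be sketched as follows; thresholds, the friction drop, and the redistribution fraction are all illustrative.

```python
import random

def bk_quasi_static(n=50, f_static=1.0, f_dynamic=0.2,
                    alpha=0.2, n_events=200, seed=7):
    """Drive a 1-D block array to failure repeatedly.

    When a block's force reaches the static threshold it drops to the
    dynamic level and sheds a fraction alpha of the released force to
    each neighbor, possibly triggering a cascade. Returns event sizes
    (number of block slips per event).
    """
    rng = random.Random(seed)
    force = [rng.uniform(0.0, f_static) for _ in range(n)]
    sizes = []
    for _ in range(n_events):
        # uniform loading until the most-stressed block reaches threshold
        load = f_static - max(force)
        force = [f + load for f in force]
        unstable = [force.index(max(force))]
        size = 0
        while unstable:
            i = unstable.pop()
            if force[i] < f_static:
                continue
            released = force[i] - f_dynamic
            force[i] = f_dynamic
            size += 1
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    force[j] += alpha * released
                    if force[j] >= f_static:
                        unstable.append(j)
        sizes.append(size)
    return sizes
```

The event-size list produced by such toy models is what gets compared against the magnitude-frequency and other scaling relations listed in the review.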
Seismomagnetic models for earthquakes in the eastern part of Izu Peninsula, Central Japan
Directory of Open Access Journals (Sweden)
Y. Ishikawa
1997-06-01
Seismomagnetic changes accompanying four damaging earthquakes in the eastern part of Izu Peninsula, Central Japan, are explained by the piezomagnetic effect. Most of the data were obtained by repeat surveys. Although these data suffered from electric railway noise, significant magnetic changes were detected at points close to the earthquake faults. Coseismic changes are well interpreted by piezomagnetic models in the case of the 1978 Near Izu-Oshima (M 7.0) and the 1980 East Off Izu Peninsula (M 6.7) earthquakes. A large total-intensity change of up to 5 nT was observed at a survey point almost directly above the epicenter of the 1976 Kawazu (M 5.4) earthquake. This change is not explained by a single-fault model; a two-segment fault is suggested. Remarkable precursory and coseismic changes in the total force intensity were observed at KWZ station in association with the 1978 Higashi-Izu (M 4.9) earthquake. KWZ station is located very close to a buried subsidiary fault of the M 7.0 Near Izu-Oshima earthquake, which moved aseismically at the time of the M 7.0 quake. The precursory magnetic change before the M 4.9 quake is ascribed to aseismic faulting on this buried fault, and the coseismic rebound to enlargement of the slipping surface at the time of the M 4.9 quake. This implies that we observed the formation process of the earthquake nucleation zone via the magnetic field.
Evaluation of earthquake-triggered landslides in el Salvador using a Gis based newmark model
García Rodríguez, María José; Havenith, Hans; Benito Oterino, Belen
2008-01-01
In this work, a model for evaluating earthquake-triggered landslide hazard following the Newmark methodology is developed in a Geographical Information System (GIS). It is applied to El Salvador, one of the most seismically active regions in Central America, where the last severe destructive earthquakes occurred on January 13th and February 13th, 2001. The first of these earthquakes triggered more than 500 landslides and killed at least 844 people. This study is centred on the area (10x6km) w...
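The core of the Newmark methodology is the rigid sliding block: a slope has a critical acceleration a_c = (FS − 1)·g·sin(α), the block slides only while ground acceleration exceeds a_c, and the exceedances are integrated twice into a permanent displacement. A sketch with a synthetic one-cycle acceleration pulse (slope geometry, factor of safety, and the pulse are all illustrative):

```python
import math

def critical_acceleration(factor_of_safety, slope_deg, g=9.81):
    """Newmark critical acceleration a_c = (FS - 1) * g * sin(slope), m/s^2."""
    return (factor_of_safety - 1.0) * g * math.sin(math.radians(slope_deg))

def newmark_displacement(accel, dt, a_c):
    """Rigid-block sliding: integrate (a - a_c) while the block moves.

    The block starts sliding when a > a_c and keeps decelerating until
    its velocity returns to zero; displacement accumulates meanwhile.
    """
    v, d = 0.0, 0.0
    for a in accel:
        if a > a_c or v > 0.0:
            v = max(0.0, v + (a - a_c) * dt)
            d += v * dt
    return d

dt = 0.01
a_c = critical_acceleration(1.3, 25.0)  # roughly 1.24 m/s^2
pulse = [4.0 * math.sin(2.0 * math.pi * i * dt) for i in range(100)]  # one 1 Hz cycle
d = newmark_displacement(pulse, dt, a_c)  # permanent displacement, m
```

In the GIS implementation described above, a_c is computed cell by cell from slope and strength maps, and the predicted displacement is then thresholded into landslide hazard classes.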
A ternary logic model for recurrent neuromime networks with delay.
Hangartner, R D; Cull, P
1995-07-01
In contrast to popular recurrent artificial neural network (RANN) models, biological neural networks have unsymmetric structures and incorporate significant delays as a result of axonal propagation. Consequently, biologically inspired neural network models are more accurately described by nonlinear differential-delay equations than by nonlinear ordinary differential equations (ODEs), and the standard techniques for studying the dynamics of RANNs are wholly inadequate for these models. This paper develops a ternary-logic-based method for analyzing these networks. Key to the technique is the realization that a nonzero delay produces a bounded stability region. This result significantly simplifies the construction of sufficient conditions for characterizing the network equilibria. If the network gain is large enough, each equilibrium can be classified as either asymptotically stable or unstable. To illustrate the analysis technique, the swim central pattern generator (CPG) of the sea slug Tritonia diomedea is examined. For a wide range of reasonable parameter values, the ternary analysis shows that none of the network equilibria are stable, and thus the network must oscillate. The results show that complex synaptic dynamics are not necessary for pattern generation.
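The role the delay plays can be illustrated on the simplest delayed negative-feedback unit, x'(t) = −g·tanh(x(t − τ)): for small g·τ the origin attracts, while for g·τ above roughly π/2 the unit settles into sustained oscillation, the bounded behavior the abstract describes. A minimal Euler sketch with a history buffer (gain and delay values are illustrative):

```python
import math

def delayed_feedback(g=3.0, tau=1.0, dt=0.01, t_end=60.0, x0=0.5):
    """Euler integration of x'(t) = -g*tanh(x(t - tau)), constant pre-history."""
    lag = int(tau / dt)
    xs = [x0] * (lag + 1)  # history on [-tau, 0]
    for _ in range(int(t_end / dt)):
        xs.append(xs[-1] - dt * g * math.tanh(xs[-1 - lag]))
    return xs

xs = delayed_feedback()
# With g*tau = 3 > pi/2 the trajectory keeps crossing zero: a stable oscillation
# whose amplitude is bounded by the saturating tanh nonlinearity.
```

This is the one-unit caricature of the paper's CPG conclusion: no equilibrium is stable, yet the saturating nonlinearity keeps the oscillation bounded, so the network functions as a pattern generator without complex synaptic dynamics.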
An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling
Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.
2009-01-01
We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and the exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes, and (2) the influence of time-of-day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time-of-day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami
Showstack, Randy
2011-04-01
Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake and the tsunami and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including damage to the nuclear plant.
Saki Malehi, Amal; Hajizadeh, Ebrahim; Ahmadi, Kambiz; Kholdi, Nahid
2014-01-01
This study aims to evaluate failure to thrive (FTT) as a recurrent event over time. This longitudinal study was conducted from February 2007 to July 2009. The primary outcome was growth failure. The analysis was done on 1283 children using recurrent-event methods, some of the children having experienced FTT several times. Fifty-nine percent of the children had experienced FTT at least once, and 5.3% of them had experienced it up to four times. The Prentice-Williams-Peterson (PWP) model revealed significant relationships between diarrhea (HR=1.26), respiratory infections (HR=1.25), urinary tract infections (HR=1.51), discontinuation of breast-feeding (HR=1.96), teething (HR=1.18), and initiation age of complementary feeding (HR=1.11) and the hazard rate of the first FTT event. The recurrent nature of FTT is a central issue; taking it into account increases the accuracy of the analysis of the FTT event process and can help identify different risk factors for each FTT recurrence.
Spatial Evaluation and Verification of Earthquake Simulators
Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.
2017-06-01
In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of the fault systems on which earthquakes occur. In addition, the physics of friction and of elastic interactions between fault elements is included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast-model verification methods, we address the challenges of applying spatial forecast verification to simulators: namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed M > 6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
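The receiver operating characteristic comparison used here can be sketched as: sweep a rate threshold over the gridded forecast, and at each threshold record the fraction of observed events captured against the fraction of cells alarmed. A toy version with made-up rates and event cells (not the California rate maps of the study):

```python
def roc_curve(rates, event_cells):
    """ROC-style points for a gridded rate forecast.

    rates: forecast rate per cell; event_cells: indices holding observed quakes.
    Returns (fraction_of_cells_alarmed, fraction_of_events_hit) pairs,
    one per distinct threshold, from strictest to loosest.
    """
    points = []
    for thresh in sorted(set(rates), reverse=True):
        alarmed = {i for i, r in enumerate(rates) if r >= thresh}
        hits = sum(1 for i in event_cells if i in alarmed)
        points.append((len(alarmed) / len(rates), hits / len(event_cells)))
    return points

# Toy forecast: high rates in cells 0-2; events concentrated there.
rates = [0.9, 0.7, 0.6, 0.2, 0.1, 0.05]
events = [0, 1, 4]
curve = roc_curve(rates, events)
```

A forecast that concentrates its rate where events actually occur climbs quickly toward a hit fraction of 1 while alarming few cells; a nearest-neighbor-style map that misplaces off-fault events climbs slowly, which is the behavior the study reports.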
Short- and Long-Term Earthquake Forecasts Based on Statistical Models
Console, Rodolfo; Taroni, Matteo; Murru, Maura; Falcone, Giuseppe; Marzocchi, Warner
2017-04-01
The epidemic-type aftershock sequence (ETAS) model has been experimentally used to forecast the space-time earthquake occurrence rate during the sequence that followed the 2009 L'Aquila earthquake and during the 2012 Emilia earthquake sequence. These forecasts represented the first two pioneering attempts to check the feasibility of providing operational earthquake forecasting (OEF) in Italy. After the 2009 L'Aquila earthquake the Italian Department of Civil Protection nominated an International Commission on Earthquake Forecasting (ICEF) for the development of the first official OEF in Italy, which was implemented for testing purposes by the newly established "Centro di Pericolosità Sismica" (CPS, the Seismic Hazard Center) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). According to the ICEF guidelines, the system is open, transparent, reproducible and testable. The scientific information delivered by OEF-Italy is shaped in different formats according to the interested stakeholders, such as scientists, national and regional authorities, and the general public. Communication to the public is certainly the most challenging issue, and careful pilot tests are necessary to check the effectiveness of the communication strategy before the information is opened to the public. With regard to long-term time-dependent earthquake forecasting, the application of a newly developed simulation algorithm to the Calabria region provided typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range.
ARMA models for earthquake ground motions. Seismic Safety Margins Research Program
International Nuclear Information System (INIS)
Chang, Mark K.; Kwiatkowski, Jan W.; Nau, Robert F.; Oliver, Robert M.; Pister, Karl S.
1981-02-01
This report contains an analysis of four major California earthquake records using a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It has been possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters and test the residuals generated by these models. It has also been possible to show the connections, similarities and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed in this report is suitable for simulating earthquake ground motions in the time domain and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. (author)
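As an illustration of the discrete time-domain approach, a pure-AR member of the ARMA family can be fitted by the Yule-Walker method from sample autocovariances; the series and coefficients below are illustrative, not those identified in the report:

```python
import random

def autocov(x, lag):
    """Biased sample autocovariance at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def yule_walker_ar2(x):
    """Estimate AR(2) coefficients by solving the 2x2 Yule-Walker system."""
    c0, c1, c2 = autocov(x, 0), autocov(x, 1), autocov(x, 2)
    det = c0 * c0 - c1 * c1
    phi1 = c1 * (c0 - c2) / det
    phi2 = (c0 * c2 - c1 * c1) / det
    return phi1, phi2

# simulate a toy AR(2) "ground-motion" series with known coefficients
random.seed(1)
phi1_true, phi2_true = 0.5, -0.3
x = [0.0, 0.0]
for _ in range(20000):
    x.append(phi1_true * x[-1] + phi2_true * x[-2] + random.gauss(0.0, 1.0))
x = x[200:]  # discard burn-in transient
phi1_hat, phi2_hat = yule_walker_ar2(x)
```

Maximum likelihood estimation, as used in the report, refines this moment-based fit and extends it to mixed ARMA orders.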
3-D GRACE gravity model for the 2011 Japan earthquake
Indian Academy of Sciences (India)
The GRACE mission has contributed to the seismic characterization of major earthquakes in offshore ... −13 μGal), fully accounting for co-seismic mass redistribution.
Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes
2017-04-01
In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data base of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors now in orbit. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inference are becoming more advanced. By now, data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. InSAR-derived surface displacements and seismological waveforms are also combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences between geodetic and seismological earthquake source modelling are shrinking towards common source-medium descriptions and a near-field/far-field view of the source data. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join the community efforts with the particular goal of improving crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near field and far field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged for simple planar finite ...
A model of characteristic earthquakes and its implications for regional seismicity
DEFF Research Database (Denmark)
López-Ruiz, R.; Vázquez-Prada, M.; Pacheco, A.F.
2004-01-01
Regional seismicity (i.e. that averaged over large enough areas and long enough periods of time) has a size-frequency relationship, the Gutenberg-Richter law, which differs from that found for some seismic faults, the Characteristic Earthquake relationship. But all seismicity ultimately comes from active faults, so the question arises of how one seismicity pattern could emerge from the other. The recently introduced Minimalist Model of Vázquez-Prada et al. of characteristic earthquakes provides a simple representation of the seismicity originating from a single fault. Here, we show that a Characteristic Earthquake relationship together with a fractal distribution of fault lengths can accurately describe the total seismicity produced in a region. The resulting earthquake catalogue accounts for the addition of both all the characteristic and all the non-characteristic events triggered on the faults ...
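The paper's central claim, that characteristic events on a fractal population of faults sum to a Gutenberg-Richter regional rate, can be sketched numerically; the fractal dimension and the magnitude-length scaling constants below are illustrative assumptions, not the paper's values:

```python
import random, math

random.seed(42)

# fractal (power-law) fault-length population: N(L > l) ~ l^(-D), with L >= 1
D = 2.0
lengths = [(1.0 - random.random()) ** (-1.0 / D) for _ in range(50000)]

# each fault produces only its characteristic magnitude; the scaling
# M = 4 + (4/3)*log10(L) uses illustrative constants
mags = [4.0 + (4.0 / 3.0) * math.log10(length) for length in lengths]

# aggregate the characteristic events into 0.5-unit magnitude bins:
# the summed catalogue decays like a Gutenberg-Richter relationship
bins = {}
for m in mags:
    b = round(2 * m) / 2
    bins[b] = bins.get(b, 0) + 1
counts = [bins[b] for b in sorted(bins) if bins[b] >= 30]
```

Because small faults vastly outnumber large ones under the fractal length distribution, the aggregate counts fall off roughly exponentially with magnitude even though each individual fault is purely characteristic.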
The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?
Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G
1986-07-25
A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.
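The forecast logic described, a nearly constant repeat time projected forward from the last event, can be sketched with approximate years of past great central Chile earthquakes (treated here as illustrative inputs, not the paper's exact catalogue):

```python
from statistics import mean, stdev

# approximate years of past great earthquakes on one segment (illustrative)
event_years = [1730, 1822, 1906, 1985]

# repeat intervals between consecutive great events
repeats = [b - a for a, b in zip(event_years, event_years[1:])]
mean_repeat = mean(repeats)

# forecast window for the next event: last event + mean repeat +/- 1 std dev
window = (event_years[-1] + mean_repeat - stdev(repeats),
          event_years[-1] + mean_repeat + stdev(repeats))
```

The abstract's point is that such a projection worked here despite rupture lengths varying by a factor of about 3, which is inconsistent with both the time- and slip-predictable models.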
On the agreement between small-world-like OFC model and real earthquakes
Energy Technology Data Exchange (ETDEWEB)
Ferreira, Douglas S.R., E-mail: douglas.ferreira@ifrj.edu.br [Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro, Paracambi, RJ (Brazil); Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Papa, Andrés R.R., E-mail: papa@on.br [Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Instituto de Física, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, RJ (Brazil); Menezes, Ronaldo, E-mail: rmenezes@cs.fit.edu [BioComplex Laboratory, Computer Sciences, Florida Institute of Technology, Melbourne (United States)
2015-03-20
In this article we implemented simulations of the OFC model for earthquakes on two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. For both topologies, we have studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we have characterized the influence that the probability p has on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each one. In our results, distributions arise that belong to a family of non-traditional distribution functions, in agreement with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth's crust is in a self-organized critical state, and furthermore point towards temporal and spatial correlations between earthquakes in distant places. - Highlights: • OFC model simulations for regular and small-world topologies. • For the small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforces the idea of a self-organized critical state for the Earth's crust. • Points towards temporal and spatial correlations between earthquakes in distant places.
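A minimal version of the OFC model on a regular lattice (the p = 0 case above) can be sketched as follows; the lattice size, conservation parameter alpha and step count are illustrative choices:

```python
import random

EPS = 1e-9  # tolerance against floating-point round-off in the drive step

def ofc(size=12, alpha=0.2, steps=3000, seed=0):
    """Minimal Olami-Feder-Christensen model on a regular 2-D lattice with
    open boundaries; alpha < 0.25 makes the redistribution non-conservative."""
    rng = random.Random(seed)
    stress = {(i, j): rng.random() for i in range(size) for j in range(size)}
    sizes = []
    for _ in range(steps):
        # drive: load every site by the gap to the most-loaded site's threshold
        gap = 1.0 - max(stress.values())
        for s in stress:
            stress[s] += gap
        # relax: topple sites at threshold, passing alpha*stress to each neighbour
        avalanche = 0
        while True:
            unstable = [s for s in stress if stress[s] >= 1.0 - EPS]
            if not unstable:
                break
            for (i, j) in unstable:
                release = stress[(i, j)]
                stress[(i, j)] = 0.0
                avalanche += 1
                for n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if n in stress:  # off-lattice neighbours absorb stress (open boundary)
                        stress[n] += alpha * release
        sizes.append(avalanche)  # earthquake size = number of topplings
    return sizes

sizes = ofc()
```

The small-world variant studied in the article replaces a fraction p of these nearest-neighbour links with random long-range ones, leaving the drive-and-relax dynamics unchanged.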
Numerical Modelling of the 1995 Dinar Earthquake Effects on Hydrodynamic Regime of Egirdir Lake
Directory of Open Access Journals (Sweden)
Murat Aksel
2017-11-01
Full Text Available The effect of earthquakes on closed and semi-closed water systems has been a research topic for many years. The lack of continuous measurement systems in such water bodies makes it difficult to examine what happens during an earthquake. The morphological structure of a lake records the character of the events that occur during an earthquake. With developing technology and growing research demand, monitoring and measurement stations have been installed in some lakes, gulfs, estuaries and similar systems. Today, higher-quality measurements and field data can be obtained, and more complicated processes are being explored with the help of faster computers. Computational fluid dynamics is used because of the difficulty of calculating the hydrodynamic response of lakes to earthquakes. Numerical-modelling studies of earthquake effects on closed and semi-closed water systems have been increasing in recent years. The quality of the bathymetric data gathered in the field, the continuous acquisition from dynamic water-level measurement systems, and the use of new technologies in surveying bottom materials all contribute to improved knowledge of the formation, behaviour and effects of these events. In this study, the effects of the 1995 Dinar earthquake on the hydrodynamic regime of Egirdir Lake were investigated by a numerical modelling approach.
Time-predictable model applicability for earthquake occurrence in northeast India and vicinity
Directory of Open Access Journals (Sweden)
A. Panthi
2011-03-01
Full Text Available Northeast India and its vicinity is one of the most seismically active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study the region of northeast India has been considered for an earthquake generation model using earthquake data reported in the catalogues of the National Geophysical Data Centre, the National Earthquake Information Centre, the United States Geological Survey, and the book prepared by Gupta et al. (1986), for the period 1906-2008. Events with a surface-wave magnitude of M_{s}≥5.5 were considered for statistical analysis. In this region, nineteen seismogenic sources were identified from the clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends upon the magnitude of the preceding mainshock (M_{p}) and not on that of the following mainshock (M_{f}). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form LogT = cM_{p}+a, where "c" is the positive slope of the line and "a" is a function of the minimum magnitude of the earthquakes considered. The values of the parameters "c" and "a" are estimated to be 0.21 and 0.35 in northeast India and its adjoining regions. The lower-than-average value of "c" implies that earthquake occurrence in this region differs from that at plate boundaries. The results can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
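The repeat-time relation above, LogT = cM_p + a with c = 0.21 and a = 0.35, gives a direct numerical illustration of the time-predictable model:

```python
# repeat-time relation from the abstract: log10(T) = c*Mp + a
c, a = 0.21, 0.35

def repeat_time_years(mp: float) -> float:
    """Expected repeat time (years) until the next mainshock, as a function
    of the preceding mainshock magnitude Mp (time-predictable model)."""
    return 10 ** (c * mp + a)

t_65 = repeat_time_years(6.5)  # smaller preceding mainshock -> shorter wait
t_75 = repeat_time_years(7.5)  # larger preceding mainshock -> longer wait
```

The defining feature of the time-predictable model is visible here: the waiting time depends only on the size of the earthquake that has already happened, not on the size of the one to come.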
Comparison of test and earthquake response modeling of a nuclear power plant containment building
Energy Technology Data Exchange (ETDEWEB)
Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.
1985-01-01
The reactor building of a BWR plant was subjected to dynamic testing, a minor earthquake, and a strong earthquake at different times. Analytical models simulating each of these events were devised by previous investigators. A comparison of the characteristics of these models is made in this paper. The different modeling assumptions involved in the different simulation analyses restrict the validity of the models for general use and also narrow the comparison down to only a few modes. The dynamic tests successfully identified the first mode of the soil-structure system.
A Model for Low-Frequency Earthquake Slip in Cascadia
Chestler, S.; Creager, K.
2017-12-01
Low-Frequency Earthquakes (LFEs) are commonly used to identify when and where slow slip occurred, especially for slow slip events that are too small to be observed geodetically. Yet an understanding of how slip occurs within an LFE family patch, i.e. the patch on the plate interface where LFEs repeat, is limited. How much slip occurs per LFE, and over what area? Do all LFEs within an LFE family rupture the exact same spot? To answer these questions, we use a catalog of 39,966 LFEs, sorted into 45 LFE families, beneath the Olympic Peninsula, WA. LFEs were detected and located using data from approximately 100 3-component stations from the Array of Arrays experiment. We compare the LFE family patch area to the area within the LFE family patch that slips through LFEs during Cascadia Episodic Tremor and Slip (ETS) events. Patch area is calculated from relative LFE locations, solved for using the double-difference method. Slip area is calculated from the characteristic moment (the mean of the exponential moment-frequency distribution) and the number of LFEs for each family, together with the geodetically measured ETS slip. We find that 0.5-5% of the area within an LFE family patch slips through LFEs. The rest must deform in some other manner (e.g., ductile deformation). We also explore LFE slip patterns throughout the entire slow slip zone. Is LFE slip uniform? Does LFE slip account for all geodetically observed slow slip? Double-difference relocations reveal that LFE families are ~2 km patches where LFEs are clustered close together. Additionally, there are clusters of LFE families with diameters of 4-15 km. There are gaps with no observable, repeating LFEs between LFE families within clusters and between clusters of LFE families. Based on this observation, we present a model in which LFE slip is heterogeneous on multiple spatial scales. Clusters of LFE families may represent patches with higher strength than the surrounding areas. Finally, we find that LFE slip accounts for only a small fraction (...
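The slip-area estimate described above follows from the moment balance M0 = μAd. A sketch with hypothetical family values (the moment, event count, slip, patch size and shear modulus below are illustrative assumptions, not the paper's data):

```python
import math

MU = 30e9  # shear modulus, Pa (assumed value for the deep plate interface)

def lfe_slip_fraction(m0_char, n_events, total_slip, patch_radius):
    """Fraction of an LFE-family patch that slips seismically, from the
    moment balance M0 = mu * A * d: the area slipping in one LFE is
    A = M0 / (mu * d), where d is the geodetically measured slow slip."""
    slip_area = n_events * m0_char / (MU * total_slip)
    patch_area = math.pi * patch_radius ** 2
    return slip_area / patch_area

# hypothetical family (illustrative values only):
frac = lfe_slip_fraction(m0_char=2.0e11,      # characteristic moment, N*m (~Mw 1.5)
                         n_events=500,        # LFEs detected in the family
                         total_slip=0.3,      # cumulative geodetic slow slip, m
                         patch_radius=1000.0)  # ~2-km patch diameter, m
```

Even with generous inputs the fraction comes out far below unity, which is the abstract's point: most of the patch must deform aseismically.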
Modelling psychological responses to the Great East Japan earthquake and nuclear incident.
Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O
2012-01-01
The Great East Japan (Tōhoku/Kanto) earthquake of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few previous studies have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great Japan earthquake and nuclear incident, with data collected 11-13 weeks following these events. 844 young respondents completed a questionnaire in three regions of Japan; Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events.
Directory of Open Access Journals (Sweden)
R. M. Semenov
2018-01-01
Full Text Available In connection with changes in the stress-strain state of the Earth's crust, various physical and mechanical processes, including destruction, take place in rocks and are accompanied by tectonic earthquakes. Different models have been proposed to describe earthquake preparation and occurrence, depending on the mechanisms and rates of the geodynamic processes. One of the models considers crustal stretching, which is characteristic of the formation of rift structures. The model uses data on rock samples that are stretched until destruction in a special laboratory installation. Based on the laboratory modeling, it is established that the samples are destroyed in stages, which are interpreted as stages of preparation and occurrence of an earthquake source. The preparation stage of underground tremors is generally manifested by a variety of temporal (long-, medium- and short-term) precursors. The main shortcoming of micro-modeling is that, considering the small sizes of the investigated samples, it is impossible to reveal a link between the plastic extension of rocks (taking place in the earthquake hypocenter) and the rock rupture. Plasticity is the ability of certain rocks to change shape and size irreversibly, while rock continuity is maintained, in response to applied external forces. In order to take into account the effect of plastic deformation of rocks on earthquake preparation and occurrence, we propose not to refer to the diagrams showing stretching of the rock samples, but to use a typical diagram of metal stretching, which can be obtained when testing a metal rod to breakage (Fig. 1). The diagram of metal stretching as a function of relative elongation (to some degree of approximation, and taking into account the coefficient of plasticity) can be considered as a model of preparation and occurrence of an earthquake source in the case of rifting. The energy released in the period immediately preceding the earthquake contributes to the emergence of ...
Cocco, M.
2001-12-01
Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress-trigger or stress-shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability may therefore represent the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and the stimulating perspectives, there remain problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and whether, the induced stress perturbations modify the ratio of small to large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to ...
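One concrete form of such a rate-based ingredient is the Dieterich (1994) rate-and-state response of seismicity to a static Coulomb stress step; the parameter values below are illustrative, not tied to any particular fault population:

```python
import math

def rate_change(stress_step, a_sigma, t, t_a, r=1.0):
    """Dieterich (1994)-type seismicity-rate response to a static Coulomb
    stress step S, relative to the background rate r:
        R(t) = r / (1 + (exp(-S / (A*sigma)) - 1) * exp(-t / t_a))
    where A*sigma is the constitutive parameter and t_a the aftershock
    relaxation time set by the loading rate."""
    return r / (1.0 + (math.exp(-stress_step / a_sigma) - 1.0) * math.exp(-t / t_a))

# illustrative values: +0.1 MPa step, A*sigma = 0.04 MPa, t_a = 10 years
r0 = rate_change(0.1, 0.04, t=0.0, t_a=10.0)      # jump right after the step
r_late = rate_change(0.1, 0.04, t=100.0, t_a=10.0)  # long after the step
```

A positive step boosts the rate immediately (here by a factor exp(S/Aσ)), and the perturbation then decays back to the background rate, which is exactly the time-dependent rate change that enters the interaction-probability calculation.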
A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction
Energy Technology Data Exchange (ETDEWEB)
Hemphill, Geralyn M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-27
Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
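The difference between the feedforward (B) and lateral-recurrent (BL) architectures can be sketched with a single unit unrolled in time; the tiny hand-picked weights below are illustrative only, not trained parameters:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(w, v):
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

def bl_unit(x, w_b, w_l, timesteps):
    """Bottom-up + lateral (BL) unit unrolled in time:
        h_t = relu(W_b x + W_l h_{t-1})
    With timesteps=1 the lateral term is zero, reducing to feedforward (B)."""
    h = [0.0] * len(w_b)
    for _ in range(timesteps):
        bottom_up = matvec(w_b, x)
        lateral = matvec(w_l, h)
        h = relu([b + l for b, l in zip(bottom_up, lateral)])
    return h

w_b = [[1.0, 0.0], [0.0, 1.0]]  # bottom-up weights (identity, for clarity)
w_l = [[0.0, 0.5], [0.5, 0.0]]  # lateral coupling between the two units
x = [1.0, 0.0]
h_ff = bl_unit(x, w_b, w_l, timesteps=1)   # feedforward (B) response
h_rec = bl_unit(x, w_b, w_l, timesteps=4)  # recurrent (BL) refinement
```

The recurrent pass lets activity spread sideways over iterations, which is the mechanism the paper credits for filling in evidence occluded from the bottom-up signal.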
Interevent times in a new alarm-based earthquake forecasting model
Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed
2013-09-01
This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. The MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion, or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of the proposed MR model, a composite Japan-wide earthquake catalogue for the years 679 to 2012 was compiled using the Japan Meteorological Agency catalogue for the period 1923-2012 and the Utsu historical seismicity records for 679-1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing a forecasting error defined as the sum of the miss and alarm rates. This testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by the forecasts was reduced by about 60 per cent when the MR method was used instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the ...
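The MR statistic itself is simple to compute from a catalogue's interevent times; the toy event sequences below are illustrative, not catalogue data:

```python
from statistics import mean, pvariance

def moment_ratio(event_times):
    """MR alarm score: the inverse index of dispersion of interevent times.
    Quasi-periodic background seismicity gives a high MR; clustered
    (bursty) seismicity gives a low MR."""
    gaps = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    return mean(gaps) / pvariance(gaps)

quasi_periodic = [0, 10, 21, 30, 41, 50, 61, 70]  # near-regular event times
clustered = [0, 1, 2, 30, 31, 32, 60, 61]         # bursty event times
mr_bg = moment_ratio(quasi_periodic)
mr_cl = moment_ratio(clustered)
```

High MR regions are those dominated by regular background events, which is what the model maps into alarm levels ahead of large target earthquakes.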
Bayesian model ensembling using meta-trained recurrent neural networks
Ambrogioni, L.; Berezutskaya, Y.; Güçlü, U.; Borne, E.W.P. van den; Güçlütürk, Y.; Gerven, M.A.J. van; Maris, E.G.G.
2017-01-01
In this paper we demonstrate that a recurrent neural network meta-trained on an ensemble of arbitrary classification tasks can be used as an approximation of the Bayes optimal classifier. This result is obtained by relying on the framework of e-free approximate Bayesian inference, where the Bayesian
Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan
Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.
2017-12-01
An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which hereafter we call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated disproportionately large tsunami waves with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that the earthquake can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges, 100 km northeast of the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached a peak amplitude of 1.5-2.0 cm in the leading upward signals, followed by continuous oscillations [Fukao et al., 2016]. For modeling its tsunami source, or sea-surface displacement, we perform tsunami waveform simulations, and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are solved with the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we find that the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, synthetic waves show none of the initial downward signal observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift, and examine sizes and amplitudes of the main uplift and the subsidence ring. As a result, the model of a main uplift of around 1.0 m with a radius of 4 km surrounded by a ring of small subsidence yields good agreement between synthetic and observed waveforms. The results yield two implications for the deformation process that help us understand
International Nuclear Information System (INIS)
Ferdowsi, B.
2014-01-01
Recent seismological observations based on new, more sensitive instrumentation show that seismic waves radiated from large earthquakes can trigger other earthquakes globally. This phenomenon is called dynamic earthquake triggering and is well-documented for over 30 of the largest earthquakes worldwide. Granular materials are at the core of mature earthquake faults and play a key role in fault triggering by exhibiting a rich nonlinear response to external perturbations. The stick-slip dynamics in sheared granular layers is analogous to the seismic cycle for earthquake fault systems. In this research effort, we characterize the macroscopic scale statistics and the grain-scale mechanisms of triggered slip in sheared granular layers. We model the granular fault gouge using three dimensional discrete element method simulations. The modeled granular system is put into stick-slip dynamics by applying a confining pressure and a shear load. The dynamic triggering is simulated by perturbing the spontaneous stick-slip dynamics using an external vibration applied to the boundary of the layer. The influence of the triggering consists of frictional weakening during the vibration interval, a clock advance of the next expected large slip event, and long-term effects in the form of suppression and recovery of the energy released from the granular layer. Our study suggests that above a critical amplitude, vibration causes a significant clock advance of large slip events. We link this clock advance to a major decline in the slipping contact ratio as well as a decrease in shear modulus and weakening of the granular gouge layer. We also observe that shear vibration is less effective in perturbing the stick-slip dynamics of the granular layer. Our study suggests that in order to have an effective triggering, the input vibration must also explore the granular layer at length scales comparable to or less than the average grain size. The energy suppression and the subsequent recovery and increased
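The clock-advance effect described above can be illustrated with a zero-dimensional spring-slider toy. This is a drastic simplification of the study's 3-D discrete element simulations, and all numbers below are hypothetical:

```python
def next_slip_time(threshold, load_rate, pulse_time=None, pulse_stress=0.0):
    """Time of the next stick-slip event in a zero-dimensional spring-slider toy.

    Stress grows linearly at `load_rate`; a transient vibration is idealized as
    an instantaneous stress step `pulse_stress` at `pulse_time`. Slip occurs
    when stress reaches `threshold`.
    """
    unperturbed = threshold / load_rate
    if pulse_time is None or pulse_time >= unperturbed:
        return unperturbed
    if load_rate * pulse_time + pulse_stress >= threshold:
        return pulse_time  # vibration triggers the event immediately
    return (threshold - pulse_stress) / load_rate

# Clock advance of the next large slip event caused by the perturbation:
t_quiet = next_slip_time(threshold=10.0, load_rate=1.0)
t_shaken = next_slip_time(threshold=10.0, load_rate=1.0,
                          pulse_time=5.0, pulse_stress=2.0)
clock_advance = t_quiet - t_shaken
```

In this caricature the clock advance equals pulse_stress / load_rate whenever the pulse does not trigger slip outright; the granular simulations show a richer, amplitude-dependent version of the same effect.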
Reading a 400,000-year record of earthquake frequency for an intraplate fault.
Williams, Randolph T; Goodwin, Laurel B; Sharp, Warren D; Mozley, Peter S
2017-05-09
Our understanding of the frequency of large earthquakes at timescales longer than instrumental and historical records is based mostly on paleoseismic studies of fast-moving plate-boundary faults. Similar study of intraplate faults has been limited until now, because intraplate earthquake recurrence intervals are generally long (10s to 100s of thousands of years) relative to conventional paleoseismic records determined by trenching. Long-term variations in the earthquake recurrence intervals of intraplate faults therefore are poorly understood. Longer paleoseismic records for intraplate faults are required both to better quantify their earthquake recurrence intervals and to test competing models of earthquake frequency (e.g., time-dependent, time-independent, and clustered). We present the results of U-Th dating of calcite veins in the Loma Blanca normal fault zone, Rio Grande rift, New Mexico, United States, that constrain earthquake recurrence intervals over much of the past ∼550 ka, the longest direct record of seismic frequency documented for any fault to date. The 13 distinct seismic events delineated by this effort demonstrate that for >400 ka, the Loma Blanca fault produced periodic large earthquakes, consistent with a time-dependent model of earthquake recurrence. However, this time-dependent series was interrupted by a cluster of earthquakes at ∼430 ka. The carbon isotope composition of calcite formed during this seismic cluster records rapid degassing of CO2, suggesting an interval of anomalous fluid source. In concert with U-Th dates recording decreased recurrence intervals, we infer seismicity during this interval records fault-valve behavior. These data provide insight into the long-term seismic behavior of the Loma Blanca fault and, by inference, other intraplate faults.
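A common way to distinguish time-dependent, time-independent, and clustered recurrence is the coefficient of variation (CV) of the recurrence intervals. A minimal sketch follows, using hypothetical event ages rather than the paper's actual U-Th dates:

```python
import statistics

def recurrence_cv(event_ages_ka):
    """Coefficient of variation of recurrence intervals from a dated event series.

    CV << 1 suggests quasi-periodic (time-dependent) behavior, CV ~ 1 a
    time-independent (Poisson) process, and CV > 1 temporal clustering.
    """
    ages = sorted(event_ages_ka, reverse=True)  # oldest first, ages in ka
    intervals = [a - b for a, b in zip(ages, ages[1:])]
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Hypothetical U-Th event ages (ka), loosely periodic in the spirit of the
# Loma Blanca record but not the published dates:
periodic_ages = [550, 510, 468, 430, 388, 350, 309, 270]
```

A low CV over most of the record, punctuated by a short interval of much shorter recurrence times, is exactly the periodic-then-clustered pattern the abstract describes.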
Jones, K. B., II; Saxton, P. T.
2013-12-01
Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list alluding to a transmission related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous, high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After the 1989 Loma Prieta Earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion, due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate to an identifiable precursor. Secondly, there is neither a networked array for finding any epicentral locations, nor have there been any attempts to find even one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have gained. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February, 2013, detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two
GEM1: First-year modeling and IT activities for the Global Earthquake Model
Anderson, G.; Giardini, D.; Wiemer, S.
2009-04-01
GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global
Damping in building structures during earthquakes: test data and modeling
International Nuclear Information System (INIS)
Coats, D.W. Jr.
1982-01-01
A review and evaluation of the state-of-the-art of damping in building structures during earthquakes is presented. The primary emphasis is in the following areas: 1) the evaluation of commonly used mathematical techniques for incorporating damping effects in both simple and complex systems; 2) a compilation and interpretation of damping test data; and 3) an evaluation of structure testing methods, building instrumentation practices, and an investigation of rigid-body rotation effects on damping values from test data. A literature review provided the basis for evaluating mathematical techniques used to incorporate earthquake-induced damping effects in simple and complex systems. A discussion on the effectiveness of damping, as a function of excitation type, is also included. Test data, from a wide range of sources, has been compiled and interpreted for buildings, nuclear power plant structures, piping, equipment, and isolated structural elements. Test methods used to determine damping and frequency parameters are discussed. In particular, the advantages and disadvantages associated with the normal mode and transfer function approaches are evaluated. Additionally, the effect of rigid-body rotations on damping values deduced from strong-motion building response records is investigated. A discussion of identification techniques typically used to determine building parameters (frequency and damping) from strong motion records is included. Finally, an analytical demonstration problem is presented to quantify the potential error in predicting fixed-base structural frequency and damping values from strong motion records, when rigid-body rotations are not properly accounted for
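One standard identity behind such damping test data is the logarithmic decrement, which extracts a viscous damping ratio from successive free-vibration peak amplitudes. The sketch below, with an assumed damping ratio of 5%, shows the round trip from damping ratio to peak amplitudes and back:

```python
import math

def damping_from_log_decrement(peak1, peak2):
    """Viscous damping ratio from two successive free-vibration peak amplitudes.

    Standard logarithmic-decrement identity: delta = ln(x1 / x2),
    zeta = delta / sqrt(4*pi**2 + delta**2).
    """
    delta = math.log(peak1 / peak2)
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

# Round-trip check: successive peaks of a decaying oscillation with known zeta.
zeta_true = 0.05
ratio = math.exp(2.0 * math.pi * zeta_true / math.sqrt(1.0 - zeta_true ** 2))
zeta_est = damping_from_log_decrement(1.0, 1.0 / ratio)
```

This identity underlies the time-domain (decay) methods discussed in the review; the transfer-function route instead reads damping from half-power bandwidth in the frequency domain.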
DEFF Research Database (Denmark)
Urup, Thomas; Dahlrot, Rikke Hedegaard; Grunnet, Kirsten
2016-01-01
Background: Predictive markers and prognostic models are required in order to individualize treatment of recurrent glioblastoma (GBM) patients. Here, we sought to identify clinical factors able to predict response and survival in recurrent GBM patients treated with bevacizumab (BEV) and irinotecan. Material and methods: A total of 219 recurrent GBM patients treated with BEV plus irinotecan according to a previously published treatment protocol were included in the initial population. Prognostic models were generated by means of multivariate logistic and Cox regression analysis. Results: In multivariate...
Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)
Crowell, B. W.; Bock, Y.; Squibb, M. B.
2010-12-01
Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a significant disadvantage in ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
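The displacement-based processing described above can be caricatured in a few lines. The waveforms and the 5 cm trigger threshold below are invented for illustration, and the study itself triggers on principal components of strain rather than on raw displacement:

```python
import math

def peak_ground_displacement(north, east, up):
    """Peak of the 3-D displacement magnitude from GPS displacement waveforms (m)."""
    return max(math.sqrt(n * n + e * e + u * u)
               for n, e, u in zip(north, east, up))

def displacement_trigger(pgd_m, threshold_m=0.05):
    """Crude trigger: start earthquake modeling once displacement exceeds a
    threshold. The 5 cm value is an illustrative assumption."""
    return pgd_m >= threshold_m

# Hypothetical station displacement components (m), one sample per epoch:
north = [0.00, 0.01, 0.03, 0.06, 0.04]
east  = [0.00, 0.00, 0.04, 0.08, 0.05]
up    = [0.00, 0.00, 0.01, 0.02, 0.01]
pgd = peak_ground_displacement(north, east, up)
```

In the real pipeline, the peak ground displacement feeds an empirical scaling relationship to give a first magnitude estimate before the full coseismic slip model is run.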
Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination
Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.
2008-12-01
Empirically explosions have been discriminated from natural earthquakes using regional amplitude ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining if there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
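A minimal caricature of the P/S amplitude-ratio discriminant follows, assuming only that explosions radiate relatively more P than S energy. The zero log-ratio threshold is an illustrative assumption; real thresholds are calibrated per frequency band and per path, which is exactly what the MDAC and 2-D tomography corrections address:

```python
import math

def log_ps_ratio(p_amplitude, s_amplitude):
    """log10 P/S amplitude ratio in a chosen frequency band."""
    return math.log10(p_amplitude / s_amplitude)

def classify(p_amplitude, s_amplitude, threshold=0.0):
    """Toy discriminant: a high P/S ratio suggests an explosion, a low one an
    earthquake. The threshold here is illustrative, not a calibrated value."""
    if log_ps_ratio(p_amplitude, s_amplitude) > threshold:
        return "explosion"
    return "earthquake"
```

Path attenuation can inflate or deflate either amplitude, which is why uncorrected ratios can make an earthquake look explosion-like; the amplitude corrections in the abstract aim to remove that bias before the ratio is formed.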
So, E.
2010-12-01
Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Improving our understanding of what contributes to casualties in earthquakes requires coordinated data-gathering efforts across disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status and education levels, and human behaviors in general, in that they modify egress and survivability rates. Without considering complex physical pathways, loss models purely based on historic casualty data, or even worse, rates derived from other countries, will be of very limited value. What’s more, as the world’s population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, not only do damage surveys need better coordination of international and national reconnaissance teams, but these teams must integrate different areas of expertise including engineering, public health and medicine. Research is needed to find methods to achieve consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities and cities that most need it. Coupling the theories and findings from
The spectral cell method in nonlinear earthquake modeling
Giraldo, Daniel; Restrepo, Doriam
2017-12-01
This study examines the applicability of the spectral cell method (SCM) to compute the nonlinear earthquake response of complex basins. SCM combines fictitious-domain concepts with the spectral version of the finite element method to solve the wave equations in heterogeneous geophysical domains. Nonlinear behavior is considered by implementing the Mohr-Coulomb and Drucker-Prager yielding criteria. We illustrate the performance of SCM with numerical examples of nonlinear basins exhibiting physically and computationally challenging conditions. The numerical experiments are benchmarked with results from overkill solutions and from MIDAS GTS NX, a finite element software for geotechnical applications. Our findings show good agreement between the two sets of results. Traditional spectral element implementations allow points per wavelength as low as PPW = 4.5 for high-order polynomials. Our findings show that in the presence of nonlinearity, high-order polynomials (p ≥ 3) require mesh resolutions above PPW ≥ 10 to ensure displacement errors below 10%.
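The points-per-wavelength (PPW) requirement translates directly into a mesh-size budget. The sketch below uses the common rule of thumb that an element of polynomial order p supplies about p node spacings per element; that rule and the basin parameters are assumptions for illustration, not the paper's exact criterion:

```python
def max_element_size(vs_min, f_max, ppw, order):
    """Largest spectral element size meeting a points-per-wavelength target.

    lambda_min = vs_min / f_max; an element of polynomial order `order`
    contributes roughly `order` node spacings, so h <= order * lambda_min / ppw.
    A common rule of thumb, not the paper's exact resolution criterion.
    """
    lambda_min = vs_min / f_max
    return order * lambda_min / ppw

# Hypothetical soft basin: Vs = 200 m/s, resolve up to 10 Hz, cubic elements.
h_elastic = max_element_size(200.0, 10.0, ppw=4.5, order=3)    # elastic rule
h_nonlinear = max_element_size(200.0, 10.0, ppw=10.0, order=3)  # PPW >= 10
```

Under these assumptions the nonlinear requirement shrinks the admissible element size from about 13.3 m to 6 m, i.e. a substantially finer (and costlier) mesh than the elastic rule suggests.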
Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment
Brietzke, G. B.; Hainzl, S.; Zöller, G.
2012-04-01
As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes in earthquake fault systems is likely not useful, since it comes with a large number of degrees of freedom, poor constraints on its model parameters and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability and aim at providing a link between basic physical concepts and statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as to complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).
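Perhaps the simplest quasi-static "physical fault simulator" in this spirit is the Olami-Feder-Christensen (OFC) cellular automaton, sketched below. This is purely illustrative, not the study's Lower Rhine Embayment model; all parameter values are assumptions:

```python
import numpy as np

def ofc_catalog(n=16, alpha=0.2, f_c=1.0, steps=200, seed=0):
    """Olami-Feder-Christensen automaton: uniform tectonic loading of an n x n
    stress grid; a cell failing at f_c drops to zero and passes a fraction
    alpha of its stress to each of its four neighbors, possibly cascading.
    Returns a synthetic catalog of event sizes (number of cell failures)."""
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, f_c, size=(n, n))
    tol = 1e-9
    catalog = []
    for _ in range(steps):
        stress += f_c - stress.max()  # load until the most stressed cell fails
        unstable = [np.unravel_index(int(stress.argmax()), stress.shape)]
        failed = 0
        while unstable:
            i, j = unstable.pop()
            if stress[i, j] < f_c - tol:
                continue  # relaxed since being queued
            s = stress[i, j]
            stress[i, j] = 0.0
            failed += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    stress[a, b] += alpha * s
                    if stress[a, b] >= f_c - tol:
                        unstable.append((a, b))
        catalog.append(failed)
    return catalog
```

Even this toy produces a broad distribution of event sizes from uniform loading, which is the basic point of using physics-based simulators to generate synthetic catalogs whose statistics can be compared against observed seismicity.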
Brocher, Thomas M.; Blakely, Richard J.; Sherrod, Brian
2017-01-01
We investigate spatial and temporal relations between an ongoing and prolific seismicity cluster in central Washington, near Entiat, and the 14 December 1872 Entiat earthquake, the largest historic crustal earthquake in Washington. A fault scarp produced by the 1872 earthquake lies within the Entiat cluster; the locations and areas of both the cluster and the estimated 1872 rupture surface are comparable. Seismic intensities and the 1–2 m of coseismic displacement suggest a magnitude range between 6.5 and 7.0 for the 1872 earthquake. Aftershock forecast models for (1) the first several hours following the 1872 earthquake, (2) the largest felt earthquakes from 1900 to 1974, and (3) the seismicity within the Entiat cluster from 1976 through 2016 are also consistent with this magnitude range. Based on this aftershock modeling, most of the current seismicity in the Entiat cluster could represent aftershocks of the 1872 earthquake. Other earthquakes, especially those with long recurrence intervals, have long‐lived aftershock sequences, including the Mw 7.5 1891 Nobi earthquake in Japan, with aftershocks continuing 100 yrs after the mainshock. Although we do not rule out ongoing tectonic deformation in this region, a long‐lived aftershock sequence can account for these observations.
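Long-lived aftershock sequences follow naturally from the modified Omori (Omori-Utsu) rate law. The parameter values below are illustrative assumptions, not values fitted to the Entiat data:

```python
def omori_rate(t_days, k=100.0, c=0.1, p=1.1):
    """Modified Omori (Omori-Utsu) aftershock rate: n(t) = K / (c + t)**p.

    With p close to 1 the rate decays so slowly that a large mainshock can
    still produce detectable aftershocks a century or more later, which is
    the qualitative argument made for the 1872 Entiat earthquake.
    """
    return k / (c + t_days) ** p

rate_1_day = omori_rate(1.0)
rate_100_years = omori_rate(100.0 * 365.25)
```

The rate a century out is tiny compared with the first day, but it is still nonzero, so a prolific modern cluster sitting on the 1872 rupture is compatible with a very long aftershock sequence.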
Finite element models of earthquake cycles in mature strike-slip fault zones
Lynch, John Charles
The research presented in this dissertation is on the subject of strike-slip earthquakes and the stresses that build and release in the Earth's crust during earthquake cycles. Numerical models of these cycles in a layered elastic/viscoelastic crust are produced using the finite element method. A fault that alternately sticks and slips poses a particularly challenging problem for numerical implementation, and a new contact element dubbed the "Velcro" element was developed to address this problem (Appendix A). Additionally, the finite element code used in this study was benchmarked against analytical solutions for some simplified problems (Chapter 2), and the resolving power was tested for the fault region of the models (Appendix B). With the modeling method thus developed, there are two main questions posed. First, in Chapter 3, the effect of a finite-width shear zone is considered. By defining a viscoelastic shear zone beneath a periodically slipping fault, it is found that shear stress concentrates at the edges of the shear zone and thus causes the stress tensor to rotate into non-Andersonian orientations. Several methods are used to examine the stress patterns, including the plunge angles of the principal stresses and a new method that plots the stress tensor in a manner analogous to seismic focal mechanism diagrams. In Chapter 4, a simple San Andreas-like model is constructed, consisting of two great-earthquake-producing faults separated by a freely-slipping shorter fault. The model inputs of lower crustal viscosity, fault separation distance, and relative breaking strengths are examined for their effect on fault communication. It is found that with a lower crustal viscosity of 10^18 Pa s (in the lower range of estimates for California), the two faults tend to synchronize their earthquake cycles, even in the cases where the faults have asymmetric breaking strengths. These models imply that postseismic stress transfer over hundreds of kilometers may play a
Dynamic Evolution Of Off-Fault Medium During An Earthquake: A Micromechanics Based Model
Thomas, Marion Y.; Bhat, Harsha S.
2018-05-01
Geophysical observations show a dramatic drop of seismic wave speeds in the shallow off-fault medium following earthquake ruptures. Seismic ruptures generate, or reactivate, damage around faults that alter the constitutive response of the surrounding medium, which in turn modifies the earthquake itself, the seismic radiation, and the near-fault ground motion. We present a micromechanics based constitutive model that accounts for dynamic evolution of elastic moduli at high-strain rates. We consider 2D in-plane models, with a 1D right lateral fault featuring slip-weakening friction law. The two scenarios studied here assume uniform initial off-fault damage and an observationally motivated exponential decay of initial damage with fault normal distance. Both scenarios produce dynamic damage that is consistent with geological observations. A small difference in initial damage actively impacts the final damage pattern. The second numerical experiment, in particular, highlights the complex feedback that exists between the evolving medium and the seismic event. We show that there is a unique off-fault damage pattern associated with supershear transition of an earthquake rupture that could be potentially seen as a geological signature of this transition. These scenarios presented here underline the importance of incorporating the complex structure of fault zone systems in dynamic models of earthquakes.
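The link between coseismic damage and the observed drop in seismic wave speed can be shown with a generic scalar damage idealization, mu(D) = mu0 * (1 - D). This is far simpler than the paper's micromechanical model, and the material values below are assumptions:

```python
import math

def shear_wave_speed(mu0_gpa, rho, damage):
    """S-wave speed for a linearly damage-degraded shear modulus.

    mu(D) = mu0 * (1 - D) is a generic scalar damage idealization; it shows
    why coseismic damage appears as a drop in seismic wave speed, but it is
    not the micromechanical constitutive model of the paper.
    """
    mu = mu0_gpa * 1e9 * (1.0 - damage)
    return math.sqrt(mu / rho)

# Assumed intact crust: mu0 = 30 GPa, rho = 2700 kg/m^3; damage D = 0.3.
v_intact = shear_wave_speed(30.0, 2700.0, 0.0)
v_damaged = shear_wave_speed(30.0, 2700.0, 0.3)
drop_percent = 100.0 * (1.0 - v_damaged / v_intact)
```

Because wave speed scales with the square root of the modulus, a 30% modulus reduction maps to roughly a 16% wave-speed drop, of the order reported for shallow off-fault zones after large ruptures.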
Rahpeyma, Sahar; Halldorsson, Benedikt; Hrafnkelsson, Birgir; Jonsson, Sigurjon
2018-01-01
Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km²) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.
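The additive structure of the hierarchical model, separate event, station, and event-station effects on log-PGA, can be sketched with a simple moment-based decomposition. The residual table is hypothetical, and this stand-in omits the paper's Bayesian MCMC inference entirely:

```python
import numpy as np

# Hypothetical log-PGA residuals: rows = earthquakes, columns = array stations.
log_pga = np.array([
    [0.10, 0.30, -0.05],
    [0.20, 0.45,  0.05],
    [0.00, 0.25, -0.15],
])

overall = log_pga.mean()
event_effects = log_pga.mean(axis=1) - overall    # delta_e: per-earthquake shift
station_effects = log_pga.mean(axis=0) - overall  # delta_s: per-station shift
event_station = (log_pga - overall
                 - event_effects[:, None] - station_effects[None, :])
```

A nonzero spread in station_effects, on a site mapped as uniform, is the moment-based analogue of the paper's posterior finding that the station-effect variance is different from zero.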
Rahpeyma, Sahar
2018-04-17
Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source-site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava-rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small-aperture array (covering approximately 1 km²) of accelerographs in Southwest Iceland during the magnitude 6.3 Ölfus earthquake of May 29, 2008, and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations, accounting separately for earthquake event effects, station effects, and event-station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is clearly different from zero according to its posterior density, indicating that individual station effects differ from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava-rock site condition.
Risk of Recurrence in Operated Parasagittal Meningiomas: A Logistic Binary Regression Model.
Escribano Mesa, José Alberto; Alonso Morillejo, Enrique; Parrón Carreño, Tesifón; Huete Allut, Antonio; Narro Donate, José María; Méndez Román, Paddy; Contreras Jiménez, Ascensión; Pedrero García, Francisco; Masegosa González, José
2018-02-01
Parasagittal meningiomas arise from the arachnoid cells of the angle formed between the superior sagittal sinus (SSS) and the brain convexity. In this retrospective study, we focused on factors that predict early recurrence and recurrence times. We reviewed 125 patients with parasagittal meningiomas operated on from 1985 to 2014. We studied the following variables: age, sex, location, laterality, histology, surgeons, invasion of the SSS, Simpson removal grade, follow-up time, angiography, embolization, radiotherapy, recurrence and recurrence time, reoperation, neurologic deficit, degree of dependency, and patient status at the end of follow-up. Patients ranged in age from 26 to 81 years (mean 57.86 years; median 60 years). There were 44 men (35.2%) and 81 women (64.8%). There were 57 patients with neurologic deficits (45.2%); the most common presenting symptom was motor deficit. World Health Organization grade I tumors were identified in 104 patients (84.6%), the majority of the meningothelial type. Recurrence was detected in 34 cases. Time to recurrence was 9 to 336 months (mean 84.4 months; median 79.5 months). Male sex was identified as an independent risk factor for recurrence, with a relative risk of 2.7 (95% confidence interval 1.21-6.15), P = 0.014. Kaplan-Meier curves for recurrence showed statistically significant differences depending on sex, age, histologic type, and World Health Organization histologic grade. A binary logistic regression model was built using sex, tumor size, and histologic type, with an acceptable fit by the Hosmer-Lemeshow test (P > 0.05). Male sex is an independent risk factor for recurrence that, associated with other factors such as tumor size and histologic type, explains 74.5% of all cases in the binary regression model. Copyright © 2017 Elsevier Inc. All rights reserved.
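A binary logistic regression of the kind used in the study can be sketched with a plain Newton-Raphson fit. The cohort below is simulated, and the predictors (sex, tumor size, a high-risk histology indicator) and coefficients are hypothetical stand-ins for the paper's variables, not its fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
sex = rng.integers(0, 2, n)          # 1 = male (hypothetical coding)
size = rng.normal(4.0, 1.5, n)       # tumor size in cm (hypothetical)
histo = rng.integers(0, 2, n)        # 1 = higher-risk histologic type
X = np.column_stack([np.ones(n), sex, size, histo])
beta_true = np.array([-4.0, 1.0, 0.5, 0.8])  # illustrative log-odds effects
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))  # recurrence indicator

# Newton-Raphson maximization of the logistic log-likelihood
beta = np.zeros(4)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))             # fitted probabilities
    grad = X.T @ (y - mu)                        # score vector
    hess = (X * (mu * (1 - mu))[:, None]).T @ X  # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratio_male = np.exp(beta[1])    # analogous to the reported male-sex risk
print(beta.round(2), round(odds_ratio_male, 2))
```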
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events, and the fully dynamic solver SPECFEM3D (Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity: fault maturity is related to the variability of Dc on a microscopic scale, with large variations of Dc representing immature faults and lower variations representing mature faults. Moreover, we impose a taper of (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip, and combined area of asperities versus moment magnitude. Finally, the
Dempsey, David; Suckale, Jenny
2016-05-01
Induced seismicity is of increasing concern for oil and gas, geothermal, and carbon sequestration operations, with several M > 5 events triggered in recent years. Modeling plays an important role in understanding the causes of this seismicity and in constraining seismic hazard. Here we study the collective properties of induced earthquake sequences and the physics underpinning them. In this first paper of a two-part series, we focus on the directivity ratio, which quantifies whether fault rupture is dominated by one (unilateral) or two (bilateral) propagating fronts. In a second paper, we focus on the spatiotemporal and magnitude-frequency distributions of induced seismicity. We develop a model that couples a fracture mechanics description of 1-D fault rupture with fractal stress heterogeneity and the evolving pore pressure distribution around an injection well that triggers earthquakes. The extent of fault rupture is calculated from the equations of motion for two tips of an expanding crack centered at the earthquake hypocenter. Under tectonic loading conditions, our model exhibits a preference for unilateral rupture and a normal distribution of hypocenter locations, two features that are consistent with seismological observations. On the other hand, catalogs of induced events when injection occurs directly onto a fault exhibit a bias toward ruptures that propagate toward the injection well. This bias is due to relatively favorable conditions for rupture that exist within the high-pressure plume. The strength of the directivity bias depends on a number of factors including the style of pressure buildup, the proximity of the fault to failure and event magnitude. For injection off a fault that triggers earthquakes, the modeled directivity bias is small and may be too weak for practical detection. For two hypothetical injection scenarios, we estimate the number of earthquake observations required to detect directivity bias.
Directory of Open Access Journals (Sweden)
T. R. Robinson
2017-09-01
Current methods to identify coseismic landslides immediately after an earthquake using optical imagery are too slow to effectively inform emergency response activities. Issues with cloud cover, data collection and processing, and manual landslide identification mean even the most rapid mapping exercises are often incomplete when the emergency response ends. In this study, we demonstrate how traditional empirical methods for modelling the total distribution and relative intensity (in terms of point density) of coseismic landsliding can be successfully undertaken in the hours and days immediately after an earthquake, allowing the results to effectively inform stakeholders during the response. The method uses fuzzy logic in a GIS (geographic information system) to quickly assess and identify the location-specific relationships between predisposing factors and landslide occurrence during the earthquake, based on small initial samples of identified landslides. We show that this approach can accurately model both the spatial pattern and the number density of landsliding from the event based on just several hundred mapped landslides, provided they have sufficiently wide spatial coverage, improving upon previous methods. This suggests that systematic high-fidelity mapping of landslides following an earthquake is not necessary for informing rapid modelling attempts. Instead, mapping should focus on rapid sampling from the entire affected area to generate results that can inform the modelling. This method is therefore suited to conditions in which imagery is affected by partial cloud cover or in which the total number of landslides is so large that mapping requires significant time to complete. The method has the potential to provide a quick assessment of landslide hazard after an earthquake and may inform emergency operations more effectively compared to current practice.
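A fuzzy-logic overlay of predisposing factors, of the general kind run in a GIS, can be sketched on two tiny raster grids. The factors, membership breakpoints, and gamma value below are illustrative assumptions, not those calibrated in the study.

```python
import numpy as np

# Hypothetical 2x2 factor grids: slope (degrees) and distance to fault (km)
slope = np.array([[5.0, 20.0], [35.0, 50.0]])
fault_dist = np.array([[0.5, 2.0], [5.0, 10.0]])

def linear_membership(x, lo, hi, increasing=True):
    """Linear fuzzy membership mapping a factor onto [0, 1]."""
    m = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return m if increasing else 1.0 - m

m_slope = linear_membership(slope, 10.0, 45.0)                       # steeper -> more susceptible
m_fault = linear_membership(fault_dist, 0.0, 8.0, increasing=False)  # nearer -> more susceptible

# Fuzzy gamma operator: blend of fuzzy product (AND) and algebraic sum (OR)
gamma = 0.7
f_and = m_slope * m_fault
f_or = 1.0 - (1.0 - m_slope) * (1.0 - m_fault)
susceptibility = (f_or ** gamma) * (f_and ** (1.0 - gamma))
print(susceptibility.round(3))
```

In practice the memberships would be fitted to the initial sample of mapped landslides, and the overlay evaluated on full-resolution raster layers for the affected region.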
Jibson, Randall W.; Jibson, Matthew W.
2003-01-01
Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OS X, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
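The core of the rigid-block method is easy to state in code: the block accumulates relative velocity whenever ground acceleration exceeds the critical (yield) acceleration, and permanent displacement is the integral of that velocity. The sketch below uses a single hypothetical acceleration pulse rather than one of the 2160 bundled strong-motion records, and the critical acceleration is an assumed value.

```python
import numpy as np

def newmark_displacement(accel, dt, ac):
    """Rigid-block Newmark analysis: integrate (a - ac) while sliding."""
    vel, disp = 0.0, 0.0
    for a in accel:
        if vel > 0.0 or a > ac:                  # sliding starts when a exceeds ac
            vel = max(vel + (a - ac) * dt, 0.0)  # block cannot slide backward
            disp += vel * dt                     # accumulate downslope displacement
    return disp

g = 9.81
dt = 0.01
t = np.arange(0.0, 1.0, dt)
accel = np.where(t < 0.2, 0.3 * g, 0.0)          # hypothetical 0.3 g, 0.2 s pulse
d = newmark_displacement(accel, dt, ac=0.1 * g)  # assumed 0.1 g critical acceleration
print(f"permanent displacement: {d:.3f} m")
```

A decoupled analysis would replace the rigid-block acceleration input with the dynamic response of a deformable slope, as the report's simplified decoupled model does.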
Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser
2018-03-01
The western segment of the Makran subduction zone is characterized by almost no major seismicity and no large earthquake for several centuries. A possible explanation for this behavior is that this segment is currently locked, accumulating energy to generate possible great future earthquakes. Under this assumption, a hypothetical rupture area is considered in the western Makran to set up different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. the 2015 Chile Mw 8.3, the 2011 Tohoku-Oki Mw 9.0 (using two different scenarios) and the 2006 Kuril Islands Mw 8.3 events, are scaled into the rupture area in the western Makran zone. Numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes; however, its effect diminishes from local tsunamis to regional and distant tsunamis. Among all considered scenarios for the western Makran, only a tsunamigenic earthquake similar to the 2011 Tohoku-Oki event can produce a significant far-field tsunami, and it is considered the worst-case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that along the entire Makran coast, the southeastern coast of Iran is the most vulnerable area with respect to tsunami hazard.
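The scaling step, mapping a published slip distribution onto the hypothetical western Makran rupture area, amounts to rescaling slip so the seismic moment matches a target magnitude. The sketch below uses an invented 3x3 slip pattern, assumed fault dimensions, and a typical crustal shear modulus, with the Hanks-Kanamori relation linking moment to Mw.

```python
import numpy as np

MU = 3.0e10  # assumed shear modulus, Pa

def moment_magnitude(slip, cell_area_m2):
    """Mw from a gridded slip distribution via the Hanks-Kanamori relation."""
    m0 = MU * cell_area_m2 * slip.sum()          # seismic moment, N m
    return (2.0 / 3.0) * (np.log10(m0) - 9.05)

# Invented heterogeneous slip pattern with a single asperity
pattern = np.array([[0.2, 0.5, 0.2],
                    [0.5, 1.0, 0.4],
                    [0.2, 0.4, 0.1]])

# Scale the pattern onto a hypothetical 200 km x 100 km rupture for a target Mw
cell_area = (200e3 / 3) * (100e3 / 3)            # area of one grid cell, m^2
target_mw = 8.3
m0_target = 10 ** (1.5 * target_mw + 9.05)
slip = pattern * m0_target / (MU * cell_area * pattern.sum())
print(slip.max().round(2), moment_magnitude(slip, cell_area))
```

The rescaled slip grid would then serve as the seafloor-deformation source for the tsunami propagation model.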
Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser
2018-04-01
The western segment of the Makran subduction zone is characterized by almost no major seismicity and no large earthquake for several centuries. A possible explanation for this behavior is that this segment is currently locked, accumulating energy to generate possible great future earthquakes. Under this assumption, a hypothetical rupture area is considered in the western Makran to set up different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. the 2015 Chile Mw 8.3, the 2011 Tohoku-Oki Mw 9.0 (using two different scenarios) and the 2006 Kuril Islands Mw 8.3 events, are scaled into the rupture area in the western Makran zone. Numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes; however, its effect diminishes from local tsunamis to regional and distant tsunamis. Among all considered scenarios for the western Makran, only a tsunamigenic earthquake similar to the 2011 Tohoku-Oki event can produce a significant far-field tsunami, and it is considered the worst-case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that along the entire Makran coast, the southeastern coast of Iran is the most vulnerable area with respect to tsunami hazard.
From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)
Jordan, T. H.
2009-12-01
Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior, an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are "communities of trust" and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.
Modified two-layer social force model for emergency earthquake evacuation
Zhang, Hao; Liu, Hong; Qin, Xin; Liu, Baoxi
2018-02-01
Studies of crowd behavior, together with related research on computer simulation, provide an effective basis for architectural design and effective crowd management. Based on low-density group organization patterns, a modified two-layer social force model is proposed in this paper to simulate and reproduce a group gathering process. First, this paper studies evacuation videos from the Luan'xian earthquake in 2012 and extends the study of group organization patterns to higher densities. Taking full advantage of the social force model's strength in simulating crowd gathering, a new method for grouping and guidance based on crowd dynamics is proposed. Second, a real-life grouping situation in earthquake evacuation is simulated and reproduced. Compared with the fundamental social force model and an existing guided-crowd model, the modified model reduces congestion time and more faithfully reflects group behaviors. The experimental results also show that a stable group pattern and a suitable leader can decrease collisions and allow a safer evacuation process.
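The base layer of a social force model, before any grouping or guidance layer is added, can be sketched as a driving term relaxing each agent toward its desired velocity plus pairwise repulsion. The geometry, parameter values, and two-agent setup below are illustrative assumptions in the spirit of Helbing's original formulation, not the modified two-layer model itself.

```python
import numpy as np

def social_force_step(pos, vel, goal, dt=0.05, tau=0.5, v0=1.3, A=2.0, B=0.3):
    """One explicit Euler step of a minimal social force model."""
    desired = goal - pos
    desired /= np.linalg.norm(desired, axis=1, keepdims=True)
    force = (v0 * desired - vel) / tau          # relaxation toward desired velocity
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:                          # exponential inter-agent repulsion
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d)
                force[i] += A * np.exp(-dist / B) * d / dist
    vel = vel + force * dt
    return pos + vel * dt, vel

# Two agents heading toward the same exit (hypothetical geometry, metres)
pos = np.array([[0.0, 0.0], [0.0, 0.5]])
vel = np.zeros((2, 2))
goal = np.array([[10.0, 0.25], [10.0, 0.25]])
for _ in range(100):
    pos, vel = social_force_step(pos, vel, goal)
print(pos.round(2))
```

A two-layer extension would add a second set of forces binding group members to a leader; congestion statistics then come from running many such agents through a constrained exit.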
REGIONAL SEISMIC AMPLITUDE MODELING AND TOMOGRAPHY FOR EARTHQUAKE-EXPLOSION DISCRIMINATION
Energy Technology Data Exchange (ETDEWEB)
Walter, W R; Pasyanos, M E; Matzel, E; Gok, R; Sweeney, J; Ford, S R; Rodgers, A J
2008-07-08
We continue exploring methodologies to improve earthquake-explosion discrimination using regional amplitude ratios such as P/S in a variety of frequency bands. Empirically, we demonstrate that such ratios separate explosions from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g., Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are also examining whether there is any relationship between the observed P/S and the point-source variability revealed by longer-period full waveform modeling (e.g., Ford et al., 2008). For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction; Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However, in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea-Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East. Monitoring the world for potential nuclear explosions requires characterizing seismic
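The basic P/S measurement can be sketched on synthetic traces. Everything here is hypothetical: toy seismograms with Gaussian-enveloped P and S arrivals, assumed window times, and a single band, standing in for the multi-band, path-corrected measurements the MDAC approach makes on real data.

```python
import numpy as np

def log_ps_ratio(trace, fs, p_win, s_win):
    """log10 of RMS P amplitude over RMS S amplitude for one trace."""
    def rms(t0, t1):
        seg = trace[int(t0 * fs):int(t1 * fs)]
        return np.sqrt(np.mean(seg ** 2))
    return np.log10(rms(*p_win) / rms(*s_win))

fs = 50.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(2)

def toy_seismogram(p_amp, s_amp):
    """Noise plus Gaussian-enveloped P (10 s) and S (25 s) arrivals."""
    w = 0.01 * rng.normal(size=t.size)
    w += p_amp * np.exp(-((t - 10.0) / 1.5) ** 2) * np.sin(2 * np.pi * 4.0 * t)
    w += s_amp * np.exp(-((t - 25.0) / 3.0) ** 2) * np.sin(2 * np.pi * 2.0 * t)
    return w

explosion = toy_seismogram(p_amp=1.0, s_amp=0.3)   # explosions: relatively P-rich
earthquake = toy_seismogram(p_amp=0.4, s_amp=1.0)  # earthquakes: relatively S-rich
r_ex = log_ps_ratio(explosion, fs, (8, 12), (22, 28))
r_eq = log_ps_ratio(earthquake, fs, (8, 12), (22, 28))
print(round(r_ex, 2), round(r_eq, 2))
```

Path corrections like MDAC adjust such ratios for attenuation, spreading, magnitude, and site terms before any discrimination threshold is applied.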
Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor
2016-01-01
A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and of the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher proportions of at-risk populations were found to be more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a decrease in the expected number of casualties.
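Entering effect measures into an existing loss model via logistic regression equations amounts to shifting a baseline casualty probability on the log-odds scale. The odds ratios and baseline rate below are placeholders, not the values derived in the meta-analysis.

```python
import math

def adjusted_probability(p_base, odds_ratios):
    """Shift a baseline probability on the log-odds scale by combined odds ratios."""
    logit = math.log(p_base / (1.0 - p_base))
    for odds_ratio in odds_ratios:
        logit += math.log(odds_ratio)
    return 1.0 / (1.0 + math.exp(-logit))

# Placeholder odds ratios for age > 65, gender, physical disability, low SES
ors = [1.8, 1.2, 2.5, 1.4]
p_base = 0.02                       # assumed baseline severe-injury probability
p_adj = adjusted_probability(p_base, ors)
print(round(p_adj, 4))
```

Applied cell by cell over a building-damage grid, this adjustment raises expected severe casualties where vulnerable groups are concentrated, which is the qualitative behavior the sensitivity analysis reports.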
International Nuclear Information System (INIS)
Luco, J.E.
1987-05-01
The solutions available for a number of dynamic dislocation fault models are examined in an attempt to establish some of the expected characteristics of earthquake ground motion in the near-source region. In particular, solutions for two-dimensional anti-plane shear and plane-strain models, as well as for three-dimensional fault models in full-space, uniform half-space and layered half-space media, are reviewed.
The finite-difference and finite-element modeling of seismic wave propagation and earthquake motion
International Nuclear Information System (INIS)
Moczo, P.; Kristek, J.; Pazak, P.; Balazovjech, M.; Moczo, P.; Kristek, J.; Galis, M.
2007-01-01
Numerical modeling of seismic wave propagation and earthquake motion is an irreplaceable tool in the investigation of the Earth's structure, processes in the Earth, and particularly earthquake phenomena. Among various numerical methods, the finite-difference method is the dominant method in the modeling of earthquake motion. Moreover, it is becoming more important in seismic exploration and structural modeling. At the same time, we are convinced that the best time of the finite-difference method in seismology is in the future. This monograph provides a tutorial and detailed introduction to the application of the finite-difference (FD), finite-element (FE), and hybrid FD-FE methods to the modeling of seismic wave propagation and earthquake motion. The text does not cover all topics and aspects of the methods. We focus on those to which we have contributed. We present alternative formulations of the equation of motion for a smooth elastic continuum. We then develop alternative formulations for a canonical problem with a welded material interface and free surface. We continue with a model of an earthquake source. We complete the general theoretical introduction with a chapter on the constitutive laws for elastic and viscoelastic media, and a brief review of strong formulations of the equation of motion. What follows is a block of chapters on the finite-difference and finite-element methods. We develop FD targets for the free surface and welded material interface. We then present various FD schemes for a smooth continuum, free surface, and welded interface. We focus on the staggered-grid and mainly optimally accurate FD schemes. We also present alternative formulations of the FE method. We include the FD and FE implementations of the traction-at-split-nodes method for simulation of dynamic rupture propagation. The FD modeling is applied to the model of the deep sedimentary Grenoble basin, France. The FD and FE methods are combined in the hybrid
Mechanisms driving local breast cancer recurrence in a model of breast-conserving surgery.
LENUS (Irish Health Repository)
Smith, Myles J
2012-02-03
OBJECTIVE: We aimed to identify mechanisms driving local recurrence in a model of breast-conserving surgery (BCS) for breast cancer. BACKGROUND: Breast cancer recurrence after BCS remains a clinically significant, but poorly understood problem. We have previously reported that recurrent colorectal tumours demonstrate altered growth dynamics, increased metastatic burden and resistance to apoptosis, mediated by upregulation of phosphoinositide-3-kinase/Akt (PI3K/Akt). We investigated whether similar characteristics were evident in a model of locally recurrent breast cancer. METHODS: Tumours were generated by orthotopic inoculation of 4T1 cells in two groups of female Balb/c mice and cytoreductive surgery performed when mean tumour size was above 150 mm(3). Local recurrence was observed and gene expression was examined using Affymetrix GeneChips in primary and recurrent tumours. Differential expression was confirmed with quantitative real-time polymerase chain reaction (qRT-PCR). Phosphorylation of Akt was assessed using Western immunoblotting. An ex vivo heat shock protein (HSP)-loaded dendritic cell vaccine was administered in the perioperative period. RESULTS: We observed a significant difference in the recurrent 4T1 tumour volume and growth rate (p < 0.05). Gene expression studies suggested roles for the PI3K/Akt system and local immunosuppression driving the altered growth kinetics. We demonstrated that perioperative vaccination with an ex vivo HSP-loaded dendritic cell vaccine abrogated recurrent tumour growth in vivo (p = 0.003 at day 15). CONCLUSION: Investigating therapies which target tumour survival pathways such as PI3K/Akt and boost immune surveillance in the perioperative period may be useful adjuncts to contemporary breast cancer treatment.
Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions
Kim, A.; Dreger, D.; Larsen, S.
2008-12-01
We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and the detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects finite fault inverse solutions. Various studies (e.g. Michaels and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. The fault zone at Parkfield is also wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and 2004 Parkfield earthquakes. For high-resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions for the 1989 Loma Prieta earthquake using both 1D and 3D Green's functions, with the same source parameterization and data but different Green's functions, and found that the models were quite different. This indicates that the choice of velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions. In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0
Field Investigations and a Tsunami Modeling for the 1766 Marmara Sea Earthquake, Turkey
Aykurt Vardar, H.; Altinok, Y.; Alpar, B.; Unlu, S.; Yalciner, A. C.
2016-12-01
Turkey is located in one of the world's most hazardous earthquake zones. The northern branch of the North Anatolian Fault beneath the Sea of Marmara, where the population is most concentrated, has been the most active fault branch at least since the late Pliocene. The Sea of Marmara region has been affected by many large tsunamigenic earthquakes; the most destructive ones are the 549, 553, 557, 740, 989, 1332, 1343, 1509, 1766, 1894, 1912 and 1999 events. In order to understand and determine the tsunami potential and possible effects along the coasts of this inland sea, detailed documentary, geophysical and numerical modelling studies are needed on the past earthquakes and their associated tsunamis, whose effects are presently unknown. On the northern coast of the Sea of Marmara region, the Kucukcekmece Lagoon has a high potential to trap and preserve tsunami deposits. Within the scope of this study, the lithological content, composition and sources of organic matter in the lagoon's bottom sediments were studied along a 4.63 m-long piston core recovered from the SE margin of the lagoon. The sedimentary composition and possible sources of the organic matter along the core were analysed, and the results were correlated with historical events on the basis of dating results. Finally, a tsunami scenario was tested for the May 22nd, 1766 Marmara Sea earthquake by using the widely used tsunami simulation model NAMIDANCE. The results show that the candidate tsunami deposits at depths of 180-200 cm below the lagoon bottom were related to the 1766 (May) earthquake. This work was supported by the Scientific Research Projects Coordination Unit of Istanbul University (Project 6384) and by the EU project TRANSFER for coring.
Castaldo, Raffaele; Tizzani, Pietro
2016-04-01
Many numerical models have been developed to simulate the deformation and stress changes associated with the faulting process, an important topic in fracture mechanics. In the proposed study, we investigate the impact of the deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated with different earthquake phenomena, exploiting structural-geological data in a Finite Element environment through an optimization procedure. In this framework, we model the failure processes in a physical mechanical scenario to evaluate the kinematics associated with the Mw 6.1 L'Aquila 2009 earthquake (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 earthquakes (Italy) and the Mw 8.3 Gorkha 2015 earthquake (Nepal). These seismic events are representative of different tectonic scenarios: normal, reverse and thrust faulting processes, respectively. In order to simulate the kinematics of the analyzed natural phenomena, we assume linear elastic behavior of the involved media under the plane stress approximation (a state of stress in which the normal stress σz and the shear stresses σxz and σyz, directed perpendicular to the x-y plane, are assumed to be zero). The finite element procedure consists of two stages: (i) compacting under the weight of the rock successions (gravity loading), the deformation model reaches a stable equilibrium; (ii) the co-seismic stage simulates, through a distributed slip along the active fault, the released stresses. To constrain the model solutions, we exploit the DInSAR deformation velocity maps retrieved from satellite data acquired by old and new generation sensors, such as ENVISAT, RADARSAT-2 and SENTINEL-1A, encompassing the studied earthquakes. More specifically, we first generate several 2D forward mechanical models, then we compare these with the recorded ground deformation fields in order to select the best boundary settings and parameters. Finally
Swallowing sound detection using hidden markov modeling of recurrence plot features
International Nuclear Information System (INIS)
Aboofazeli, Mohammad; Moussavi, Zahra
2009-01-01
Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
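The front end described above, delay embedding followed by recurrence plot features, can be sketched in a few lines of numpy. The embedding dimension, delay, and threshold below are arbitrary illustration values, not the ones tuned for tracheal sounds, and the recurrence rate stands in for the three unnamed features:

```python
import numpy as np

def takens_embed(x, dim=3, tau=2):
    """Delay embedding: rows are [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_plot(x, dim=3, tau=2, eps=0.2):
    """Thresholded pairwise-distance matrix of the embedded trajectory."""
    y = takens_embed(x, dim, tau)
    d = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    return (d <= eps).astype(int)

x = np.sin(np.linspace(0, 8 * np.pi, 200))   # stand-in for a sound segment
rp = recurrence_plot(x)
rr = rp.mean()                               # recurrence rate, one RP feature
```

Sequences of such features, computed over sliding windows, would then feed the hidden Markov models described in the abstract.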
International Nuclear Information System (INIS)
Parlos, A.G.; Chong, K.T.; Atiya, A.F.
1994-01-01
A nonlinear multivariable empirical model is developed for a U-tube steam generator using the recurrent multilayer perceptron network as the underlying model structure. The recurrent multilayer perceptron is a dynamic neural network, very effective in the input-output modeling of complex process systems. A dynamic gradient descent learning algorithm is used to train the recurrent multilayer perceptron, resulting in an order of magnitude improvement in convergence speed over static learning algorithms. In developing the U-tube steam generator empirical model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training and testing set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving average response. Extensive model validation studies indicate that the empirical model can substantially generalize (extrapolate), though online learning becomes necessary for tracking transients significantly different from the ones included in the training set and slowly varying U-tube steam generator dynamics. In view of the satisfactory modeling accuracy and the associated short development time, neural-network-based empirical models in some cases appear to provide a serious alternative to first-principles models. Caution, however, must be exercised because extensive on-line validation of these models is still warranted.
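A dynamic gradient descent scheme of the kind described (backpropagation through time on a recurrent network) can be sketched as follows. Since the steam generator data are not available here, a toy first-order process stands in for the plant, and the network size and learning rate are arbitrary:

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy first-order process standing in for the plant (actual U-tube steam
# generator data are not available here).
T = 200
u = rng.uniform(-1, 1, T)
d = np.zeros(T)
for t in range(1, T):
    d[t] = 0.9 * d[t - 1] + 0.1 * u[t]

nh = 8                                  # hidden units (arbitrary)
Wx = rng.normal(0, 0.3, nh)
Wh = rng.normal(0, 0.3, (nh, nh))
Wo = rng.normal(0, 0.3, nh)
bh = np.zeros(nh)
bo = 0.0
lr = 0.05
losses = []
for epoch in range(200):
    # forward pass through the recurrent network
    h = np.zeros((T + 1, nh))
    y = np.zeros(T)
    for t in range(T):
        h[t + 1] = np.tanh(Wx * u[t] + Wh @ h[t] + bh)
        y[t] = Wo @ h[t + 1] + bo
    e = y - d
    losses.append(0.5 * np.mean(e**2))
    # dynamic gradients: backpropagation through time
    gWx = np.zeros(nh); gWh = np.zeros((nh, nh)); gWo = np.zeros(nh)
    gbh = np.zeros(nh); gbo = 0.0
    g_next = np.zeros(nh)
    for t in reversed(range(T)):
        dh = Wo * e[t] + Wh.T @ g_next       # gradient w.r.t. h[t+1]
        g = dh * (1 - h[t + 1] ** 2)         # through the tanh
        gWo += e[t] * h[t + 1]; gbo += e[t]
        gWx += g * u[t]; gWh += np.outer(g, h[t]); gbh += g
        g_next = g
    for p, grad in ((Wx, gWx), (Wh, gWh), (Wo, gWo), (bh, gbh)):
        p -= lr * grad / T
    bo -= lr * gbo / T
```

The backward loop is what makes the gradient "dynamic": the error at each step flows back through the recurrent weight matrix, unlike a static algorithm that treats each step independently.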
A Virtual Tour of the 1868 Hayward Earthquake in Google Earth™
Lackey, H. G.; Blair, J. L.; Boatwright, J.; Brocher, T.
2007-12-01
The 1868 Hayward earthquake has been overshadowed by the subsequent 1906 San Francisco earthquake that destroyed much of San Francisco. Nonetheless, a modern recurrence of the 1868 earthquake would cause widespread damage to the densely populated Bay Area, particularly in the east Bay communities that have grown up virtually on top of the Hayward fault. Our concern is heightened by paleoseismic studies suggesting that the recurrence interval for the past five earthquakes on the southern Hayward fault is 140 to 170 years. Our objective is to build an educational web site that illustrates the cause and effect of the 1868 earthquake, drawing on scientific and historic information. We will use Google Earth™ software to visually illustrate complex scientific concepts in a way that is understandable to a non-scientific audience. This web site will lead the viewer from a regional summary of the plate tectonics and faulting system of western North America, to more specific information about the 1868 Hayward earthquake itself. Text and Google Earth™ layers will include modeled shaking of the earthquake, relocations of historic photographs, reconstruction of damaged buildings as 3-D models, and additional scientific data that may come from the many scientific studies conducted for the 140th anniversary of the event. Earthquake engineering concerns will be stressed, including population density, vulnerable infrastructure, and lifelines. We will also present detailed maps of the Hayward fault, measurements of fault creep, and geologic evidence of its recurrence. Understanding the science behind earthquake hazards is an important step in preparing for the next significant earthquake. We hope to communicate to the public and students of all ages, through visualizations, not only the cause and effect of the 1868 earthquake, but also modern seismic hazards of the San Francisco Bay region.
Norbeck, J. H.; Rubinstein, J. L.
2017-12-01
The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 21.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.
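The rate-and-state prediction quoted above, that the seismicity rate evolves toward a steady value set by the ratio of current to background stressing rate, follows Dieterich's (1994) seismicity-rate ODE. A minimal sketch with illustrative parameter values, not the calibrated Oklahoma model:

```python
import numpy as np

def seismicity_rate(stressing_rate, t_end=2.0, dt=1e-3,
                    r_bg=1.0, sdot_bg=1.0, a_sigma=0.5):
    """Euler integration of Dieterich's (1994) seismicity-rate model:
    dgamma/dt = (1 - gamma * sdot(t)) / (a * sigma),
    R = r_bg / (gamma * sdot_bg).
    All parameter values are illustrative, not the Oklahoma calibration."""
    gamma = 1.0 / sdot_bg                    # background steady state
    ts = np.arange(0.0, t_end, dt)
    rates = np.empty(ts.size)
    for k, t in enumerate(ts):
        rates[k] = r_bg / (gamma * sdot_bg)
        gamma += dt * (1.0 - gamma * stressing_rate(t)) / a_sigma
    return ts, rates

# step the stressing rate up 5x (injection-like) and watch R approach 5*r_bg
ts, rates = seismicity_rate(lambda t: 5.0)
```

The transient decays over a characteristic time aσ/Ṡ, inversely proportional to the stressing rate, matching the timescale behavior described in the abstract.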
Seismo-tectonic model regarding the genesis and occurrence of Vrancea (Romania) earthquakes
International Nuclear Information System (INIS)
Enescu, D.; Enescu, B.D.
1998-01-01
The first part of this paper contains a very short description of some previous attempts in seismo-tectonic modeling of the Vrancea zone. The seismo-tectonic model developed by the authors of this work is presented in the second part of the paper. This model is based on the spatial distribution of hypocenters and focal mechanism characteristics. The lithosphere structure and tectonics of the directly involved zones represent very important characteristics of the seismo-tectonic model. Some two-dimensional and three-dimensional sketches of the model, which satisfy all the above-mentioned characteristics and give realistic explanations regarding the genesis and occurrence of Vrancea earthquakes, are presented. (authors)
Hysteretic recurrent neural networks: a tool for modeling hysteretic materials and systems
International Nuclear Information System (INIS)
Veeramani, Arun S; Crews, John H; Buckner, Gregory D
2009-01-01
This paper introduces a novel recurrent neural network, the hysteretic recurrent neural network (HRNN), that is ideally suited to modeling hysteretic materials and systems. This network incorporates a hysteretic neuron consisting of conjoined sigmoid activation functions. Although similar hysteretic neurons have been explored previously, the HRNN is unique in its utilization of simple recurrence to 'self-select' relevant activation functions. Furthermore, training is facilitated by placing the network weights on the output side, allowing standard backpropagation of error training algorithms to be used. We present two- and three-phase versions of the HRNN for modeling hysteretic materials with distinct phases. These models are experimentally validated using data collected from shape memory alloys and ferromagnetic materials. The results demonstrate the HRNN's ability to accurately generalize hysteretic behavior with a relatively small number of neurons. Additional benefits lie in the network's ability to identify statistical information concerning the macroscopic material by analyzing the weights of the individual neurons.
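One way to picture a hysteretic neuron built from conjoined sigmoids with simple recurrence is the toy unit below. Its form and parameter values are a guess at the general idea, not the paper's exact formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class HystereticNeuron:
    """Toy hysteretic unit: two sigmoids shifted by +/-c, with the previous
    output fed back to 'self-select' the active branch. A guess at the
    general idea, not the paper's exact formulation."""
    def __init__(self, c=1.0, gain=4.0):
        self.c, self.gain, self.y = c, gain, 0.0

    def step(self, x):
        up = sigmoid(self.gain * (x - self.c))    # branch for rising input
        down = sigmoid(self.gain * (x + self.c))  # branch for falling input
        self.y = self.y * down + (1.0 - self.y) * up
        return self.y

neuron = HystereticNeuron()
xs = np.linspace(-3, 3, 121)
up_pass = [neuron.step(x) for x in xs]          # sweep input upward
down_pass = [neuron.step(x) for x in xs[::-1]]  # then back down
```

Sweeping the input up and then back down traces different output branches at the same input value, which is the hysteresis loop such a neuron contributes to the network.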
Folk music style modelling by recurrent neural networks with long short term memory units
Sturm, Bob; Santos, João Felipe; Korshunova, Iryna
2015-01-01
We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models with nearly 24,000 high-level transcriptions of folk tunes. We discuss our ongoing work.
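Structurally, a three-layer LSTM generator of the kind described can be sketched as follows. The weights here are random and untrained, the vocabulary is a stand-in for folk-tune transcription symbols, and the layer sizes are arbitrary, so this illustrates only the architecture, not the trained models:

```python
import numpy as np
rng = np.random.default_rng(0)

V, H = 32, 64   # toy vocabulary (stand-in for transcription symbols), state size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_layer(n_in, n_h):
    return {"W": rng.normal(0, 0.1, (4 * n_h, n_in + n_h)),
            "b": np.zeros(4 * n_h)}

def lstm_step(layer, x, h, c):
    z = layer["W"] @ np.concatenate([x, h]) + layer["b"]
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g            # cell state update
    h = o * np.tanh(c)           # hidden state
    return h, c

layers = [make_layer(V, H)] + [make_layer(H, H) for _ in range(2)]  # 3 layers
Wout = rng.normal(0, 0.1, (V, H))

def sample(n_tokens):
    h = [np.zeros(H) for _ in layers]
    c = [np.zeros(H) for _ in layers]
    tok, out = 0, []
    for _ in range(n_tokens):
        x = np.eye(V)[tok]                       # one-hot input token
        for k, layer in enumerate(layers):
            h[k], c[k] = lstm_step(layer, x, h[k], c[k])
            x = h[k]
        p = np.exp(Wout @ x)
        p /= p.sum()                             # softmax over the vocabulary
        tok = rng.choice(V, p=p)
        out.append(int(tok))
    return out

seq = sample(50)
```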
Encoding of phonology in a recurrent neural model of grounded speech
Alishahi, Afra; Barking, Marie; Chrupala, Grzegorz; Levy, Roger; Specia, Lucia
2017-01-01
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how
Modelling the phonotactic structure of natural language words with simple recurrent networks
Stoianov, [No Value; Nerbonne, J; Bouma, H; Coppen, PA; vanHalteren, H; Teunissen, L
1998-01-01
Simple Recurrent Networks (SRN) are Neural Network (connectionist) models able to process natural language. Phonotactics concerns the order of symbols in words. We continued an earlier unsuccessful trial to model the phonotactics of Dutch words with SRNs. In order to overcome the previously reported
An Agent Model of Temporal Dynamics in Relapse and Recurrence in Depression
Aziz, A.A.; Klein, M.C.A.; Treur, J.
2009-01-01
This paper presents a dynamic agent model of recurrences of a depression for an individual. Based on several personal characteristics and a representation of events (i.e. life events or daily hassles) the agent model can simulate whether a human agent that recovered from a depression will fall into
Mothers Coping With Bereavement in the 2008 China Earthquake: A Dual Process Model Analysis.
Chen, Lin; Fu, Fang; Sha, Wei; Chan, Cecilia L W; Chow, Amy Y M
2017-01-01
The purpose of this study is to explore the grief experiences of mothers after they lost their children in the 2008 China earthquake. Informed by the Dual Process Model, this study conducted in-depth interviews to explore how six bereaved mothers coped with such grief over a 2-year period. Right after the earthquake, these mothers suffered from intensive grief. They primarily coped with loss-oriented stressors. As time passed, these mothers began to focus on restoration-oriented stressors to face changes in life. This coping trajectory was a dynamic and integral process in which bereaved mothers oscillated between loss- and restoration-oriented stressors. This study offers insight that extends the existing empirical evidence for the Dual Process Model.
Borrero, Jose C.; Kalligeris, Nikos; Lynett, Patrick J.; Fritz, Hermann M.; Newman, Andrew V.; Convers, Jaime A.
2014-12-01
On 27 August 2012 (04:37 UTC; 26 August, 10:37 p.m. local time) a magnitude Mw = 7.3 earthquake occurred off the coast of El Salvador and generated a surprisingly large local tsunami. Following the event, local and international tsunami teams surveyed the tsunami effects in El Salvador and northern Nicaragua. The tsunami reached a maximum height of ~6 m with inundation of up to 340 m inland along a 25 km section of coastline in eastern El Salvador. Less severe inundation was reported in northern Nicaragua. In the far-field, the tsunami was recorded by a DART buoy and tide gauges in several locations of the eastern Pacific Ocean but did not cause any damage. The field measurements and recordings are compared to numerical modeling results using initial conditions of tsunami generation based on finite-fault earthquake and tsunami inversions and a uniform slip model.
Pre-Clinical Model to Study Recurrent Venous Thrombosis in the Inferior Vena Cava.
Andraska, Elizabeth A; Luke, Catherine E; Elfline, Megan A; Henke, Samuel P; Madapoosi, Siddharth S; Metz, Allan K; Hoinville, Megan E; Wakefield, Thomas W; Henke, Peter K; Diaz, Jose A
2018-06-01
Patients with deep vein thrombosis (VT) have over 30% recurrence, directly increasing their risk of post-thrombotic syndrome. Current murine models of inferior vena cava (IVC) VT host a single thrombosis event. We aimed to develop a murine model of recurrent IVC VT. An initial VT was induced using the electrolytic IVC model (EIM) with constant blood flow. This approach takes advantage of the restored vein lumen 21 days after a single VT event in the EIM, demonstrated by ultrasound. We then induced a second VT 21 days later, using either the EIM or an IVC ligation model for comparison. The control groups were a sham surgery followed, 21 days later, by either EIM or IVC ligation. IVC wall and thrombus were harvested 2 days after the second insult and analysed for IVC and thrombus size, gene expression of fibrotic markers, histology for collagen, and Western blot for citrullinated histone 3 (Cit-H3) and fibrin. Ultrasound confirmed the first VT and its progressive resolution, with an anatomical channel allowing room for the second thrombus by day 21. As compared with a primary VT, recurrent VT has heavier walls with significant up-regulation of transforming growth factor-β (TGF-β), elastin, interleukin (IL)-6, matrix metallopeptidase 9 (MMP9) and MMP2, and a thrombus with high Cit-H3 and fibrin content. Experimental recurrent thrombi are structurally and compositionally different from primary VT, with a greater pro-fibrotic vein wall remodelling profile. This work provides an IVC model of VT recurrence that will help improve the current understanding of the biological mechanisms and directed treatment of recurrent VT.
Assessment of earthquake-induced tsunami hazard at a power plant site
International Nuclear Information System (INIS)
Ghosh, A.K.
2008-01-01
This paper presents a study of the tsunami hazard due to submarine earthquakes at a power plant site on the east coast of India. The paper considers various sources of earthquakes from the tectonic information, and records of past earthquakes and tsunamis. A magnitude-frequency relationship for the earthquake occurrence rate and a simplified model for tsunami run-up height as a function of earthquake magnitude and source-to-site distance have been developed. Finally, considering equal likelihood of generation of earthquakes anywhere on each of the faults, the tsunami hazard has been evaluated and presented as a relationship between tsunami height and its mean recurrence interval (MRI). Probability of exceedance of a certain wave height in a given period of time is also presented. These studies will be helpful in estimating the tsunami-induced flooding potential at the site.
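The hazard integration described, a magnitude-frequency law per source combined with a run-up model and a Poisson occurrence assumption, can be sketched as below. The Gutenberg-Richter constants and the run-up scaling coefficients are hypothetical stand-ins, not the paper's fitted relations:

```python
import numpy as np

def gr_rate(m, a=4.0, b=1.0):
    """Annual rate of earthquakes with magnitude >= m (log10 N = a - b*m)."""
    return 10.0 ** (a - b * m)

def runup_height(m, dist_km):
    """HYPOTHETICAL run-up scaling: grows with magnitude, decays with distance."""
    return 10.0 ** (0.5 * m - 3.0) / np.sqrt(dist_km)

def exceedance_rate(h, faults, mags=np.arange(6.0, 9.0, 0.1)):
    """Annual rate of run-up >= h, summed over faults and magnitude bins,
    with each fault collapsed to one representative source-to-site distance."""
    lam = 0.0
    for dist_km, weight in faults:
        inc = gr_rate(mags) - gr_rate(mags + 0.1)   # rate in each bin
        lam += weight * np.sum(inc[runup_height(mags, dist_km) >= h])
    return lam

faults = [(100.0, 0.5), (300.0, 0.5)]    # (distance to site in km, weight)
lam2 = exceedance_rate(2.0, faults)      # annual rate of run-up >= 2 m
mri = 1.0 / lam2                         # mean recurrence interval
p50 = 1.0 - np.exp(-lam2 * 50.0)         # Poisson exceedance prob. in 50 yr
```

The MRI is the reciprocal of the annual exceedance rate, and the probability of exceedance in a window T follows from the Poisson assumption as 1 - exp(-λT).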
Hysteretic MDOF Model to Quantify Damage for RC Shear Frames Subject to Earthquakes
DEFF Research Database (Denmark)
Köylüoglu, H. Ugur; Nielsen, Søren R.K.; Cakmak, Ahmet S.
A hysteretic mechanical formulation is derived to quantify local, modal and overall damage in reinforced concrete (RC) shear frames subject to seismic excitation. Each interstorey is represented by a Clough and Johnston (1966) hysteretic constitutive relation with degrading elastic fraction of th...... shear frame is subject to simulated earthquake excitations, which are modelled as a stationary Gaussian stochastic process with Kanai-Tajimi spectrum, multiplied by an envelope function. The relationship between local, modal and overall damage indices is investigated statistically....
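The excitation model named above, a stationary Gaussian process with Kanai-Tajimi spectrum multiplied by an envelope function, can be sampled by spectral representation. All parameter values below (ground frequency, damping, spectral intensity, envelope shape) are illustrative:

```python
import numpy as np
rng = np.random.default_rng(0)

def kanai_tajimi_accel(t, wg=15.7, zg=0.6, s0=0.01):
    """Sample a stationary Gaussian process with Kanai-Tajimi PSD by
    spectral representation, shaped by a simple envelope. All parameter
    values are illustrative."""
    w = np.linspace(0.1, 50.0, 500)                 # frequency grid (rad/s)
    dw = w[1] - w[0]
    r = (w / wg) ** 2
    psd = s0 * (1 + 4 * zg**2 * r) / ((1 - r) ** 2 + 4 * zg**2 * r)
    phases = rng.uniform(0, 2 * np.pi, w.size)
    a = np.sqrt(2 * psd * dw) @ np.cos(np.outer(w, t) + phases[:, None])
    env = np.clip(t / 2.0, 0, 1) * np.exp(-0.2 * np.maximum(t - 10.0, 0))
    return env * a

t = np.linspace(0.0, 20.0, 2001)
acc = kanai_tajimi_accel(t)
```

Each sampled record would then drive the hysteretic shear-frame model, and damage indices would be collected over many such realizations.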
Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximillan J.
2017-01-01
We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as
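The ETAS component can be illustrated with a bare-bones branching simulation: background events from a Poisson process, Gutenberg-Richter magnitudes, and Omori-Utsu-distributed aftershock delays. The parameter values are placeholders, not the UCERF3-ETAS calibration, and none of the fault-based machinery is represented:

```python
import numpy as np
rng = np.random.default_rng(0)

# Illustrative ETAS (epidemic-type aftershock sequence) branching simulation.
# All parameter values below are placeholders, not calibrated UCERF3-ETAS
# values; ALPHA < B keeps the triggering cascade subcritical.
MU, K, ALPHA, C, P, B, M0, T_END = 0.5, 0.1, 0.8, 0.01, 1.2, 1.0, 2.5, 365.0

def gr_mags(n):
    # Gutenberg-Richter magnitudes: P(M > m) = 10**(-B*(m - M0))
    return M0 + rng.exponential(1.0 / (B * np.log(10)), n)

def omori_delays(n):
    # inverse-CDF sample of the Omori-Utsu kernel (1 + t/C)**(-P), P > 1
    u = rng.random(n)
    return C * ((1.0 - u) ** (1.0 / (1.0 - P)) - 1.0)

def simulate():
    n_bg = rng.poisson(MU * T_END)                 # background (Poisson) events
    parents = list(zip(rng.uniform(0, T_END, n_bg), gr_mags(n_bg)))
    catalog = list(parents)
    for _ in range(10):                            # generation cap for the sketch
        children = []
        for t0, m in parents:
            n_kids = rng.poisson(K * 10 ** (ALPHA * (m - M0)))
            for dt, cm in zip(omori_delays(n_kids), gr_mags(n_kids)):
                if t0 + dt <= T_END:
                    children.append((t0 + dt, cm))
        if not children:
            break
        catalog.extend(children)
        parents = children
    return sorted(catalog)

catalog = simulate()
```

Conditioning such a simulation on observed prior events, and coupling the magnitude-frequency distribution to faults, is the harder part that UCERF3-ETAS addresses.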
Umeda, Yasuyuki; Ishida, Fujimaro; Tsuji, Masanori; Furukawa, Kazuhiro; Shiba, Masato; Yasuda, Ryuta; Toma, Naoki; Sakaida, Hiroshi; Suzuki, Hidenori
2017-01-01
This study aimed to predict recurrence after coil embolization of unruptured cerebral aneurysms with computational fluid dynamics (CFD) using porous media modeling (porous media CFD). A total of 37 unruptured cerebral aneurysms treated with coiling were analyzed using follow-up angiograms, CFD simulated prior to coiling (control CFD), and porous media CFD. Coiled aneurysms were classified into stable or recurrence groups according to follow-up angiogram findings. Morphological parameters, coil packing density, and hemodynamic variables were evaluated for their correlations with aneurysmal recurrence. We also calculated residual flow volumes (RFVs), a novel hemodynamic parameter used to quantify the residual aneurysm volume after simulated coiling in which the mean flow velocity remains > 1.0 cm/s. Follow-up angiograms showed 24 aneurysms in the stable group and 13 in the recurrence group. The Mann-Whitney U test demonstrated that maximum size, dome volume, neck width, neck area, and coil packing density were significantly different between the two groups, and the recurrence group showed larger RFVs in the porous media CFD. Multivariate logistic regression analyses demonstrated that RFV was the only independently significant factor (odds ratio, 1.06; 95% confidence interval, 1.01-1.11; P = 0.016). The study findings suggest that RFV calculated under porous media modeling predicts the recurrence of coiled aneurysms.
Directory of Open Access Journals (Sweden)
S. Hergarten
2011-09-01
The Olami-Feder-Christensen model is probably the most studied model in the context of self-organized criticality and reproduces several statistical properties of real earthquakes. We investigate and explain synchronization and desynchronization of earthquakes in this model in the nonconservative regime and its relevance for the power-law distribution of the event sizes (Gutenberg-Richter law) and for temporal clustering of earthquakes. The power-law distribution emerges from synchronization, and its scaling exponent can be derived as τ = 1.775 from the scaling properties of the rupture areas' perimeters. In contrast, the occurrence of foreshocks and aftershocks according to Omori's law is closely related to desynchronization. This mechanism of foreshock and aftershock generation differs strongly from the widespread idea of spontaneous triggering and suggests why even some large earthquakes are not preceded by any foreshocks in nature.
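The Olami-Feder-Christensen model itself is compact enough to state in full: a grid of stresses driven uniformly to threshold, with a fraction α per neighbor (nonconservative for α < 0.25) redistributed on toppling. A minimal sketch with a small grid and an illustrative α:

```python
import numpy as np
rng = np.random.default_rng(0)

L_SIZE, ALPHA, F_TH = 32, 0.2, 1.0   # ALPHA < 0.25: nonconservative regime

def relax(f):
    """Topple every site at or above threshold; return the avalanche size."""
    size = 0
    while True:
        unstable = np.argwhere(f >= F_TH)
        if unstable.size == 0:
            return size
        for i, j in unstable:
            df = f[i, j]
            f[i, j] = 0.0
            size += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < L_SIZE and 0 <= nj < L_SIZE:   # open boundaries
                    f[ni, nj] += ALPHA * df

f = rng.uniform(0.0, F_TH, (L_SIZE, L_SIZE))
sizes = []
for _ in range(2000):
    f += F_TH - f.max() + 1e-12      # uniform drive up to the next toppling
    sizes.append(relax(f))
```

After a long transient, the avalanche-size histogram of such runs approaches the power law discussed above; this sketch only records the sizes.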
Identified EM Earthquake Precursors
Jones, Kenneth, II; Saxton, Patrick
2014-05-01
Many attempts have been made to determine a sound forecasting method for earthquakes and, in turn, warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate for creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating long-wave generation with sufficient amplitude and the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006 to 2011, and outside the NE Texas town of Timpson in February 2013. The antennae have mobility and observations were noted for
A random energy model for size dependence: recurrence vs. transience
Külske, Christof
1998-01-01
We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes) that can be seen as
Recurrent Neural Network Modeling of Nearshore Sandbar Behavior
Pape, L.; Ruessink, B.G.; Wiering, M.A.; Turner, I.L.
2007-01-01
The temporal evolution of nearshore sandbars (alongshore ridges of sand fringing coasts in water depths less than 10 m and of paramount importance for coastal safety) is commonly predicted using process-based models. These models are autoregressive and require offshore wave characteristics as
A stochastic modeling of recurrent measles epidemic
African Journals Online (AJOL)
A simple stochastic mathematical model is developed and investigated for the dynamics of measles epidemic. The model, which is a multi-dimensional diffusion process, includes susceptible individuals, latent (exposed), infected and removed individuals. Stochastic effects are assumed to arise in the process of infection of ...
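A discrete jump-process analogue of the multi-dimensional S-E-I-R dynamics described can be simulated with Gillespie's direct method. Note that the paper's model is a diffusion approximation rather than this exact jump process, and the rate constants below are hypothetical:

```python
import random
random.seed(1)

def gillespie_seir(S, E, I, R, beta, sigma, gamma, t_max):
    """Stochastic SEIR epidemic simulated with Gillespie's direct method.
    Illustrative sketch only: the paper's model is a multi-dimensional
    diffusion process, of which this jump process is a discrete analogue."""
    t, N = 0.0, S + E + I + R
    times, infected = [0.0], [I]
    while t < t_max and (E + I) > 0:
        # event rates: infection S->E, symptom onset E->I, removal I->R
        rates = [beta * S * I / N, sigma * E, gamma * I]
        total = sum(rates)
        t += random.expovariate(total)
        u = random.uniform(0.0, total)
        if u < rates[0]:
            S, E = S - 1, E + 1
        elif u < rates[0] + rates[1]:
            E, I = E - 1, I + 1
        else:
            I, R = I - 1, R + 1
        times.append(t)
        infected.append(I)
    return times, infected

times, infected = gillespie_seir(S=990, E=0, I=10, R=0,
                                 beta=0.5, sigma=0.2, gamma=0.1, t_max=100.0)
```

Repeated runs of such a simulation exhibit the stochastic fluctuations around the deterministic epidemic curve that motivate the diffusion treatment.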
Use of Kazakh nuclear explosions for testing dilatancy diffusion model of earthquake prediction
International Nuclear Information System (INIS)
Srivastava, H.N.
1979-01-01
P wave travel time anomalies from Kazakh explosions during the years 1965-1972 were studied with reference to the Jeffreys-Bullen (1952) and Herrin (1968) travel time tables and discussed using the F ratio test at seven stations in Himachal Pradesh. For these events, the temporal and spatial variations of travel time residuals were examined from the point of view of the long-term changes in velocity known to precede earthquakes, and of local geology. The results show a preference for the Herrin travel time tables at these epicentral distances from the Kazakh explosions. The F ratio test indicated that the variation between sample means of different stations in the network was larger than can be attributed to sampling error. Although the spatial variation of mean residuals (1965-1972) could generally be explained on the basis of local geology, the temporal variations of such residuals from Kazakh explosions offer limited application in testing the dilatancy model of earthquake prediction. (auth.)
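The F ratio test referred to compares between-station and within-station variance of the travel-time residuals (one-way ANOVA). A sketch with hypothetical residual samples standing in for the station data:

```python
import numpy as np
rng = np.random.default_rng(0)

def f_ratio(groups):
    """One-way ANOVA F statistic: between-group vs within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical travel-time residuals (s) at three stations; the third has a
# genuine offset, as local geology might produce
stations = [rng.normal(mu, 0.3, 40) for mu in (0.0, 0.0, 0.5)]
F = f_ratio(stations)
```

A large F relative to the critical value of the F distribution with (k-1, n-k) degrees of freedom indicates that station means differ by more than sampling error, which is the conclusion reported above.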
Lozos, J.
2017-12-01
The great San Andreas Fault (SAF) earthquake of 9 January 1857, estimated at M7.9, was one of California's largest historic earthquakes. Its 360 km rupture trace follows the Carrizo and Mojave segments of the SAF, including the 30° compressional Big Bend in the fault. If 1857 were a characteristic rupture, the hazard implications for southern California would be dire, especially given the inferred 150 year recurrence interval for this section of the fault. However, recent paleoseismic studies in this region suggest that 1857-type events occur less frequently than single-segment Carrizo or Mojave ruptures, and that the hinge of the Big Bend is a barrier to through-going rupture. Here, I use 3D dynamic rupture modeling to attempt to reproduce the rupture length and surface slip distribution of the 1857 earthquake, to determine which physical conditions allow rupture to negotiate the Big Bend of the SAF. These models incorporate the nonplanar geometry of the SAF, an observation-based heterogeneous regional velocity structure (SCEC CVM), and a regional stress field from seismicity literature. Under regional stress conditions, I am unable to produce model events that both match the observed surface slip on the Carrizo and Mojave segments of the SAF and include rupture through the hinge of the Big Bend. I suggest that accumulated stresses at the bend hinge from multiple smaller Carrizo or Mojave ruptures may be required to allow rupture through the bend — a concept consistent with paleoseismic observations. This study may contribute to understanding the cyclicity of hazard associated with the southern-central SAF.
International Nuclear Information System (INIS)
Skalozubov, V.I.; Gablaya, T.V.; Vashchenko, V.N.; Gerasimenko, T.V.; Kozlov, I.L.
2014-01-01
We propose a hydrodynamic model of possible flooding of the industrial site at Zaporozh'e NPP under design basis earthquakes and hurricanes. In contrast to the quasi-stationary approach of the stress tests, the proposed model takes into account the dynamic nature of the flooding processes, as well as the direct impact of extreme external events on the Kakhovske reservoir. As a result of the hydrodynamic modeling, we identify the possible conditions and criteria for flooding of the industrial site under extreme external influences.
Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak
2012-01-01
In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately the ∼100-year instrumental, several-100-year historical, and few-1000-year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics-based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators' realism. The simulators appear to pass this necessary test. In addition, the physics-based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault-system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards-Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all-California fault
Seismic waves and earthquakes in a global monolithic model
Roubíček, Tomáš
2018-03-01
The philosophy that a single "monolithic" model can "asymptotically" replace and couple, in a simple and elegant way, several specialized models relevant to various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics are coupled to capture, e.g. (here by a simplified model), ruptures of lithospheric faults generating seismic waves, which then propagate through the solid-like mantle and inner core as both shear (S) and pressure (P) waves, while S-waves are suppressed in the fluidic outer core and in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way, with the corresponding interfacial conditions implicitly involved, provided their parameters are scaled appropriately in the different Earth layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for illustration, only on a relatively simple Jeffreys viscoelastic damageable material at small strains, whose various scalings (limits) can lead to a Boger viscoelastic fluid or even to a purely elastic (inviscid) fluid. The self-induced gravity field and Coriolis, centrifugal, and tidal forces are also included in the global model. A rigorous mathematical analysis concerning the existence of solutions, the convergence of the mentioned scalings, and energy conservation is briefly presented.
Application of the recurrent multilayer perceptron in modeling complex process dynamics.
Parlos, A G; Chong, K T; Atiya, A F
1994-01-01
A nonlinear dynamic model is developed for a process system, namely a heat exchanger, using the recurrent multilayer perceptron network as the underlying model structure. The perceptron is a dynamic neural network, which appears effective in the input-output modeling of complex process systems. Dynamic gradient descent learning is used to train the recurrent multilayer perceptron, resulting in an order-of-magnitude improvement in convergence speed over a static learning algorithm used to train the same network. In developing the empirical process model, the effects of actuator, process, and sensor noise on the training and testing sets are investigated. Learning and prediction both appear very effective, despite the presence of training-set and testing-set noise, respectively. The recurrent multilayer perceptron appears to learn the deterministic part of a stochastic training set, and it predicts approximately a moving-average response of various testing sets. Extensive model-validation studies with signals that are encountered in the operation of the modeled process system, that is, steps and ramps, indicate that the empirical model can substantially generalize to operational transients, including accurate prediction of instabilities not in the training set. However, the accuracy of the model beyond these operational transients has not been investigated. Furthermore, online learning is necessary during some transients and for tracking slowly varying process dynamics. Neural-network-based empirical models appear in some cases to provide a serious alternative to first-principles models.
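As a toy illustration of the dynamic gradient-descent training described above, the sketch below fits a single linear recurrent unit to a simulated first-order process. The process, network size, and hyperparameters are invented stand-ins; the paper's recurrent multilayer perceptron and heat-exchanger data are far richer.

```python
import numpy as np

# Toy stand-in process: y[k+1] = 0.9*y[k] + 0.1*u[k].
# A one-unit linear recurrent model is trained by truncated (depth-1)
# dynamic gradient descent to reproduce it.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 200)          # excitation signal
y = np.zeros(201)
for k in range(200):                     # simulate the "process"
    y[k + 1] = 0.9 * y[k] + 0.1 * u[k]

def run_epoch(w_rec, w_in, lr=0.05):
    """One sweep of truncated dynamic gradient descent over the record."""
    h, sse = 0.0, 0.0
    for k in range(200):
        h_new = w_rec * h + w_in * u[k]  # recurrent state update
        err = h_new - y[k + 1]
        sse += err * err
        w_rec -= lr * err * h            # gradient truncated at the old state
        w_in -= lr * err * u[k]
        h = h_new
    return w_rec, w_in, sse / 200.0

w_rec, w_in, losses = 0.5, 0.5, []
for _ in range(30):
    w_rec, w_in, mse = run_epoch(w_rec, w_in)
    losses.append(mse)

print(f"epoch-1 MSE {losses[0]:.4f} -> epoch-30 MSE {losses[-1]:.6f}")
print(f"learned weights w_rec={w_rec:.3f}, w_in={w_in:.3f}")  # approach 0.9, 0.1
```

Because the model state is fed back through the recurrence, the gradient at each step depends on the trajectory of past states; the depth-1 truncation used here is the simplest member of the dynamic-gradient family the paper discusses.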
Regression analysis of mixed recurrent-event and panel-count data with additive rate models.
Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L
2015-03-01
Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007, The Statistical Analysis of Recurrent Events. New York: Springer-Verlag; Zhao et al., 2011, Test 20, 1-42). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013, Statistics in Medicine 32, 1954-1963). In this article, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study. © 2014, The International Biometric Society.
Recurrence relations and time evolution in the three-dimensional Sawada model
International Nuclear Information System (INIS)
Lee, M.H.; Hong, J.
1984-01-01
Time-dependent behavior of the three-dimensional Sawada model is obtained by a method of recurrence relations. The exactly calculated quantities are the time evolution of the density-fluctuation operator and of its random force. As an application, their linear coefficients, the relaxation and memory functions, are used to obtain certain dynamic quantities, e.g., the mobility.
Kumaran, Dharshan; McClelland, James L.
2012-01-01
In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus…
Directory of Open Access Journals (Sweden)
Joaquín F. Márquez Pérez
2010-10-01
The investigation presents the results of applying an integrative psychotherapy model to the treatment of recurrent depressive disorder without psychotic symptoms. A qualitative action-research design was used to find the psychological mechanisms that explain the different levels of effectiveness.
Earthquake hazard assessment in the Zagros Orogenic Belt of Iran using a fuzzy rule-based model
Farahi Ghasre Aboonasr, Sedigheh; Zamani, Ahmad; Razavipour, Fatemeh; Boostani, Reza
2017-08-01
Producing an accurate seismic hazard map and predicting hazardous areas is necessary for risk-mitigation strategies. In this paper, a fuzzy logic inference system is utilized to estimate the earthquake potential and seismic zoning of the Zagros Orogenic Belt. In addition to their interpretability, fuzzy predictors can capture both nonlinearity and chaotic behavior of data, even where the amount of data is limited. In this paper, the earthquake pattern in the Zagros has been assessed for intervals of 10 and 50 years using a fuzzy rule-based model. The Molchan statistical procedure has been used to show that our forecasting model is reliable. The earthquake hazard maps for this area reveal some remarkable features that cannot be observed on conventional maps. Notably, some areas in the southern (Bandar Abbas), southwestern (Bandar Kangan), and western (Kermanshah) parts of Iran display high earthquake severity even though they are geographically far apart.
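A minimal sketch of the kind of fuzzy rule-based inference described above. The membership functions, rule base, and input values are invented purely for illustration; the paper's actual Zagros rule base is not reproduced here.

```python
# Toy Mamdani-style fuzzy inference with two hypothetical rules mapping
# seismicity descriptors to a hazard score in [0, 1].
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def hazard_score(b_value, event_rate):
    # Rule 1: IF b-value is LOW AND event rate is HIGH THEN hazard is HIGH (1.0)
    # Rule 2: IF b-value is HIGH AND event rate is LOW THEN hazard is LOW (0.2)
    r1 = min(tri(b_value, 0.4, 0.7, 1.0), tri(event_rate, 5, 10, 15))
    r2 = min(tri(b_value, 0.9, 1.2, 1.5), tri(event_rate, 0, 2, 5))
    if r1 + r2 == 0.0:
        return 0.0  # no rule fires
    return (r1 * 1.0 + r2 * 0.2) / (r1 + r2)  # weighted-average defuzzification

print(hazard_score(0.6, 11))  # low b-value, high rate: hazard 1.0
print(hazard_score(1.2, 1))   # high b-value, low rate: hazard 0.2
```

The appeal noted in the abstract is visible even in this sketch: each rule is individually interpretable, yet the blended output varies smoothly and nonlinearly with the inputs.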
Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.
2008-01-01
We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.
A Markov deterioration model for predicting recurrent maintenance ...
African Journals Online (AJOL)
The parameters of the Markov chain model for predicting the condition of the road at a design period for the flexible pavement failures of wheel-track rutting, cracks, and potholes were developed for the Niger State road network in Nigeria. Twelve sampled candidate roads were each subjected to standard inventory, traffic ...
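The prediction step of such a Markov deterioration model amounts to repeated multiplication of a condition distribution by a transition matrix. The states and probabilities below are invented placeholders, since the abstract does not give the fitted Niger State values.

```python
import numpy as np

# Illustrative one-year transition matrix over pavement condition states
# (Good, Fair, Poor); deterioration only, no self-repair.
P = np.array([[0.80, 0.15, 0.05],   # from Good
              [0.00, 0.70, 0.30],   # from Fair
              [0.00, 0.00, 1.00]])  # Poor is absorbing until maintenance

state = np.array([1.0, 0.0, 0.0])   # a road that starts in Good condition
for year in range(1, 6):
    state = state @ P               # condition distribution after each year
    print(year, state.round(3))
```

Reading off the year at which the Poor probability crosses a maintenance threshold gives the kind of recurrent-maintenance prediction the title refers to.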
Pulinets, S.; Ouzounov, D.
2010-01-01
The paper presents a complex, multidisciplinary approach to clarifying the nature of short-term earthquake precursors observed in the atmosphere, in atmospheric electricity, and in the ionosphere and magnetosphere. Our approach is based on the most fundamental principles of tectonics: an earthquake is the ultimate result of the relative movement of tectonic plates and blocks of different sizes. Different kinds of gases (methane, helium, hydrogen, and carbon dioxide) leaking from the crust, including through underwater seismically active faults, can serve as carrier gases for radon. The action of radon on atmospheric gases is similar to the effect of cosmic rays in the upper atmosphere: air ionization and the formation of water condensation nuclei by ions. Condensation of water vapor is accompanied by the release of latent heat, which is the main cause of the observed atmospheric thermal anomalies. Formation of large ion clusters changes the conductivity of the atmospheric boundary layer and the parameters of the global electric circuit over active tectonic faults. Variations of atmospheric electricity are the main source of ionospheric anomalies over seismically active areas. The Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model can explain most of these events as a synergy between different ground-surface, atmospheric, and ionospheric processes and the anomalous variations usually termed short-term earthquake precursors. A newly developed Interdisciplinary Space-Terrestrial Framework (ISTF) can also provide verification of these precursory processes in seismically active regions. The main outcome of this paper is a unified concept for the systematic validation of different types of earthquake precursors, united on a common physical basis in one theory.
Chakraborty, Suman; Sasmal, Sudipta; Basak, Tamal; Ghosh, Soujan; Palit, Sourav; Chakrabarti, Sandip K.; Ray, Suman
2017-10-01
We present perturbations due to seismo-ionospheric coupling processes in the propagation characteristics of sub-ionospheric Very Low Frequency (VLF) signals received at the Ionospheric & Earthquake Research Centre (IERC) (Lat. 22.50°N, Long. 87.48°E), India. The study covers the period during and prior to an earthquake of Richter-scale magnitude M = 7.3 occurring at a depth of 18 km southeast of Kodari, Nepal, on 12 May 2015 at 12:35:19 IST (07:05:19 UT). The recorded VLF signal of the Japanese transmitter JJI at frequency 22.2 kHz (Lat. 32.08°N, Long. 130.83°E) suffers strong shifts of the sunrise and sunset terminator times towards nighttime, starting three to four days prior to the earthquake. The signal shows a similar variation in terminator times during a major aftershock of magnitude M = 6.7 on 16 May 2015 at 17:04:10 IST (11:34:10 UT). These shifts in terminator times are numerically modeled using the Long Wavelength Propagation Capability (LWPC) program. The unperturbed VLF signal is simulated using the day and night variation of the reflection height (h') and the steepness parameter (β) fed into LWPC for the entire path. The perturbed signal is obtained by additionally varying these parameters inside the earthquake preparation zone. It is found that the shift of the terminator time towards nighttime occurs only when the reflection height is increased. We also calculate the electron density profile using Wait's exponential formula for specified locations over the propagation path.
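The electron-density calculation mentioned above can be sketched with the standard two-parameter Wait (Wait-and-Spies) exponential D-region profile. The h' and β values used here are typical daytime textbook numbers, not the study's fitted values for the JJI-IERC path.

```python
import math

def wait_electron_density(h_km, h_prime=74.0, beta=0.3):
    """Wait exponential D-region profile, N in electrons per cm^3:

        N(h) = 1.43e7 * exp(-0.15*h') * exp[(beta - 0.15) * (h - h')]

    h' is the reflection height in km and beta the sharpness in 1/km;
    these are the two parameters the study perturbs inside the
    earthquake preparation zone.
    """
    return 1.43e7 * math.exp(-0.15 * h_prime) * math.exp(
        (beta - 0.15) * (h_km - h_prime))

for h in (70.0, 74.0, 80.0):
    print(f"h = {h:5.1f} km  N = {wait_electron_density(h):.3e} el/cm^3")
```

Raising h' shifts the whole profile upward in altitude, which is the sense of the pre-earthquake perturbation the modeling identifies.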
Directory of Open Access Journals (Sweden)
Yongsheng Li
2016-06-01
Determining the relationship between crustal movement and faulting in thrust belts is essential for understanding the growth of geological structures and for addressing proposed models of a potential earthquake hazard. A Mw 5.9 earthquake occurred on 21 January 2016 in Menyuan, NE Qinghai-Tibetan Plateau. We combined satellite interferometry from Sentinel-1A Terrain Observation with Progressive Scans (TOPS) images, historical earthquake records, aftershock relocations, and geological data to determine the seismogenic fault geometry and its relationship with the Lenglongling faults. The results indicate that the reverse slip of the 2016 earthquake is distributed on a southwest-dipping, shovel-shaped fault segment. The main-shock rupture was initiated at the deeper part of the fault plane. The focal mechanism of the 2016 earthquake is quite different from that of the previous Ms 6.5 earthquake, which occurred in 1986. Both earthquakes occurred at the two ends of a secondary fault. Joint analysis of the 1986 and 2016 earthquakes and of the aftershock distribution of the 2016 event reveals a close connection with the tectonic deformation of the Lenglongling faults. Both earthquakes resulted from the left-lateral strike-slip of the Lenglongling fault zone and show distinct focal-mechanism characteristics. Under the shearing influence, a normal component is formed at the releasing bend at the western end of the secondary fault, owing to the left-stepping alignment of the fault zone, while a thrust component is formed at the restraining bend at the eastern end, owing to the right-stepping alignment of the fault zone. The seismic activity of this region suggests that the left-lateral strike-slip of the Lenglongling fault zone plays a significant role in the adjustment of tectonic deformation in the NE Tibetan Plateau.
Temporal properties of seismicity and largest earthquakes in SE Carpathians
Directory of Open Access Journals (Sweden)
S. Byrdina
2006-01-01
In order to estimate the hazard-rate distribution of the largest seismic events in Vrancea, South-Eastern Carpathians, we study the temporal properties of historical and instrumental catalogues of seismicity. First, on the basis of Generalized Extreme Value theory, we estimate the average return period of the largest events. Then, following Bak et al. (2002) and Corral (2005a), we study the scaling properties of recurrence times between earthquakes in appropriate spatial volumes. We come to the conclusion that the seismicity is temporally clustered, and that the distribution of recurrence times is significantly different from a Poisson process even for times largely exceeding the corresponding periods of foreshock and aftershock activity. Modeling the recurrence times by a gamma-distributed variable, we finally estimate hazard rates with respect to the time elapsed since the last large earthquake.
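The final step above, estimating a hazard rate from gamma-distributed recurrence times, can be sketched as h(t) = f(t)/S(t). The shape and scale values below are illustrative, not the fitted Carpathian parameters; note that a shape below one yields a hazard that decays with elapsed time, the signature of temporal clustering.

```python
import math

def gamma_pdf(t, k, theta):
    """Gamma density f(t) with shape k and scale theta."""
    return t ** (k - 1.0) * math.exp(-t / theta) / (math.gamma(k) * theta ** k)

def gamma_survival(t, k, theta, n=20000):
    """S(t) = 1 - CDF(t); the CDF is computed by midpoint-rule integration,
    which copes with the integrable singularity of f at t = 0 when k < 1."""
    if t <= 0.0:
        return 1.0
    dt = t / n
    cdf = dt * sum(gamma_pdf((i + 0.5) * dt, k, theta) for i in range(n))
    return max(1.0 - cdf, 1e-12)

def hazard(t, k, theta):
    """Instantaneous event rate given quiescence up to elapsed time t."""
    return gamma_pdf(t, k, theta) / gamma_survival(t, k, theta)

k, theta = 0.7, 30.0  # shape < 1: clustered regime, decreasing hazard
print(hazard(5.0, k, theta), hazard(50.0, k, theta))
```

For a Poisson process (k = 1) the same calculation returns a constant hazard 1/theta, which is exactly the memoryless behavior the abstract rejects for Vrancea.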
A model of seismic focus and related statistical distributions of earthquakes
International Nuclear Information System (INIS)
Apostol, Bogdan-Felix
2006-01-01
A growth model for accumulating seismic energy in a localized seismic focus is described, which introduces a fractional parameter r on geometrical grounds. The model is employed for deriving a power-type law for the statistical distribution in energy, where the parameter r contributes to the exponent, as well as the corresponding time and magnitude distributions for earthquakes. The accompanying seismic activity of foreshocks and aftershocks is discussed in connection with this approach, based on Omori distributions, and the rate of released energy is derived.
International Nuclear Information System (INIS)
Soloviev, A.A.; Vorobieva, I.A.
1995-08-01
A seismically active region is represented as a system of absolutely rigid blocks divided by infinitely thin plane faults. The interaction of the blocks along the fault planes and with the underlying medium is viscoelastic. The system of blocks moves as a consequence of the prescribed motion of the boundary blocks and of the underlying medium. When, for some part of a fault plane, the stress surpasses a certain strength level, a stress drop (a ''failure'') occurs; this can cause failures in other parts of the fault planes. The failures are considered as earthquakes. As a result of the numerical simulation, a synthetic earthquake catalogue is produced. This procedure is applied to the numerical modeling of the dynamics of the block structure approximating the tectonic structure of the Vrancea region. By numerical experiments, values of the model parameters were obtained that give the synthetic earthquake catalogue a spatial distribution of epicenters close to the real distribution of earthquake epicenters in the Vrancea region. The frequency-magnitude relations (Gutenberg-Richter curves) obtained for the synthetic and real catalogues have some common features. The sequences of earthquakes arising in the model are studied for some artificial structures. It is found that ''foreshocks'', ''main shocks'', and ''aftershocks'' can be detected among the earthquakes forming the sequences. The features of aftershocks, foreshocks, and catalogues of main shocks are analysed. (author). 5 refs, 12 figs, 16 tabs
Directory of Open Access Journals (Sweden)
Jiancang Zhuang
2012-07-01
Based on the ETAS (epidemic-type aftershock sequence) model, which describes the features of short-term clustering of earthquake occurrence, this paper presents some theories and techniques for evaluating the probability distribution of the maximum magnitude in a given space-time window, where the Gutenberg-Richter law for the earthquake magnitude distribution cannot be directly applied. It is seen that the distribution of the maximum magnitude in a given space-time volume is determined in the long term by the background seismicity rate and the magnitude distribution of the largest events in each earthquake cluster. The techniques introduced were applied to the seismicity in the Japan region in the period from 1926 to 2009. It was found that the regions most likely to have big earthquakes are along the Tohoku (northeastern Japan) Arc and the Kuril Arc, both with much higher probabilities than the offshore Nankai and Tokai regions.
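In the simplest long-term limit (stationary Poisson occurrence with a Gutenberg-Richter magnitude distribution, before the ETAS clustering corrections discussed above), the maximum-magnitude distribution has a closed form. The rate, b-value, and threshold below are illustrative, not the Japan-catalog estimates.

```python
import math

def p_max_below(m, n0=100.0, b=1.0, m0=4.0):
    """P(Mmax < m) in a window expecting n0 events above threshold m0,
    under Poisson occurrence with Gutenberg-Richter exceedance rates:

        P(Mmax < m) = exp(-N(m)),  N(m) = n0 * 10**(-b * (m - m0))
    """
    return math.exp(-n0 * 10.0 ** (-b * (m - m0)))

for m in (6.0, 7.0, 8.0):
    print(m, round(p_max_below(m), 4))
```

The ETAS-based treatment in the paper replaces the single Poisson rate with the background rate plus the cluster-maximum distribution, but the exponential form of the extreme-value bound is the same.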
Uchide, Takahiko; Song, Seok Goo
2018-03-01
The 2016 Gyeongju earthquake (ML 5.8) was the largest instrumentally recorded inland event in South Korea. It occurred in the southeast of the Korean Peninsula and was preceded by a large ML 5.1 foreshock. The aftershock seismicity data indicate that these earthquakes occurred on two closely collocated parallel faults that are oblique to the surface trace of the Yangsan fault. We investigate the rupture properties of these earthquakes using finite-fault slip inversion analyses. The obtained models indicate that the ruptures propagated NNE-ward and SSW-ward for the main shock and the large foreshock, respectively. This indicates that these earthquakes occurred on right-step faults and were initiated around a fault jog. The stress drops were up to 62 and 43 MPa for the main shock and the largest foreshock, respectively. These high stress drops imply high strength excess, which may be overcome by the stress concentration around the fault jog.
International Nuclear Information System (INIS)
Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.
2015-01-01
Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts, Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features alone. For all models, the performance was significantly higher when image attributes were based solely on the solid components rather than on the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study
Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model
Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,
2013-01-01
In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of
Evaluation of Seismic Rupture Models for the 2011 Tohoku-Oki Earthquake Using Tsunami Simulation
Directory of Open Access Journals (Sweden)
Ming-Da Chiou
2013-01-01
Developing a realistic, three-dimensional rupture model of a large offshore earthquake is difficult to accomplish directly through band-limited ground-motion observations. A potential indirect method is to use a tsunami simulation to verify the rupture model in reverse, because the initial conditions of the associated tsunamis are set by a coseismic seafloor displacement correlating with the rupture pattern along the main faulting. In this study, five well-developed rupture models for the 2011 Tohoku-Oki earthquake were adopted to evaluate differences in the simulated tsunamis and the various rupture asperities. The leading wave of the simulated tsunamis triggered by the seafloor displacement in the Yamazaki et al. (2011) model resulted in the smallest root-mean-squared difference (~0.082 m on average) from the records of the eight DART (Deep-ocean Assessment and Reporting of Tsunamis) stations. This indicates that the main seismic rupture during the 2011 Tohoku earthquake should have occurred as a large shallow slip in a narrow range adjacent to the Japan Trench. This study also quantified the influences of ocean stratification and tides, which are normally overlooked in tsunami simulations. The discrepancy between the simulations with and without stratification was less than 5% of the first peak wave height at the eight DART stations. The simulations run with and without the presence of tides resulted in a ~1% discrepancy in the height of the leading wave. Because simulations accounting for tides and stratification are time-consuming and their influences are negligible, particularly for the first tsunami wave, the two factors can be ignored in tsunami predictions for practical purposes.
Campbell, Ian M; Stewart, Jonathan R; James, Regis A; Lupski, James R; Stankiewicz, Paweł; Olofsson, Peter; Shaw, Chad A
2014-10-02
Most new mutations are observed to arise in fathers, and increasing paternal age positively correlates with the risk of new variants. Interestingly, new mutations in X-linked recessive disease show elevated familial recurrence rates. In male offspring, these mutations must be inherited from mothers. We previously developed a simulation model to consider parental mosaicism as a source of transmitted mutations. In this paper, we extend and formalize the model to provide analytical results and flexible formulas. The results implicate parent of origin and parental mosaicism as central variables in recurrence risk. Consistent with empirical data, our model predicts that more transmitted mutations arise in fathers and that this tendency increases as fathers age. Notably, the lack of expansion later in the male germline results in relatively lower variance in the proportion of mutants, which decreases with paternal age. Subsequently, observation of a transmitted mutation has less impact on the expected risk for future offspring. Conversely, for the female germline, which arrests after clonal expansion in early development, variance in the mutant proportion is higher, and observation of a transmitted mutation dramatically increases the expected risk of recurrence in another pregnancy. Parental somatic mosaicism considerably elevates risk for both parents. These findings have important implications for genetic counseling and for understanding patterns of recurrence in transmission genetics. We provide a convenient online tool and source code implementing our analytical results. These tools permit varying the underlying parameters that influence recurrence risk and could be useful for analyzing risk in diverse family structures. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Modelling end-glacial earthquakes at Olkiluoto. Expansion of the 2010 study
Energy Technology Data Exchange (ETDEWEB)
Faelth, B.; Hoekmark, H. [Clay Technology AB, Lund (Sweden)
2012-02-15
The present report is an extension of Posiva working report 2011-13: 'Modelling end-glacial earthquakes at Olkiluoto'. The modelling methodology and most parameter values are identical to those used in that report. The main objective is the same: to obtain conservative estimates of fracture shear displacements induced by end-glacial earthquakes occurring on verified deformation zones at the Olkiluoto site. The remotely activated rock fractures (with their fracture centres positioned at different distances around the potential earthquake fault being considered) are called 'target fractures'. As in the previous report, all target fractures were assumed to be perfectly planar and circular with a radius of 75 m. Compared to the previous study, the result catalogue is more complete. One additional deformation zone (i.e. potential earthquake fault) has been included (BFZ039), whereas one deformation zone that appeared to produce only insignificant target fracture disturbances (BFZ214) is omitted. For each of the three zones considered here (BFZ021, BFZ039, and BFZ100), four models, each with a different orientation of the target fractures surrounding the fault, are analysed. Three of these four sets were included in the previous report, however not as systematically as here where each of the four fracture orientations is tried in all fracture positions. As in the previous study, seismic moments and moment magnitudes are as high as reasonably possible, given the sizes and orientations of the zones, i.e., the earthquakes release the largest possible amount of strain energy. The strain energy release is restricted only by a low residual fault shear strength applied to suppress post-rupture fault oscillations. Moment magnitudes are: 5.8 (BFZ021), 3.9 (BFZ039) and 4.3 (BFZ100). For the BFZ100 model, the sensitivity of the results to variations in fracture shear strength is checked. The BFZ021 and BFZ100 models are analyzed for two additional in situ stress
Imai, K.; Sugawara, D.; Takahashi, T.
2017-12-01
A large tsunami flow transports sediment from the beach and forms tsunami deposits on land and in coastal lakes. Tsunami deposits are especially well preserved in coastal lakes. Okamura & Matsuoka (2012) found tsunami deposits in a field survey of coastal lakes facing the Nankai Trough and identified deposits from the past eight Nankai Trough megathrust earthquakes. The environment in coastal lakes is stably calm and suitable for the preservation of tsunami deposits compared with other topographic settings such as plains. Therefore, the recurrence interval of megathrust earthquakes and tsunamis can potentially be discussed with high resolution. In addition, it has been pointed out that small events that cannot be detected in plains could be finely separated (Sawai, 2012). Various aspects of past tsunamis are expected to be elucidated by considering the topographic conditions of coastal lakes and using the relationship between the erosion-and-sedimentation process of the lake bottom and the external forcing of the tsunami. In this research, a numerical examination based on a tsunami sediment transport model (Takahashi et al., 1999) was carried out for the Ryujin-ike pond of Ohita, Japan, where a tsunami deposit was identified, and a deposit-migration analysis was conducted on the tsunami deposit distribution process of historical Nankai Trough earthquakes. Furthermore, tsunami source conditions can potentially be investigated by comparing the observed data with the computed tsunami deposit distribution. It is difficult to clarify the details of a tsunami source from the indistinct information of paleogeographic conditions. However, this result shows that the computed deposit distribution in lakes can be used as a constraint on the tsunami source scale.
International Nuclear Information System (INIS)
Peng Yafu
2009-01-01
In this paper, a robust intelligent sliding mode control (RISMC) scheme using an adaptive recurrent cerebellar model articulation controller (RCMAC) is developed for a class of uncertain nonlinear chaotic systems. The RISMC system offers a design approach to drive the state trajectory to track a desired trajectory, and it is comprised of an adaptive RCMAC and a robust controller. The adaptive RCMAC is used to mimic an ideal sliding mode control (SMC) law in the presence of unknown system dynamics, and the robust controller is designed to compensate for the residual approximation error so as to guarantee stability. Moreover, the Taylor linearization technique is employed to derive the linearized model of the RCMAC. All adaptation laws of the RISMC system are derived based on Lyapunov stability analysis and the projection algorithm, so that the stability of the closed-loop system can be guaranteed. Finally, the proposed RISMC system is applied to control a Van der Pol oscillator, a Genesio chaotic system and a Chua's chaotic circuit. The effectiveness of the proposed control scheme is verified by simulation results under unknown system dynamics and external disturbances. In addition, the advantages of the proposed RISMC are indicated in comparison with a conventional SMC system.
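The ideal SMC law that the RCMAC is trained to mimic can be illustrated with a minimal classical sliding mode controller on a Van der Pol oscillator (this is the textbook SMC baseline, not the authors' RISMC/RCMAC scheme; the gains, boundary-layer width and initial state below are illustrative assumptions):

```python
import numpy as np

# Regulate a Van der Pol oscillator x'' = mu*(1 - x^2)*x' - x + u to the origin
# with classical sliding mode control: s = x' + lam*x, u = u_eq - k*tanh(s/eps).
mu, lam, k, eps = 1.0, 1.0, 2.0, 0.01   # assumed plant/controller parameters
x, v = 2.0, 0.0                          # initial state
dt = 1e-3
for _ in range(int(10 / dt)):            # simulate 10 s with explicit Euler
    s = v + lam * x                      # sliding variable
    u_eq = -mu * (1 - x**2) * v + x - lam * v    # cancels the known dynamics
    u = u_eq - k * np.tanh(s / eps)      # smoothed switching term (boundary layer)
    a = mu * (1 - x**2) * v - x + u
    x, v = x + dt * v, v + dt * a

print(abs(x), abs(v))  # both driven close to zero
```

On the sliding surface s = 0 the closed loop reduces to x' = -lam*x, so the state decays exponentially once the reaching phase (driven by the switching term) is over; the tanh boundary layer is the usual remedy for chattering.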
Statistical distributions of earthquakes and related non-linear features in seismic waves
International Nuclear Information System (INIS)
Apostol, B.-F.
2006-01-01
A few basic facts in the science of earthquakes are briefly reviewed. An accumulation, or growth, model is put forward for the focal mechanisms and the critical focal zone of earthquakes, which relates the earthquake average recurrence time to the released seismic energy. The temporal statistical distribution for the average recurrence time is introduced for earthquakes, and, on this basis, the Omori-type distribution in energy is derived, as well as the distribution in magnitude, by making use of the semi-empirical Gutenberg-Richter law relating seismic energy to earthquake magnitude. On geometric grounds, the accumulation model suggests the value r = 1/3 for the Omori parameter in the power-law energy distribution, which leads to β = 1.17 for the coefficient in the Gutenberg-Richter recurrence law, in fair agreement with statistical analysis of the empirical data. Making use of this value, the empirical Båth law is discussed for the average magnitude of the aftershocks (which is 1.2 less than the magnitude of the main seismic shock), by assuming that the aftershocks are relaxation events of the seismic zone. The time distribution of earthquakes with a fixed average recurrence time is also derived, earthquake occurrence prediction is discussed by means of the average recurrence time and the seismicity rate, and the application of this discussion to the seismic region of Vrancea, Romania, is outlined. Finally, a special effect of the non-linear behaviour of seismic waves is discussed, by describing an exact solution derived recently for the elastic wave equation with cubic anharmonicities, its relevance, and its connection to the approximate quasi-plane-wave picture. The properties of the seismic activity accompanying a main seismic shock, both foreshocks and aftershocks, are relegated to forthcoming publications. (author)
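The Gutenberg-Richter recurrence law invoked above is easy to exercise numerically: magnitudes can be drawn by inverse-transform sampling and the coefficient recovered with the Aki maximum-likelihood estimator (a generic illustration of the statistical law, not the author's accumulation model; β = 1.17 is taken from the abstract, the completeness magnitude and sample size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
b, m0, n = 1.17, 4.0, 200_000   # G-R coefficient, completeness magnitude, sample size

# Gutenberg-Richter: P(M >= m) = 10**(-b*(m - m0)); inverse-transform sampling
u = rng.random(n)
mags = m0 - np.log10(u) / b

# Aki (1965) maximum-likelihood estimate of the coefficient
b_hat = np.log10(np.e) / (mags.mean() - m0)
print(round(b_hat, 3))   # close to the input value 1.17
```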
DEFF Research Database (Denmark)
Chon, K H; Hoyer, D; Armoundas, A A
1999-01-01
In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...
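As a baseline for the deterministic estimation step described above, the AR part of a noisy signal can be fitted by ordinary least squares (a conventional linear fit for comparison, not the authors' recurrent-network estimator; the AR(2) coefficients, sample size and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2, n = 0.6, -0.3, 5000     # assumed true AR(2) coefficients and sample size

# Simulate AR(2): x[t] = a1*x[t-1] + a2*x[t-2] + e[t]
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]

# Least-squares fit of the deterministic AR part
X = np.column_stack([x[1:-1], x[:-2]])   # regressors x[t-1], x[t-2]
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # close to (0.6, -0.3)
```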
Of overlapping Cantor sets and earthquakes: analysis of the discrete Chakrabarti-Stinchcombe model
Bhattacharyya, Pratip
2005-03-01
We report an exact analysis of a discrete form of the Chakrabarti-Stinchcombe model for earthquakes (Physica A 270 (1999) 27), which considers a pair of dynamically overlapping finite generations of the Cantor set as a prototype of geological faults. In this model the nth generation of the Cantor set shifts on its replica in discrete steps of the length of a line segment in that generation, and periodic boundary conditions are assumed. We determine the general form of time sequences for constant-magnitude overlaps and, hence, obtain the complete time series of overlaps by the superposition of these sequences for all overlap magnitudes. From the time series we derive the exact frequency distribution of the overlap magnitudes. The corresponding probability distribution of the logarithm of overlap magnitudes for the nth generation is found to assume the form of the binomial distribution for n Bernoulli trials with probability 1/3 for the success of each trial. For an arbitrary pair of consecutive overlaps in the time series where the magnitude of the earlier overlap is known, we find that the magnitude of the later overlap can be determined with a definite probability; the conditional probability for each possible magnitude of the later overlap follows the binomial distribution for k Bernoulli trials with probability 1/2 for the success of each trial, where the number k is determined by the magnitude of the earlier overlap. Although this model does not produce the Gutenberg-Richter law for earthquakes, our results indicate that the fractal structure of faults admits a probabilistic prediction of earthquake magnitudes.
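The exact binomial result quoted above can be verified by brute-force enumeration for a small generation n, building the nth-generation Cantor indicator as a Kronecker power and counting overlaps over all cyclic shifts (a direct check of the discrete model, with the generation number chosen small enough to enumerate):

```python
import numpy as np
from math import comb

n = 3                                     # Cantor-set generation
c = np.array([1, 0, 1])
for _ in range(n - 1):
    c = np.kron(np.array([1, 0, 1]), c)   # nth-generation indicator on 3**n cells

# Overlap magnitude for every cyclic shift of the replica
overlaps = [int(c @ np.roll(c, s)) for s in range(3**n)]
counts = {k: 0 for k in range(n + 1)}
for m in overlaps:
    counts[int(np.log2(m))] += 1          # overlap magnitudes are powers of 2

# Binomial(n, 1/3) prediction: 3**n * C(n,k) * (1/3)**k * (2/3)**(n-k)
predicted = {k: comb(n, k) * 2**(n - k) for k in range(n + 1)}
print(counts, predicted)   # the two distributions coincide
```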
Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment
Rebbapragada, Umaa; Oommen, Thomas
2011-01-01
On January 12th, 2010, a catastrophic magnitude-7.0 earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
Variability of dynamic source parameters inferred from kinematic models of past earthquakes
Causse, M.
2013-12-24
We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving the elastodynamic equations while imposing the slip velocity of a kinematic source model as a boundary condition on the fault plane. This is achieved using a 3-D finite difference method in which the rupture kinematics are modelled with the staggered-grid-split-node fault representation method of Dalguer & Day. Dynamic parameters are then estimated from the calculated stress-slip curves and averaged over the fault plane. Our results indicate that fracture energy, static, dynamic and apparent stress drops tend to increase with magnitude. The epistemic uncertainty due to uncertainties in kinematic inversions remains small (ϕ ∼ 0.1 in log10 units), showing that kinematic source models provide robust information to analyse the distribution of average dynamic source parameters. The proposed scaling relations may be useful to constrain friction law parameters in spontaneous dynamic rupture calculations for earthquake source studies, and physics-based near-source ground-motion prediction for seismic hazard and risk mitigation.
Kouteva, M; Paskaleva, I; Romanelli, F
2003-01-01
An analytical deterministic technique, based on detailed knowledge of the seismic source process and of the propagation of seismic waves, has been applied to generate synthetic seismic signals at Russe, NE Bulgaria, associated with the strongest intermediate-depth Vrancea earthquakes, which occurred during the last century (1940, 1977, 1986 and 1990). The obtained results show that all ground motion components contribute significantly to the seismic loading and that the seismic source parameters influence the shape and the amplitude of the seismic signal. The approach we used proves that realistic seismic input (also at remote distances) can be constructed via waveform modelling, considering all the possible factors influencing the ground motion.
International Nuclear Information System (INIS)
Kouteva, M.; Paskaleva, I.; Panza, G.F.; Romanelli, F.
2003-06-01
An analytical deterministic technique, based on detailed knowledge of the seismic source process and of the propagation of seismic waves, has been applied to generate synthetic seismic signals at Russe, NE Bulgaria, associated with the strongest intermediate-depth Vrancea earthquakes, which occurred during the last century (1940, 1977, 1986 and 1990). The obtained results show that all ground motion components contribute significantly to the seismic loading and that the seismic source parameters influence the shape and the amplitude of the seismic signal. The approach we used proves that realistic seismic input (also at remote distances) can be constructed via waveform modelling, considering all the possible factors influencing the ground motion. (author)
Marchetti, Igor; Koster, Ernst H W; Sonuga-Barke, Edmund J; De Raedt, Rudi
2012-09-01
A neurobiological account of cognitive vulnerability for recurrent depression is presented, based on recent developments in resting-state neural networks. We propose that alterations in the interplay between task-positive (TP) and task-negative (TN) elements of the Default Mode Network (DMN) act as a neurobiological risk factor for recurrent depression, mediated by cognitive mechanisms. In this framework, depression is characterized by an imbalance between TN and TP components, leading to an overpowering of TP by TN activity. The TN-TP imbalance is associated with a dysfunctional internally focused cognitive style as well as a failure to attenuate TN activity in the transition from rest to task. We therefore propose the TN-TP imbalance as an overarching neural mechanism involved in crucial cognitive risk factors for recurrent depression, namely rumination, impaired attentional control, and cognitive reactivity. During remission the TN-TP imbalance persists, predisposing individuals to recurrence. Empirical data supporting this model are reviewed. Finally, we specify how this framework can guide future research efforts.
Lin, Yai-Tin; Kalhan, Ashish Chetan; Lin, Yng-Tzer Joseph; Kalhan, Tosha Ashish; Chou, Chein-Chin; Gao, Xiao Li; Hsu, Chin-Ying Stephen
2018-05-08
Oral rehabilitation under general anaesthesia (GA), commonly employed to treat high caries-risk children, has been associated with high economic and individual/family burden, besides high post-GA caries recurrence rates. As there is no caries prediction model available for paediatric GA patients, this study was performed to build caries risk assessment/prediction models using pre-GA data and to explore mid-term prognostic factors for early identification of high-risk children prone to caries relapse post-GA oral rehabilitation. Ninety-two children were identified and recruited with parental consent before oral rehabilitation under GA. Biopsychosocial data collection at baseline and the 6-month follow-up were conducted using questionnaire (Q), microbiological assessment (M) and clinical examination (C). The prediction models constructed using data collected from Q, Q + M and Q + M + C demonstrated an accuracy of 72%, 78% and 82%, respectively. Furthermore, of the 83 (90.2%) patients recalled 6 months after GA intervention, recurrent caries was identified in 54.2%, together with reduced bacterial counts, lower plaque index and increased percentage of children toothbrushing for themselves (all P < 0.05). Additionally, meal-time and toothbrushing duration were shown, through bivariate analyses, to be significant prognostic determinants for caries recurrence (both P < 0.05). Risk assessment/prediction models built using pre-GA data may be promising in identifying high-risk children prone to post-GA caries recurrence, although future internal and external validation of predictive models is warranted. © 2018 FDI World Dental Federation.
Wu, Yu-Chung; Wei, Nien-Chih; Hung, Jung-Jyh; Yeh, Yi-Chen; Su, Li-Jen; Hsu, Wen-Hu; Chou, Teh-Ying
2017-10-03
Lung cancer mortality remains high even after successful resection. Adjuvant treatment benefits stage II and III patients, but not stage I patients, and most studies fail to predict recurrence in stage I patients. Our study included 211 lung adenocarcinoma patients (stages I-IIIA; 81% stage I) who received curative resections at Taipei Veterans General Hospital between January 2001 and December 2012. We generated a prediction model using 153 samples, with validation using an additional 58 clinical-outcome-blinded samples. Gene expression profiles were generated using formalin-fixed, paraffin-embedded tissue samples and microarrays. Data analysis was performed using a supervised clustering method. The prediction model generated from mixed-stage samples successfully separated patients at high vs. low risk for recurrence. The validation-set hazard ratio (HR = 4.38) was similar to that of the training set (HR = 4.53), indicating a robust training process. Our prediction model successfully distinguished high- from low-risk stage IA and IB patients, with a difference in 5-year disease-free survival between high- and low-risk patients of 42% for stage IA and 45% for stage IB. The model thus identifies lung adenocarcinoma patients at high risk for recurrence who may benefit from adjuvant therapy. Our prediction performance, measured by the difference in disease-free survival between high-risk and low-risk groups, demonstrates more than a twofold improvement over earlier published results.
Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network
Yao, Weigang; Liou, Meng-Sing
2012-01-01
The present study demonstrates the efficacy of a recurrent artificial neural network in providing a high-fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for application to problems of a dynamic nature. The recurrent neural network method [1] is therefore applied to construct a reduced-order model from a series of high-fidelity time-dependent data of aeroelastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived in this study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aeroelastic system analysis.
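The static RBF approximation underlying such a ROM amounts to a linear least-squares fit of Gaussian basis functions to input-output data (a generic RBF fit; the centers, width and target function below are illustrative assumptions, not the paper's aeroelastic training data or its recurrent extension):

```python
import numpy as np

# Training data: a smooth nonlinear input-output map (placeholder for simulation snapshots)
x = np.linspace(-1, 1, 200)
y = np.tanh(2 * x) + 0.3 * x**3

# Gaussian RBF design matrix with fixed centers and width
centers = np.linspace(-1, 1, 15)
sigma = 0.2
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma**2))

# Solve for the output-layer weights in the least-squares sense
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rms = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(rms)   # small residual on the training grid
```

Evaluating `Phi @ w` at new inputs then costs only one matrix-vector product, which is the "fraction of the cost" argument made in the abstract; the recurrent variant additionally feeds delayed outputs back as inputs.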
Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.
2016-01-01
The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and together they left most of the seismic gap unbroken, leaving open the possibility of a future large earthquake in the region.
Stanley, Dal; Villaseñor, Antonio; Benz, Harley
1999-01-01
The Cascadia subduction zone is extremely complex in the western Washington region, involving local deformation of the subducting Juan de Fuca plate and complicated block structures in the crust. It has been postulated that the Cascadia subduction zone could be the source for a large thrust earthquake, possibly as large as M9.0. Large intraplate earthquakes from within the subducting Juan de Fuca plate beneath the Puget Sound region have accounted for most of the energy release in this century and future such large earthquakes are expected. Added to these possible hazards is clear evidence for strong crustal deformation events in the Puget Sound region near faults such as the Seattle fault, which passes through the southern Seattle metropolitan area. In order to understand the nature of these individual earthquake sources and their possible interrelationship, we have conducted an extensive seismotectonic study of the region. We have employed P-wave velocity models developed using local earthquake tomography as a key tool in this research. Other information utilized includes geological, paleoseismic, gravity, magnetic, magnetotelluric, deformation, seismicity, focal mechanism and geodetic data. Neotectonic concepts were tested and augmented through use of anelastic (creep) deformation models based on thin-plate, finite-element techniques developed by Peter Bird, UCLA. These programs model anelastic strain rate, stress, and velocity fields for given rheological parameters, variable crust and lithosphere thicknesses, heat flow, and elevation. Known faults in western Washington and the main Cascadia subduction thrust were incorporated in the modeling process. Significant results from the velocity models include delineation of a previously studied arch in the subducting Juan de Fuca plate. The axis of the arch is oriented in the direction of current subduction and asymmetrically deformed due to the effects of a northern buttress mapped in the velocity models. This
Luján, S; Santamaría, C; Pontones, J L; Ruiz-Cerdá, J L; Trassierra, M; Vera-Donoso, C D; Solsona, E; Jiménez-Cruz, F
2014-12-01
To apply new mathematical models suited to the biological characteristics of non-muscle-invasive bladder carcinoma (NMIBC), enabling accurate risk estimation of multiple recurrences and tumor progression. The classical Cox model is not valid for the assessment of this kind of event because the times between recurrences in the same patient may be strongly correlated. These new models for risk estimation of recurrence/progression lead to individualized monitoring and treatment plans. 960 patients with primary NMIBC were enrolled. The median follow-up was 48.1 (3-160) months. The results obtained were validated in 240 patients from another center. Transurethral resection of the bladder (TURB) and random bladder biopsy were performed, followed by adjuvant localized chemotherapy. The variables analyzed were: tumor number and size, age, chemotherapy and histopathology. The endpoints were time to recurrence and time to progression. The Cox model and its extensions were used as a joint frailty model for multiple recurrence and progression. Model accuracy was calculated using Harrell's concordance index (c-index). 468 (48.8%) patients developed at least one tumor recurrence and tumor progression was reported in 52 (5.4%) patients. The variables for multiple-recurrence risk are: age, grade, number, size, treatment and the number of prior recurrences. These, together with age, stage and grade, are the variables for progression risk. The concordance index was 0.64 for multiple recurrence and 0.85 for progression. The high concordance, together with the validation on an external cohort, allows accurate multiple-recurrence/progression risk estimation. As a consequence, it is possible to schedule an individualized follow-up and treatment plan in new and recurrent NMIBC cases. Copyright © 2014 AEU. Published by Elsevier Espana. All rights reserved.
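Harrell's c-index quoted above can be computed for right-censored data as the fraction of usable pairs whose predicted risks are ordered consistently with their event times (a minimal sketch with made-up toy data, not the study's cohort or its frailty model):

```python
def c_index(times, events, risks):
    """Harrell's c-index: among comparable pairs (the earlier time is an
    observed event), count pairs where the higher predicted risk fails earlier."""
    concordant = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so a is the earlier observation
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue                  # not a usable (comparable) pair
            usable += 1
            if risks[a] > risks[b]:
                concordant += 1
            elif risks[a] == risks[b]:
                ties += 1
    return (concordant + 0.5 * ties) / usable

# Toy data: times to recurrence, event indicator (1 = recurrence, 0 = censored),
# and a model risk score that is mostly, but not perfectly, concordant
times  = [2, 4, 5, 7, 9, 12]
events = [1, 1, 0, 1, 1, 0]
risks  = [0.9, 0.7, 0.6, 0.8, 0.3, 0.1]
print(c_index(times, events, risks))
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect concordance, which is the scale on which the study's 0.64 (multiple recurrence) and 0.85 (progression) should be read.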
A Heuristic Approach to Intra-Brain Communications Using Chaos in a Recurrent Neural Network Model
Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Nara, Shigetoshi
2011-09-01
To approach the functional roles of chaos in the brain, a heuristic model of the mechanisms of intra-brain communication is proposed. The key idea is to use chaos in the firing-pattern dynamics of a recurrent neural network consisting of binary-state neurons as a propagation medium for pulse signals. Computer experiments and numerical methods are introduced to evaluate signal transport characteristics by calculating correlation functions between the sending and receiving neurons of pulse signals.
ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation
Visin, Francesco; Ciccone, Marco; Romero, Adriana; Kastner, Kyle; Cho, Kyunghyun; Bengio, Yoshua; Matteucci, Matteo; Courville, Aaron
2015-01-01
We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNNs that sweep the image horizontally ...
A Self-Consistent Fault Slip Model for the 2011 Tohoku Earthquake and Tsunami
Yamazaki, Yoshiki; Cheung, Kwok Fai; Lay, Thorne
2018-02-01
The unprecedented geophysical and hydrographic data sets from the 2011 Tohoku earthquake and tsunami have facilitated numerous modeling and inversion analyses for a wide range of dislocation models. Significant uncertainties remain in the slip distribution as well as the possible contribution of tsunami excitation from submarine slumping or anelastic wedge deformation. We seek a self-consistent model for the primary teleseismic and tsunami observations through an iterative approach that begins with downsampling of a finite fault model inverted from global seismic records. Direct adjustment of the fault displacement, guided by high-resolution forward modeling of near-field tsunami waveform and runup measurements, improves the features that are not satisfactorily accounted for by the seismic wave inversion. The results show acute sensitivity of the runup to impulsive tsunami waves generated by near-trench slip. The adjusted finite fault model is able to reproduce the DART records across the Pacific Ocean in forward modeling of the far-field tsunami as well as the global seismic records through a finer-scale subfault moment- and rake-constrained inversion, thereby validating its ability to account for the tsunami and teleseismic observations without requiring an exotic source. The upsampled final model gives reasonably good fits to onshore and offshore geodetic observations, albeit with early afterslip effects and wedge faulting that cannot be reliably accounted for. The large predicted slip of over 20 m at shallow depth extending northward to 39.7°N indicates extensive rerupture and reduced seismic hazard of the 1896 tsunami earthquake zone, as inferred to varying extents by several recent joint and tsunami-only inversions.
International Nuclear Information System (INIS)
Helmstetter, A.; Sornette, D.
2002-01-01
The epidemic-type aftershock sequence (ETAS) model is a simple stochastic process modeling seismicity, based on the two best-established empirical laws, the Omori law (power-law decay ∼1/t^(1+θ) of seismicity after an earthquake) and the Gutenberg-Richter law (power-law distribution of earthquake energies). In order to describe also the space distribution of seismicity, we use in addition a power-law distribution ∼1/r^(1+μ) of distances between triggered and triggering earthquakes. The ETAS model has been studied for the last two decades to model real seismicity catalogs and to obtain short-term probabilistic forecasts. Here, we present a mapping between the ETAS model and a class of CTRW (continuous-time random walk) models, based on the identification of their corresponding master equations. This mapping allows us to use the wealth of results previously obtained on anomalous diffusion of CTRW. After translating into the relevant variables for the ETAS model, we provide a classification of the different regimes of diffusion of seismic activity triggered by a mainshock. Specifically, we derive the relation between the average distance between aftershocks and the mainshock as a function of the time from the mainshock, and the joint probability distribution of the times and locations of the aftershocks. The different regimes are fully characterized by the two exponents θ and μ. Our predictions are checked by careful numerical simulations. We stress the distinction between the 'bare' Omori law describing the seismic rate activated directly by a mainshock and the 'renormalized' Omori law taking into account all possible cascades from mainshocks to aftershocks of aftershocks, and so on. In particular, we predict that seismic diffusion or subdiffusion occurs and should be observable only when the observed Omori exponent is less than 1, because this signals the operation of the renormalization of the bare Omori law, also at the origin of seismic diffusion in
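The two power laws that parametrize the ETAS model can be sampled by inverse-transform sampling, which is the basic ingredient of the numerical simulations mentioned above (generic samplers with assumed values of θ, μ and the short-distance cutoffs, not the authors' full cascade code):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, c = 0.2, 1.0    # Omori exponent and time constant (assumed values)
mu_, d = 0.5, 1.0      # spatial exponent and length scale (assumed values)
n = 200_000

# Survival functions S(t) = (c/(c+t))**theta and S(r) = (d/(d+r))**mu_,
# inverted to sample aftershock times and triggering distances
u = rng.random(n)
t = c * (u ** (-1 / theta) - 1)    # ~ 1/t**(1+theta) tail
u = rng.random(n)
r = d * (u ** (-1 / mu_) - 1)      # ~ 1/r**(1+mu_) tail

# Analytic medians: c*(2**(1/theta) - 1) = 31 and d*(2**(1/mu_) - 1) = 3
print(np.median(t), np.median(r))
```

Chaining such draws from triggered event to triggered event generates the cascades whose aggregate behaviour the abstract classifies into diffusion regimes controlled by θ and μ.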
Reches, Ze'ev; Schubert, Gerald; Anderson, Charles
1994-01-01
We analyze the cycle of great earthquakes along the San Andreas fault with a finite element numerical model of deformation in a crust with a nonlinear viscoelastic rheology. The viscous component of deformation has an effective viscosity that depends exponentially on the inverse absolute temperature and nonlinearly on the shear stress; the elastic deformation is linear. Crustal thickness and temperature are constrained by seismic and heat flow data for California. The models are for antiplane strain in a 25-km-thick crustal layer having a very long, vertical strike-slip fault; the crustal block extends 250 km to either side of the fault. During the earthquake cycle, which lasts 160 years, a constant plate velocity v_p/2 = 17.5 mm/yr is applied to the base of the crust and to the vertical end of the crustal block 250 km away from the fault. The upper half of the fault is locked during the interseismic period, while its lower half slips at the constant plate velocity. The locked part of the fault is moved abruptly 2.8 m every 160 years to simulate great earthquakes. The results are sensitive to crustal rheology. Models with quartzite-like rheology display profound transient stages in the velocity, displacement, and stress fields. The predicted transient zone extends about 3-4 times the crustal thickness on each side of the fault, significantly wider than the zone of deformation in elastic models. Models with diabase-like rheology behave similarly to elastic models and exhibit no transient stages. The model predictions are compared with geodetic observations of fault-parallel velocities in northern and central California and local rates of shear strain along the San Andreas fault. The observations are best fit by models which are 10-100 times less viscous than a quartzite-like rheology. Since the lower crust in California is composed of intermediate to mafic rocks, the present result suggests that the in situ viscosity of the crustal rock is orders of magnitude
Signals in the ionosphere generated by tsunami earthquakes: observations and modeling support
Rolland, L.; Sladen, A.; Mikesell, D.; Larmat, C. S.; Rakoto, V.; Remillieux, M.; Lee, R.; Khelfi, K.; Lognonne, P. H.; Astafyeva, E.
2017-12-01
Forecasting systems failed to predict the magnitude of the 2011 great tsunami in Japan due to the difficulty and cost of instrumenting the ocean with high-quality, dense networks. Melgar et al. (2013) showed that using all of the conventional data (inland seismic, geodetic, and tsunami gauges) with the best inversion method still fails to predict the correct height of the tsunami before it breaks onto a coast near the epicenter. Even though typical tsunami waves are only a few centimeters high, they are powerful enough to create atmospheric vibrations extending all the way to the ionosphere, 300 kilometers up in the atmosphere. Therefore, we propose to incorporate ionospheric signals into tsunami early-warning systems. We anticipate that the method could be decisive for mitigating "tsunami earthquakes", which trigger tsunamis larger than expected from their short-period magnitude. These events are challenging to characterize as they rupture the near-trench subduction interface, in a distant region less constrained by onshore data. As a couple of devastating tsunami earthquakes happen per decade, they represent a real threat to onshore populations and a challenge for tsunami early-warning systems. We will present TEC observations of the recent Java 2006 and Mentawai 2010 tsunami earthquakes and base our analysis on acoustic ray tracing, normal-mode summation and the simulation code SPECFEM, which solves the wave equation in coupled acoustic (ocean, atmosphere) and elastic (solid earth) domains. Rupture histories are entered as finite source models, which will allow us to evaluate the effect of a relatively slow rupture on the surrounding ocean and atmosphere.
Modeling earthquake magnitudes from injection-induced seismicity on rough faults
Maurer, J.; Dunham, E. M.; Segall, P.
2017-12-01
It is an open question whether perturbations to the in-situ stress field due to fluid injection affect the magnitudes of induced earthquakes. It has been suggested that characteristics such as the total injected fluid volume control the size of induced events (e.g., Baisch et al., 2010; Shapiro et al., 2011). On the other hand, Van der Elst et al. (2016) argue that the size distribution of induced earthquakes follows Gutenberg-Richter, the same as for tectonic events. Numerical simulations support the idea that ruptures nucleating inside regions with a high ratio of shear to effective normal stress may not propagate into regions with lower stress (Dieterich et al., 2015; Schmitt et al., 2015); however, these calculations were done on geometrically smooth faults. Fang & Dunham (2013) show that rupture length on geometrically rough faults is variable, but strongly dependent on the background ratio of shear to effective normal stress. In this study, we use a 2-D elastodynamic rupture simulator that includes rough fault geometry and off-fault plasticity (Dunham et al., 2011) to simulate earthquake ruptures under realistic conditions. We consider aggregate results for faults with and without stress perturbations due to fluid injection. We model a uniform far-field background stress (with local perturbations around the fault due to its geometry), superimpose a poroelastic stress field in the medium due to injection, and compute the effective stress on the fault as input to the rupture simulator. Preliminary results indicate that even minor stress perturbations on the fault due to injection can have a significant impact on the resulting distribution of rupture lengths, but individual results are highly dependent on the details of the local stress perturbations on the fault due to geometric roughness.
Lubrication pressure and fractional viscous damping effects on the spring-block model of earthquakes
Tanekou, G. B.; Fogang, C. F.; Kengne, R.; Pelap, F. B.
2018-04-01
We examine the dynamical behaviours of the "single mass-spring" model for earthquakes, considering the effects of lubrication pressure on pre-existing faults and of fractional viscous damping. The lubrication pressure supports part of the load, thereby reducing the normal stress and the associated friction across the gap. During the co-seismic phase, not all of the strain accumulated during the inter-seismic period is recovered; a fraction remains as a result of viscous relaxation. Viscous damping friction makes it possible to study rocks at depth possessing visco-elastic behaviours. At increasing depths, rock deformation gradually transitions from brittle to ductile. The fractional derivative is based on the properties of the rocks, including information about previous deformation events (i.e., the so-called memory effect). Increasing the order of the fractional derivative can extend or delay the transition from stick-slip oscillation to a stable equilibrium state, and can even suppress it. For the single-block model, the interaction of the introduced lubrication pressure and viscous damping is found to give rise to oscillation death, which corresponds to aseismic fault behaviour. Our results show that earthquake occurrence increases with increases in both the damping coefficient and the lubrication pressure. We also show that the accumulation of large stresses can be controlled via artificial lubrication.
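A minimal stick-slip sketch of the spring-block model is below; it replaces the paper's fractional-derivative damping with ordinary linear viscous damping, and every parameter value is illustrative rather than taken from the paper:

```python
def spring_block(k=1.0, m=1.0, v_plate=1e-2, mu_s=0.6, mu_d=0.4,
                 sigma_n=1.0, p_lub=0.0, c_damp=0.1, dt=1e-3, steps=200000):
    """Count stick-slip events for a single spring-block slider with linear
    viscous damping. Lubrication pressure p_lub reduces the effective normal
    stress (sigma_n - p_lub), lowering both friction thresholds. The friction
    law is a simple static/dynamic approximation, not the paper's fractional
    formulation."""
    x = v = load = 0.0
    events = 0
    sliding = False
    for _ in range(steps):
        load += v_plate * dt
        spring_force = k * (load - x)
        sigma_eff = sigma_n - p_lub
        if not sliding and spring_force > mu_s * sigma_eff:
            sliding = True            # static friction exceeded: event starts
            events += 1
        if sliding:
            accel = (spring_force - mu_d * sigma_eff - c_damp * v) / m
            v += accel * dt
            x += v * dt
            if v <= 0.0:              # block decelerates to rest: re-stick
                v = 0.0
                sliding = False
    return events
```

With these numbers, raising `p_lub` lowers both the failure threshold and the per-event stress drop, so slip events recur more often, in line with the abstract's statement that occurrence increases with lubrication pressure.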
A common mode of origin of power laws in models of market and earthquake
Bhattacharyya, Pratip; Chatterjee, Arnab; Chakrabarti, Bikas K.
2007-07-01
We show that there is a common mode of origin for the power laws observed in two different models: (i) the Pareto law for the distribution of money among the agents with random-saving propensities in an ideal gas-like market model and (ii) the Gutenberg-Richter law for the distribution of overlaps in a fractal-overlap model for earthquakes. We find that the power laws appear as the asymptotic forms of ever-widening log-normal distributions for the agents’ money and the overlap magnitude, respectively. The identification of the generic origin of the power laws helps in better understanding and in developing generalized views of phenomena in such diverse areas as economics and geophysics.
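The convergence of an ever-widening lognormal toward a 1/x power law can be checked numerically: the local log-log slope of the lognormal density is -1 - (ln x - mu)/sigma^2, which tends to -1 as sigma grows. A short sketch:

```python
import math

def lognormal_pdf(x, mu, sigma):
    """Density of the lognormal distribution."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        x * sigma * math.sqrt(2.0 * math.pi))

def loglog_slope(x, mu, sigma, eps=1e-6):
    """Numerical local slope d(log f)/d(log x); analytically this equals
    -1 - (ln x - mu) / sigma**2, which tends to -1 (a 1/x power law)
    as the distribution widens (sigma -> infinity)."""
    hi = lognormal_pdf(x * (1.0 + eps), mu, sigma)
    lo = lognormal_pdf(x * (1.0 - eps), mu, sigma)
    return (math.log(hi) - math.log(lo)) / (math.log1p(eps) - math.log1p(-eps))

# The slope at a fixed x flattens toward -1 as sigma increases:
for sigma in (1.0, 3.0, 10.0):
    print(sigma, loglog_slope(10.0, 0.0, sigma))
```

Over any fixed observation window, a sufficiently wide lognormal is therefore empirically indistinguishable from a power law, which is the mechanism the abstract identifies in both the market and earthquake models.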
Modal-space reference-model-tracking fuzzy control of earthquake excited structures
Park, Kwan-Soon; Ok, Seung-Yong
2015-01-01
This paper describes an adaptive modal-space reference-model-tracking fuzzy control technique for the vibration control of earthquake-excited structures. In the proposed approach, the fuzzy logic is introduced to update optimal control force so that the controlled structural response can track the desired response of a reference model. For easy and practical implementation, the reference model is constructed by assigning the target damping ratios to the first few dominant modes in modal space. The numerical simulation results demonstrate that the proposed approach successfully achieves not only the adaptive fault-tolerant control system against partial actuator failures but also the robust performance against the variations of the uncertain system properties by redistributing the feedback control forces to the available actuators.
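The modal-space reference model, a single dominant mode assigned a target damping ratio, can be sketched as a one-degree-of-freedom oscillator; the frequency, time step, and damping values below are illustrative, and the fuzzy tracking logic itself is omitted:

```python
import math

def reference_response(zeta, omega=2.0 * math.pi, dt=1e-3, t_end=5.0):
    """Free response of a 1-DOF modal reference model
    x'' + 2*zeta*omega*x' + omega**2 * x = 0 (semi-implicit Euler),
    started from unit displacement. Returns the peak |x| over the
    final second, a simple measure of how fast the response decays."""
    x, v = 1.0, 0.0
    peak_tail = 0.0
    n = int(t_end / dt)
    tail_start = n - int(1.0 / dt)
    for i in range(n):
        v += (-2.0 * zeta * omega * v - omega ** 2 * x) * dt
        x += v * dt
        if i >= tail_start:
            peak_tail = max(peak_tail, abs(x))
    return peak_tail
```

Raising the assigned target damping ratio makes the reference response decay faster; the fuzzy controller's job is to redistribute actuator forces so the actual structure tracks this faster-decaying trajectory.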
Using an Earthquake Simulator to Model Tremor Along a Strike Slip Fault
Cochran, E. S.; Richards-Dinger, K. B.; Kroll, K.; Harrington, R. M.; Dieterich, J. H.
2013-12-01
We employ the earthquake simulator RSQSim to investigate the conditions under which tremor occurs in the transition zone of the San Andreas fault. RSQSim is a computationally efficient method that uses rate- and state-dependent friction to simulate a wide range of event sizes for long time histories of slip [Dieterich and Richards-Dinger, 2010; Richards-Dinger and Dieterich, 2012]; it has previously been used to investigate slow slip events in Cascadia [Colella et al., 2011; 2012]. Earthquake, tremor, slow slip, and creep occurrence are primarily controlled by the rate and state constants a and b and by slip speed. We will report preliminary results of using RSQSim to vary fault frictional properties in order to better understand rupture dynamics in the transition zone, constrained by observed characteristics of tremor along the San Andreas fault. Recent studies of tremor along the San Andreas fault provide information on tremor characteristics including precise locations, peak amplitudes, duration of tremor episodes, and tremor migration. We use these observations to constrain numerical simulations that examine the slip conditions in the transition zone of the San Andreas fault. Here, we use RSQSim to conduct multi-event simulations of tremor for a strike-slip fault modeled on the Cholame section of the San Andreas fault. Tremor was first observed on the San Andreas fault near Cholame, California, near the southern edge of the 2004 Parkfield rupture [Nadeau and Dolenc, 2005]. Since then, tremor has been observed across a 150 km section of the San Andreas at depths between 16 and 28 km, with peak amplitudes that vary by a factor of 7 [Shelly and Hardebeck, 2010]. Tremor episodes, composed of multiple low-frequency earthquakes (LFEs), tend to be relatively short, lasting tens of seconds to as long as 1-2 hours [Horstmann et al., in review, 2013]; tremor occurs regularly, with some tremor observed almost daily [Shelly and Hardebeck, 2010; Horstmann
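The rate- and state-dependent friction underlying RSQSim can be illustrated by its steady-state form and the aging-law state evolution; the reference values mu_0 and v_0 and the a, b pairs below are generic placeholders, not RSQSim's actual inputs:

```python
import math

MU_0, V_0 = 0.6, 1.0e-6   # reference friction and slip speed (illustrative)

def steady_state_mu(v, a, b):
    """Steady-state rate-and-state friction: mu_ss = mu_0 + (a - b)*ln(v/v_0).
    a > b (velocity strengthening) favors stable creep; a < b (velocity
    weakening) permits stick-slip, with tremor thought to occur near the
    transition between the two regimes."""
    return MU_0 + (a - b) * math.log(v / V_0)

def aging_law_step(theta, v, d_c, dt):
    """One Euler step of the aging-law state evolution
    d(theta)/dt = 1 - v*theta/d_c; theta relaxes toward d_c / v."""
    return theta + (1.0 - v * theta / d_c) * dt
```

Sweeping (a - b) across zero in a simulator like RSQSim is what moves a fault patch between creep, slow slip, tremor-like, and ordinary earthquake behavior.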
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has become a very active research direction, especially with the availability of very high resolution (VHR) satellite images, which allow quite detailed, building-scale damage maps to be produced; various studies along these lines have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more challenging when only spectral information is used during classification. To overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural features is generally time consuming, especially for large earthquake-affected areas, due to the size of VHR images. Therefore, to produce a quick damage map, the features most useful for describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Both spectral and textural information were used during classification. For the textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using gray-level co-occurrence matrices with different window sizes and directions. In addition to using spatial features in classification, the features best representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. HDMR was recently proposed as an efficient tool to capture the input
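A gray-level co-occurrence matrix and one second-order Haralick feature (contrast) can be computed in a few lines. This is a generic textbook sketch, not the study's processing chain, and the window/direction handling is reduced to a single pixel offset:

```python
def glcm(image, dx, dy, levels):
    """Gray-level co-occurrence matrix for one offset (dx, dy), unnormalized.
    image is a list of rows of integer gray levels in [0, levels)."""
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def haralick_contrast(m):
    """Haralick contrast: sum_ij p(i,j) * (i - j)**2 over the normalized GLCM."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

# A checkerboard patch (strong local variation, as over rubble) has high
# contrast; a uniform patch (an intact roof) has contrast 0.
flat = [[1] * 8 for _ in range(8)]
check = [[(r + c) % 2 for c in range(8)] for r in range(8)]
```

Computing such features over many windows, distances, and directions is exactly the expensive step that motivates selecting only the most damage-sensitive features in advance.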
Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.
Hardy, N F; Buonomano, Dean V
2018-02-01
Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency (a measure of network interconnectedness) decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
Model for a flexible motor memory based on a self-active recurrent neural network.
Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc
2013-10-01
Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model of a flexible motor memory that stores elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists of an approximately linear inter- and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", the permanent shifting of computational load from higher- to lower-level neural structures, which might help to explain why humans are able to execute learned skills in a fluent and flexible manner without attending to the details of the movement.
Modeling Belt-Servomechanism by Chebyshev Functional Recurrent Neuro-Fuzzy Network
Huang, Yuan-Ruey; Kang, Yuan; Chu, Ming-Hui; Chang, Yeon-Pun
A novel Chebyshev functional recurrent neuro-fuzzy (CFRNF) network is developed from a combination of the Takagi-Sugeno-Kang (TSK) fuzzy model and the Chebyshev recurrent neural network (CRNN). The CFRNF network can emulate the nonlinear dynamics of a servomechanism system. The system nonlinearity is addressed by enhancing the input dimensions of the consequent parts of the fuzzy rules through functional expansion with Chebyshev polynomials. The backpropagation algorithm is used to adjust the parameters of the antecedent membership functions as well as those of the consequent functions. To verify the performance of the proposed CFRNF, experiments on a belt servomechanism are presented in this paper. Identification of the belt servomechanism with an adaptive neuro-fuzzy inference system (ANFIS) and with a recurrent neural network (RNN) is also studied for comparison. The analysis and comparison results indicate that the CFRNF makes identification of complex nonlinear dynamic systems easier, and the identification results verify that its accuracy and convergence are superior to those of ANFIS and RNN.
International Nuclear Information System (INIS)
Li, Dongxi; Xu, Wei; Sun, Chunyan; Wang, Liang
2012-01-01
We investigate how stochastic fluctuations induce competition between tumor extinction and recurrence in a model of tumor growth derived from the catalytic Michaelis–Menten reaction. We analyze the probability of transitions between the extinction state and the stable-tumor state via the Mean First Extinction Time (MFET) and the Mean First Return Time (MFRT). It is found that positional fluctuations hinder the transition, whereas environmental fluctuations, up to a certain level, facilitate tumor extinction. The observed behavior could be used as prior information for the treatment of cancer.
Anatomical Cystocele Recurrence: Development and Internal Validation of a Prediction Model.
Vergeldt, Tineke F M; van Kuijk, Sander M J; Notten, Kim J B; Kluivers, Kirsten B; Weemhoff, Mirjam
2016-02-01
To develop a prediction model that estimates the risk of anatomical cystocele recurrence after surgery, the databases of two multicenter prospective cohort studies were combined and a retrospective secondary analysis of these data was performed. Women undergoing an anterior colporrhaphy without mesh materials and without previous pelvic organ prolapse (POP) surgery filled in a questionnaire, underwent translabial three-dimensional ultrasonography, and underwent staging of POP preoperatively and postoperatively. We developed a prediction model using multivariable logistic regression and internally validated it using standard bootstrapping techniques. The performance of the prediction model was assessed by computing indices of overall performance, discriminative ability, and calibration, and its clinical utility was assessed by computing test characteristics. Of 287 included women, 149 (51.9%) had anatomical cystocele recurrence. Factors included in the prediction model were assisted delivery, preoperative cystocele stage, number of compartments involved, major levator ani muscle defects, and levator hiatal area during Valsalva. Potential predictors excluded after backward elimination because of high P values were age, body mass index, number of vaginal deliveries, and family history of POP. The shrinkage factor resulting from the bootstrap procedure was 0.91. After correction for optimism, Nagelkerke's R² and the Brier score were 0.15 and 0.22, respectively, indicating satisfactory model fit. The area under the receiver operating characteristic curve of the prediction model was 71.6% (95% confidence interval 65.7-77.5); after correction for optimism, it was 69.7%. This prediction model, including history of assisted delivery, preoperative stage, number of compartments, levator defects, and levator hiatus, estimates the risk of anatomical cystocele recurrence.
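The shrinkage correction reported above can be sketched as follows. The abstract gives the predictors and the shrinkage factor (0.91) but not the fitted coefficients, so every coefficient value below is a hypothetical placeholder:

```python
import math

# Hypothetical placeholder coefficients: the abstract names the predictors
# and reports the shrinkage factor, but not the coefficient values.
INTERCEPT = -1.0
COEFS = {"assisted_delivery": 0.5, "cystocele_stage": 0.8,
         "n_compartments": 0.4, "levator_defect": 0.7, "hiatal_area": 0.03}
SHRINKAGE = 0.91  # bootstrap shrinkage factor reported in the abstract

def predicted_risk(x, shrink=SHRINKAGE):
    """Logistic recurrence risk with the coefficient part of the linear
    predictor multiplied by the shrinkage factor, the usual correction for
    optimism after internal validation (in practice the intercept is then
    re-calibrated as well)."""
    lp = sum(COEFS[k] * x[k] for k in COEFS)
    return 1.0 / (1.0 + math.exp(-(INTERCEPT + shrink * lp)))
```

Because the shrinkage factor is below 1, predictions for high-risk profiles are pulled toward the average, compensating for the optimism of coefficients estimated and evaluated on the same data.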
Energy Technology Data Exchange (ETDEWEB)
Foxall, William [Univ. of California, Berkeley, CA (United States)
1992-11-01
Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the central creeping section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models and by surface creep data to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.
Croissant, Thomas; Lague, Dimitri; Davy, Philippe; Steer, Philippe
2016-04-01
In active mountain ranges, large earthquakes (Mw > 5-6) trigger numerous landslides that impact river dynamics. These landslides deliver sudden, localized piles of sediment that are then eroded and transported along the river network, causing downstream changes in river geometry, transport capacity and erosion efficiency. The progressive removal of landslide material has implications for downstream hazard management and for understanding landscape dynamics at the timescale of the seismic cycle. The export time of landslide-derived sediments after large-magnitude earthquakes has been studied from suspended-load measurements, but a full understanding of the process, including the coupling between sediment transfer and channel geometry change, remains elusive. The transport of small sediment pulses has been studied in the context of river restoration, but the magnitude of the pulses generated by landslides may make the problem qualitatively different. Here, we study the export of large volumes (>10⁶ m³) of sediment with the 2D hydro-morphodynamic model Eros. This model uses a new hydrodynamic module that resolves a reduced form of the Saint-Venant equations with a particle method, coupled with a sediment transport model and a lateral and vertical erosion model. Eros accounts for the complex feedbacks between sediment transport and fluvial geometry, with a stochastic description of the floods experienced by the river, and it reproduces several features needed to study the evacuation of large sediment pulses, such as river regime change (single-thread to multi-thread), river avulsion and aggradation, floods and bank erosion. Using a synthetic, simple topography, we first present how grain size, landslide volume and geometry, channel slope and flood frequency influence 1) the dominance of pulse advection vs. diffusion during its evacuation, 2) the pulse export time and 3) the remaining volume of sediment in the catchment
Correa Mora, Francisco
We model surface deformation recorded by GPS stations along the Pacific coasts of Mexico and Central America to estimate the magnitude of and variations in frictional locking (coupling) along the subduction interface, toward a better understanding of seismic hazard in these earthquake-prone regions. The first chapter describes my primary analysis technique, namely 3-dimensional finite element modeling to simulate subduction and bounded-variable inversions that optimize the fit to the GPS velocity field. This chapter focuses on and describes interseismic coupling of the Oaxaca segment of the Mexican subduction zone and introduces an analysis of transient slip events that occur in this region. Our results indicate that coupling is strong within the rupture zone of the 1978 Ms=7.8 Oaxaca earthquake, making this region a potential source of a future large earthquake. However, we also find evidence for significant variations in coupling on the subduction interface over distances of only tens of kilometers, decreasing toward the outer edges of the 1978 rupture zone. In the second chapter, we study in more detail some of the slow slip events that have been recorded over a broad area of southern Mexico, with emphasis on their space-time behavior. Our modeling indicates that transient deformation beneath southern Mexico is focused in two distinct slip patches mostly located downdip from seismogenic areas beneath Guerrero and Oaxaca. Contrary to conclusions reached in one previous study, we find no evidence for a spatial or temporal correlation between transient slip that occurs in these two widely separated source regions. Finally, chapter three extends the modeling techniques to new GPS data in Central America, where subduction coupling is weak or zero and the upper plate deformation is much more complex than in Mexico. Cocos-Caribbean plate convergence beneath El Salvador and Nicaragua is accompanied by subduction and trench-parallel motion of the forearc. Our GPS
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
Robust recurrent neural network modeling for software fault detection and correction prediction
International Nuclear Information System (INIS)
Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.
2007-01-01
Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, and fault correction process is assumed to be a delayed process. On the other hand, the artificial neural networks model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic networks configuration approach is developed with genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are developed with respect to a real data set
Historical earthquake research in Austria
Hammerl, Christa
2017-12-01
Austria has moderate seismicity; on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. A severe earthquake with light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I0 > 8° EMS) occurs significantly less frequently; the average recurrence period is about 75 years. For this reason, historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences, followed by the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the Zwentendorf nuclear power plant should start up. The applied methods and the most important studies are introduced briefly, and, as an example of a recently carried out case study, one of the strongest past earthquakes in Austria, the earthquake of 17 July 1670, is presented. Research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest, but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.
Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène
2016-04-01
The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as slip distribution and rupture history permits estimation of the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparing the signals predicted using both static and kinematic ruptures to the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).
Energy Technology Data Exchange (ETDEWEB)
Dillon, Michael B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kane, Staci R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-03-01
A nuclear explosion has the potential to injure or kill tens to hundreds of thousands (or more) of people through exposure to fallout (external gamma) radiation. Existing buildings can protect their occupants (reducing fallout radiation exposures) by placing material and distance between fallout particles and individuals indoors. Prior efforts have determined an initial set of building attributes suitable to reasonably assess a given building’s protection against fallout radiation. The current work provides methods to determine the quantitative values for these attributes from (a) common architectural features and data and (b) buildings described using the Global Earthquake Model (GEM) taxonomy. These methods will be used to improve estimates of fallout protection for operational US Department of Defense (DoD) and US Department of Energy (DOE) consequence assessment models.
Stein, R. S.
2012-12-01
The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the Century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti, quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by
Invariance in the recurrence of large returns and the validation of models of price dynamics
Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey
2013-08-01
Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
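One plausible reading of the nonparametric excursion definition can be sketched directly. The 95% quantile threshold and the i.i.d. Gaussian baseline below are illustrative choices; the paper's point is that real return series deviate from any such fitted model in their excursion waiting times:

```python
import random

def excursion_waits(returns, q=0.95):
    """Waiting times (in observation steps) between 'excursions', defined
    nonparametrically as returns whose magnitude reaches the q-quantile of
    absolute returns in the sample."""
    ordered = sorted(abs(r) for r in returns)
    thresh = ordered[int(q * (len(ordered) - 1))]
    hits = [i for i, r in enumerate(returns) if abs(r) >= thresh]
    return [later - earlier for earlier, later in zip(hits, hits[1:])]

random.seed(0)
# For i.i.d. Gaussian returns roughly 5% of observations qualify, so the
# mean wait is near 1/0.05 = 20 steps and waits are close to geometric.
waits = excursion_waits([random.gauss(0.0, 1.0) for _ in range(20000)])
```

Real returns cluster in volatility bursts, so their empirical waiting-time distribution departs from this baseline; per the abstract, GARCH models and market-time transformations fail to reproduce that empirical structure.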
A bivariate model for analyzing recurrent multi-type automobile failures
Sunethra, A. A.; Sooriyarachchi, M. R.
2017-09-01
The failure mechanism of an automobile can be defined as a system of multi-type recurrent failures, where failures can arise from various failure modes and are repetitive in that more than one failure can occur from each mode. In analysing such automobile failures, both the time and the type of failure serve as response variables. These two response variables are highly correlated, since the timing of failures is associated with the mode of failure. When response variables are correlated, fitting a multivariate model is preferable to fitting separate univariate models; a bivariate model of time and type of failure is therefore appealing for such data. When there are multiple failure observations for a single automobile, the data cannot be treated as independent, because failure instances of a single automobile are correlated with each other, while failures of different automobiles can be treated as independent. This study therefore proposes a bivariate model with time and type of failure as responses, adjusted for correlated data. The proposed model was formulated following the approaches of shared-parameter models and random-effects models for joining the responses and for representing the correlated data, respectively. It is applied to a sample of automobile failures with three types of failure modes and up to five failure recurrences. The parametric distributions found suitable for the two responses, time to failure and type of failure, were the Weibull distribution and the multinomial distribution, respectively. The bivariate model was implemented in the SAS procedure PROC NLMIXED by programming appropriate likelihood functions. The performance of the bivariate model was compared with separate univariate models fitted to the two responses, and it was identified that better performance is secured by
Recurrent frequency-size distribution of characteristic events
Directory of Open Access Journals (Sweden)
S. G. Abaimov
2009-04-01
Statistical frequency-size (frequency-magnitude) properties of earthquake occurrence play an important role in seismic hazard assessments. The behavior of earthquakes is represented by two different statistics: interoccurrent behavior in a region and recurrent behavior at a given point on a fault (or at a given fault). The interoccurrent frequency-size behavior has been investigated by many authors and generally obeys the power-law Gutenberg-Richter distribution to a good approximation. The recurrent frequency-size behavior is expected to obey different statistics. However, this problem has received little attention because historic earthquake sequences do not contain enough events to reconstruct the necessary statistics. To overcome this lack of data, this paper investigates the recurrent frequency-size behavior for several problems. First, the sequences of creep events on a creeping section of the San Andreas fault are investigated. The applicability of the Brownian passage-time, lognormal, and Weibull distributions to the recurrent frequency-size statistics of slip events is tested, and the Weibull distribution is found to be the best-fit distribution. To verify this result, the behaviors of numerical slider-block and sand-pile models are investigated, and the Weibull distribution is confirmed as the applicable distribution for these models as well. Exponents β of the best-fit Weibull distributions for the observed creep-event sequences and for the slider-block model are found to have similar values, ranging from 1.6 to 2.2, with corresponding aperiodicities C_V of the applied distribution ranging from 0.47 to 0.64. We also note similarities between recurrent time-interval statistics and recurrent frequency-size statistics.
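The aperiodicity C_V quoted above is the coefficient of variation of the fitted Weibull distribution, which depends only on the shape exponent β. A short check that the quoted β range corresponds to the quoted C_V range, using the standard Weibull moment formulas:

```python
import math

def weibull_cv(beta):
    """Coefficient of variation (aperiodicity) of a Weibull distribution
    with shape exponent beta; independent of the scale parameter."""
    g1 = math.gamma(1.0 + 1.0 / beta)   # E[X] / scale
    g2 = math.gamma(1.0 + 2.0 / beta)   # E[X^2] / scale^2
    return math.sqrt(g2 / g1 ** 2 - 1.0)

# shape exponents reported for the creep-event and slider-block sequences
for beta in (1.6, 2.2):
    print(f"beta = {beta}: C_V = {weibull_cv(beta):.2f}")
```

The computed values, roughly 0.64 at β = 1.6 and 0.48 at β = 2.2, match the reported aperiodicity range closely.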
A class of Box-Cox transformation models for recurrent event data.
Sun, Liuquan; Tong, Xingwei; Zhou, Xian
2011-04-01
In this article, we propose a class of Box-Cox transformation models for recurrent event data, which includes the proportional means models as special cases. The new model offers great flexibility in formulating the effects of covariates on the mean functions of counting processes while leaving the stochastic structure completely unspecified. For inference on the proposed models, we apply a profile pseudo-partial likelihood method to estimate the model parameters via estimating equation approaches, establish the large-sample properties of the estimators, and examine their performance in moderate-sized samples through simulation studies. In addition, some graphical and numerical procedures are presented for model checking. The methods are illustrated with multiple-infection data taken from a clinical study on chronic granulomatous disease (CGD).
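For reference, the Box-Cox family that gives the model class its name is sketched below, with the log transform recovered as the λ → 0 limit. This is a minimal illustration of the transformation itself; in the article it is applied to the mean function of the counting process, not to raw data.

```python
import math

def box_cox(x, lam):
    """Box-Cox transformation g(x; lambda) = (x**lambda - 1) / lambda for
    lambda != 0, with the log transform as the continuous limit at 0."""
    if x <= 0:
        raise ValueError("Box-Cox requires x > 0")
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam
```

Setting λ = 1 gives a shifted identity (the proportional means case up to a constant), while small λ approaches the log link.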
Numerical Modeling on Co-seismic Influence of Wenchuan 8.0 Earthquake in Sichuan-Yunnan Area, China
Chen, L.; Li, H.; Lu, Y.; Li, Y.; Ye, J.
2009-12-01
In this paper, a three-dimensional finite element model of the Sichuan-Yunnan area, in which active faults are handled by contact friction elements, is built. Applying boundary conditions determined from GPS data, numerical simulations of the spatial patterns of stress-strain changes induced by the Wenchuan Ms 8.0 earthquake are performed. The primary results are: (a) the co-seismic displacements produced in the Longmen Shan fault zone by the initial cracking event favor not only the NE-directed expansion of the subsequent rupture process but also the conversion of focal mechanisms from thrust to right-lateral strike-slip for most of the following sub-events; (b) tectonic movements induced by the Wenchuan earthquake are stronger in the hanging wall of the Longmen Shan fault belt than in the footwall and are influenced markedly by the northeastern boundary faults of the rhombic block; (c) the extreme stress changes induced by the main shock reach 10^6 Pa over a region about 400 km long and 100 km wide; the total stress level is reduced in most of the Longmen Shan fault zone, whereas the stress change is rather weak in its southwestern segment, possibly resulting in fewer aftershocks there; (d) the effects of the Wenchuan earthquake on the major active faults differ markedly from each other; and (e) the triggering effect of the Wenchuan earthquake on the subsequent Huili 6.1 earthquake is very weak.
Occhipinti, G.; Manta, F.; Rolland, L.; Watada, S.; Makela, J. J.; Hill, E.; Astafieva, E.; Lognonne, P. H.
2017-12-01
Detection of ionospheric anomalies following the Sumatra and Tohoku earthquakes (e.g., Occhipinti 2015) demonstrated that the ionosphere is sensitive to earthquake and tsunami propagation: ground and oceanic vertical displacement induces acoustic-gravity waves that propagate within the neutral atmosphere and are detectable in the ionosphere. Observations supported by modelling have proved that ionospheric anomalies related to tsunamis are deterministic and reproducible by numerical modeling via the ocean/neutral-atmosphere/ionosphere coupling mechanism (Occhipinti et al., 2008). To show that the tsunami signature in the ionosphere is routinely detected, we present perturbations of total electron content (TEC) measured by GPS following tsunamigenic earthquakes from 2004 to 2011 (Rolland et al. 2010, Occhipinti et al., 2013), namely Sumatra (26 December 2004 and 12 September 2007), Chile (14 November 2007), Samoa (29 September 2009) and the recent Tohoku-Oki (11 March 2011). Based on the observations close to the epicenter, mainly performed by GPS networks located in Sumatra, Chile and Japan, we highlight the TEC perturbation observed within the first 8 min after the seismic rupture. This perturbation contains information about the ground displacement, as well as the consequent sea-surface displacement resulting in the tsunami. In addition to GNSS-TEC observations close to the epicenter, new far-field measurements performed by airglow imaging in Hawaii show the propagation of the internal gravity waves induced by the Tohoku tsunami (Occhipinti et al., 2011). This revolutionary imaging technique is today supported by two new observations of moderate tsunamis: Queen Charlotte (M 7.7, 27 October 2013) and Chile (M 8.2, 16 September 2015). We finally detail here our recent work (Manta et al., 2017) on the case of tsunami alert failure following the Mw7.8 Mentawai event (25 October 2010), and its twin tsunami alert response following the Mw7
Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.
2012-01-01
In order to quantify earthquake risk of any selected region or a country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through the consortium of leading institutions and by engaging the domain-experts from multiple countries, the GED4GEM project has been working towards the development of a first comprehensive publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM’s OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status and way forward.
Baddari, Kamel; Makdeche, Said; Bellalem, Fouzi
2013-02-01
Based on the moment magnitude scale, a probabilistic model was developed to predict the occurrence of strong earthquakes in the seismoactive area of Zemmouri, Algeria. Firstly, the distributions of the earthquake magnitudes M_i were described using the distribution function F_0(m), which fits the magnitudes, treated as independent random variables. Secondly, the obtained distribution function F_0(m) of the variables M_i was used to deduce the distribution functions G(x) and H(y) of the variables Y_i = log M_0,i and Z_i = M_0,i, where the (Y_i) and (Z_i) are independent. Thirdly, forecasts for the moments of future earthquakes in the studied area are given.
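The change of variables from magnitude M_i to Y_i = log M_0,i can be made concrete with the standard Hanks-Kanamori moment-magnitude relation. Using this particular scaling is an assumption here; the abstract does not state which relation the authors adopt.

```python
def moment_from_magnitude(mw):
    """Seismic moment M0 (N*m) from moment magnitude Mw via the standard
    Hanks-Kanamori relation: log10(M0) = 1.5 * Mw + 9.1."""
    return 10.0 ** (1.5 * mw + 9.1)

def log_moment(mw):
    """Y = log10(M0), the transformed variable whose distribution G(x)
    is deduced from the magnitude distribution F_0(m)."""
    return 1.5 * mw + 9.1
```

Because Y is a linear function of Mw under this relation, G(x) follows from F_0(m) by a simple change of variables.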
CUFE at SemEval-2016 Task 4: A Gated Recurrent Model for Sentiment Classification
Nabil, Mahmoud
2016-06-16
In this paper we describe a deep learning system built for SemEval 2016 Task 4 (Subtasks A and B). We trained a Gated Recurrent Unit (GRU) neural network model on top of two sets of word embeddings: (a) general word embeddings generated from an unsupervised neural language model; and (b) task-specific word embeddings generated from a supervised neural language model trained to classify tweets into positive and negative categories. We also added a method for analyzing and splitting multi-word hashtags and appending them to the tweet body before feeding it to our model. Our models achieved an F1-measure of 0.58 for Subtask A (ranked 12/34) and a Recall of 0.679 for Subtask B (ranked 12/19).
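For readers unfamiliar with the GRU, the gating computation can be sketched in a few lines. This is a toy scalar-state version under one common convention (the update gate interpolates between the old state and a candidate state); the actual system used vector-valued GRU layers over word embeddings, and the random weights here are purely illustrative.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, W):
    """One GRU step for scalar input x and scalar hidden state h.
    W holds the weights and biases of the update gate z, reset gate r,
    and candidate state (toy sizes for illustration)."""
    z = sigmoid(W['wz'] * x + W['uz'] * h + W['bz'])       # update gate
    r = sigmoid(W['wr'] * x + W['ur'] * h + W['br'])       # reset gate
    h_tilde = math.tanh(W['wh'] * x + W['uh'] * (r * h) + W['bh'])
    return (1.0 - z) * h + z * h_tilde                     # interpolated state

W = {k: random.uniform(-1, 1) for k in
     ('wz', 'uz', 'bz', 'wr', 'ur', 'br', 'wh', 'uh', 'bh')}

h = 0.0
for x in [0.5, -1.0, 2.0, 0.1]:   # a toy "embedded tweet" sequence
    h = gru_step(x, h, W)
```

Because the candidate state is tanh-bounded and the update gate only interpolates, the hidden state stays within (-1, 1) regardless of the input sequence.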
GRACE gravity data help constraining seismic models of the 2004 Sumatran earthquake
Cambiotti, G.; Bordoni, A.; Sabadini, R.; Colli, L.
2011-10-01
The analysis of Gravity Recovery and Climate Experiment (GRACE) Level 2 data time series from the Center for Space Research (CSR) and GeoForschungsZentrum (GFZ) allows us to extract a new estimate of the co-seismic gravity signal due to the 2004 Sumatran earthquake. Using compressible self-gravitating Earth models that include the sea-level feedback in a new self-consistent way and are designed to compute gravitational perturbations due to volume changes separately, we are able to show that the asymmetry in the co-seismic gravity pattern, in which the north-eastern negative anomaly is twice as large as the south-western positive anomaly, is not due to the previously overestimated dilatation in the crust. The overestimate was due to a large dilatation localized at the fault discontinuity, whose gravitational effect is compensated by an opposite contribution from topography due to the uplifted crust. After this localized dilatation is removed, we instead predict compression in the footwall and dilatation in the hanging wall. The overall anomaly is then mainly due to the additional gravitational effects of the ocean after water is displaced away from the uplifted crust, as first indicated by de Linage et al. (2009). We also detail the differences between compressible and incompressible material properties. By focusing on the most robust estimates from GRACE data, consisting of the peak-to-peak gravity anomaly and an asymmetry coefficient given by the ratio of the negative gravity anomaly to the positive anomaly, we show that they are quite sensitive to seismic source depths and dip angles. This allows us to exploit space gravity data for the first time to help constrain centroid-moment-tensor (CMT) source analyses of the 2004 Sumatran earthquake and to conclude that the seismic moment has been released mainly in the lower crust rather than the lithospheric mantle. Thus, GRACE data and CMT source analyses, as well as geodetic slip distributions aided
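The two robust GRACE observables named above are easy to state precisely. A minimal sketch, with illustrative anomaly values rather than the actual GRACE estimates:

```python
def anomaly_summary(profile):
    """Peak-to-peak amplitude and asymmetry coefficient of a co-seismic
    gravity-anomaly profile (a sequence of anomaly values, e.g. microGal).
    The asymmetry is |negative peak| / positive peak, as defined above."""
    g_min, g_max = min(profile), max(profile)
    peak_to_peak = g_max - g_min
    asymmetry = abs(g_min) / g_max
    return peak_to_peak, asymmetry

# illustrative profile only: negative anomaly twice the positive one
p2p, asym = anomaly_summary([-8.0, -6.0, -1.0, 2.0, 4.0])
```

With these toy numbers the asymmetry coefficient is 2, mirroring the two-to-one pattern reported for the Sumatran signal.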
Guo, B.
2017-12-01
Mountain watersheds in Western China are prone to flash floods. The Wenchuan earthquake of May 12, 2008 caused widespread surface destruction and frequent landslides and debris flows, which further exacerbated the flash-flood hazard. Two giant torrent and debris flows occurred after heavy post-earthquake rainfall, one on August 13, 2010 and the other on August 18, 2010. Flash-flood reduction and risk assessment are key issues in post-disaster reconstruction, and hydrological prediction models are important and cost-efficient mitigation tools that are widely applied. In this paper, hydrological observations and simulations using remote sensing data and the WMS model are carried out in a typical flood-hit area, the Longxihe watershed, Dujiangyan City, Sichuan Province, China, and the hydrological response of rainfall runoff is discussed. The results show that the WMS HEC-1 model can simulate well the runoff process of small mountainous watersheds. This methodology can be used in other earthquake-affected areas for risk assessment and for predicting the magnitude of flash floods. Key Words: Rainfall-runoff modeling. Remote Sensing. Earthquake. WMS.
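As a concrete example of the rainfall-runoff computation involved, the SCS curve-number loss method, one of the loss-rate options available in HEC-1, converts a rainfall depth into direct runoff. Whether this particular loss method was used in the study is an assumption; the formula below is the customary inch-based form of the method.

```python
def scs_runoff_depth(p_in, cn):
    """Direct runoff depth (inches) from rainfall depth p_in (inches)
    by the SCS curve-number method: Q = (P - 0.2*S)^2 / (P + 0.8*S),
    with potential maximum retention S = 1000/CN - 10."""
    s = 1000.0 / cn - 10.0   # potential maximum retention (inches)
    ia = 0.2 * s             # initial abstraction
    if p_in <= ia:
        return 0.0           # all rainfall lost to abstraction
    return (p_in - ia) ** 2 / (p_in + 0.8 * s)
```

For example, 5 inches of rain on a CN = 80 watershed yields roughly 2.9 inches of direct runoff; higher curve numbers (more impervious or disturbed surfaces, as after the earthquake) yield more runoff from the same storm.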
Wang, Tongfei; Kang, Xiaomin; He, Liying; Liu, Zhilan; Xu, Haijing; Zhao, Aimin
2017-09-01
To establish a statistical model to predict thrombophilia in patients with unexplained recurrent pregnancy loss (URPL), a retrospective case-control study was conducted at Ren Ji Hospital, Shanghai, China, from March 2014 to October 2016. The levels of D-dimer (DD), fibrinogen degradation products (FDP), activated partial thromboplastin time (APTT), prothrombin time (PT), thrombin time (TT), fibrinogen (Fg), and platelet aggregation in response to arachidonic acid (AA) and adenosine diphosphate (ADP) were collected. Receiver operating characteristic curve analysis was used to analyze data from 158 URPL patients (≥3 previous first-trimester pregnancy losses with unexplained etiology) and 131 non-RPL patients (no history of recurrent pregnancy loss). A logistic regression model (LRM) was built and externally validated in another group of patients. The LRM included AA, DD, FDP, TT, APTT, and PT. At the diagnostic probability threshold of 0.6492, the overall accuracy of the LRM was 80.9%, with a sensitivity of 78.5% and a specificity of 78.3%. On external validation, the LRM achieved an overall accuracy of 83.6%. The LRM is a valuable model for the prediction of thrombophilia in URPL patients. © 2017 International Federation of Gynecology and Obstetrics.
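Applying such a model at the reported threshold amounts to computing a logistic score and comparing it to a cutoff. The sketch below is hypothetical: the coefficient values in a real application would come from the published fit, and the feature names simply mirror the six predictors in the model (AA, DD, FDP, TT, APTT, PT).

```python
import math

def predict_thrombophilia(features, coef, intercept, threshold=0.6492):
    """Score a patient with a fitted logistic regression model and apply
    the reported probability threshold. `features` and `coef` are dicts
    keyed by predictor name; returns (probability, positive_flag)."""
    eta = intercept + sum(coef[k] * features[k] for k in coef)
    prob = 1.0 / (1.0 + math.exp(-eta))     # logistic link
    return prob, prob >= threshold
```

For instance, a single hypothetical predictor with coefficient 0.5, intercept 0, and value 2.0 gives a probability of about 0.731, above the 0.6492 cutoff.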
A reliable simultaneous representation of seismic hazard and of ground shaking recurrence
Peresan, A.; Panza, G. F.; Magrin, A.; Vaccari, F.
2015-12-01
Different earthquake hazard maps may be appropriate for different purposes, such as emergency management, insurance and engineering design. Accounting for the lower occurrence rate of larger sporadic earthquakes may allow the formulation of cost-effective policies in some specific applications, provided that statistically sound recurrence estimates are used, which is not typically the case in PSHA (Probabilistic Seismic Hazard Assessment). We illustrate the procedure for associating the expected ground motions from Neo-deterministic Seismic Hazard Assessment (NDSHA) with an estimate of their recurrence. Neo-deterministic refers to a scenario-based approach, which allows for the construction of a broad range of earthquake scenarios via full-waveform modeling. From the synthetic seismograms, estimates of peak ground acceleration, velocity and displacement, or any other parameter relevant to seismic engineering, can be extracted. NDSHA, in its standard form, defines the hazard computed from a wide set of scenario earthquakes (including the largest deterministically or historically defined credible earthquake, MCE) and does not supply the frequency of occurrence of the expected ground shaking. A recent enhanced variant of NDSHA that reliably accounts for recurrence has been developed and is applied here to the Italian territory. The frequency-magnitude relation can be characterized by any statistically sound method supported by data (e.g. a multi-scale seismicity model), so that a recurrence estimate is associated with each of the pertinent sources. In this way a standard NDSHA map of ground shaking is obtained simultaneously with a map of the corresponding recurrences. The introduction of recurrence estimates in NDSHA naturally allows for the generation of ground-shaking maps at specified return periods, which permits a straightforward comparison between NDSHA and PSHA maps.
Modelling and analysis of canister and buffer for earthquake induced rock shear and glacial load
International Nuclear Information System (INIS)
Hernelind, Jan
2010-08-01
Existing fractures crossing a deposition hole may be activated and sheared by an earthquake. The effect of such a rock shear has been investigated by finite element calculations. The buffer material in a deposition hole acts as a cushion between the canister and the rock, which reduces the effect of a rock shear substantially. A lower buffer density yields a softer material and a reduced effect on the canister; however, at the high density suggested for a repository, the stiffness of the buffer is rather high. The stiffness is also a function of the rate of shear, which means that there may be substantial damage to the canister at very high shear rates. However, the earthquake-induced rock shear velocity is lower than 1 m/s, which is not considered very high. The rock shear has been modelled with finite element calculations using the code Abaqus. A three-dimensional finite element mesh of the buffer and the canister was created, and a rock shear was simulated. The rock shear was assumed to take place either perpendicular to the canister at the quarter point, or at an inclined angle of 22.5 deg in tension. Furthermore, horizontal shear was studied using a vertical shear plane either at the centre or at the quarter point of the canister. The shear calculations were driven to a total shear of 10 cm. The canister also has to be designed to withstand the loads caused by a thick ice sheet; besides rock shear, the model has therefore been used to analyse the effect of such a glacial load (either combined with rock shear or without it). This report also summarizes the effect of considering creep in the copper shell.
Interpretation of correlated neural variability from models of feed-forward and recurrent circuits
2018-01-01
Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930
Using recurrent neural network models for early detection of heart failure onset.
Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng
2017-03-01
We explored whether using deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting the initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28,903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (e.g., disease diagnoses, medication orders, procedure orders) within a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), a multilayer perceptron (MLP) with 1 hidden layer (0.765), a support vector machine (SVM) (0.743), and K-nearest neighbors (KNN) (0.730). With an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations thus appear to improve the detection of incident heart failure with a short observation window of 12-18 months. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
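The AUC figures compared above have a simple rank-based definition: the probability that a randomly chosen case is scored above a randomly chosen control (the Mann-Whitney formulation, with ties counted as one half). A minimal sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation.
    `labels` are 1 for cases and 0 for controls."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0; a classifier no better than chance gives about 0.5, which is the baseline against which values such as 0.777 or 0.883 should be read.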
A systematic comparison of recurrent event models for application to composite endpoints.
Ozga, Ann-Kathrin; Kieser, Meinhard; Rauch, Geraldine
2018-01-04
Many clinical trials focus on the comparison of the treatment effect between two or more groups with respect to a rarely occurring event. In this situation, showing a relevant effect with acceptable power requires the observation of a large number of patients over a long period of time. For feasibility reasons, it is therefore often considered to include several event types of interest, non-fatal or fatal, and to combine them within a composite endpoint. Commonly, a composite endpoint is analyzed with standard survival analysis techniques by assessing the time to the first occurring event. This approach neglects that an individual may experience more than one event, which leads to a loss of information. As an alternative, composite endpoints could be analyzed by models for recurrent events. A number of such models exist, e.g. regression models based on count data or Cox-based models such as the approaches of Andersen and Gill; Prentice, Williams and Peterson; or Wei, Lin and Weissfeld. Although some of these methods have already been compared in the literature, there is no systematic investigation of the special requirements regarding composite endpoints. Within this work, a simulation-based comparison of recurrent event models applied to composite endpoints is provided for different realistic clinical trial scenarios. We demonstrate that the Andersen-Gill model and the Prentice-Williams-Peterson models show similar results under various data scenarios, whereas the Wei-Lin-Weissfeld model delivers effect estimators that can deviate considerably under commonly encountered data scenarios. Based on the conducted simulation study, this paper helps to understand the pros and cons of the investigated methods in the context of composite endpoints and therefore provides recommendations for an adequate statistical analysis strategy and a meaningful interpretation of results.
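The Cox-based approaches named above all operate on data in the counting-process layout of (start, stop, event) intervals. A minimal sketch of the conversion for one patient, assuming gap-free follow-up from time zero; the Andersen-Gill model uses these rows directly, while the Prentice-Williams-Peterson models additionally stratify on the event number.

```python
def to_counting_process(event_times, censor_time):
    """Convert one patient's recurrent-event times into (start, stop, event)
    rows of the counting-process data layout. Each event closes an at-risk
    interval; the final interval ends in censoring (event = 0)."""
    rows, start = [], 0.0
    for t in sorted(event_times):
        if t >= censor_time:
            break
        rows.append((start, t, 1))        # interval ending in an event
        start = t
    rows.append((start, censor_time, 0))  # final censored interval
    return rows
```

A patient with events at times 3 and 7 who is censored at 10 contributes three rows; for a composite endpoint, a type column distinguishing fatal from non-fatal events would typically be added to each row.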
Mass-spring model used to simulate the sloshing of fluid in the container under the earthquake
International Nuclear Information System (INIS)
Wen Jing; Luan Lin; Gao Xiaoan; Wang Wei; Lu Daogang; Zhang Shuangwang
2005-01-01
A lumped mass-spring model is given in ASCE 4-86 to simulate the sloshing of liquid in a container under an earthquake. In this paper, a new mass-spring model is developed within a 3-D finite element model instead of a beam model. The stresses corresponding to the sloshing mass can then be obtained directly, which avoids the construction of a beam model. The paper presents the 3-D mass-spring model for the total overturning moment, together with an example of its use. Moreover, mass-spring models for the overturning moments on the sides and on the bottom of the container are constructed respectively. (authors)
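The spring constants of such an idealization follow from the first convective (sloshing) mode of the liquid. For a rigid vertical cylindrical tank, the standard first-mode frequency is given by the expression below. This is the general Housner-type result underlying code idealizations, quoted as an illustration rather than the exact ASCE 4-86 provision.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def convective_frequency(radius, depth):
    """First-mode sloshing (convective) circular frequency (rad/s) for a
    rigid vertical cylindrical tank of radius R and liquid depth H:
    omega^2 = (1.841 * g / R) * tanh(1.841 * H / R)."""
    lam = 1.841   # first root of the derivative of the Bessel function J1
    omega2 = (lam * G / radius) * math.tanh(lam * depth / radius)
    return math.sqrt(omega2)
```

For deep tanks (H much larger than R) the tanh term saturates and the frequency depends only on the radius; shallow liquid sloshes at a lower frequency.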
Earthquake modelling at the country level using aggregated spatio-temporal point processes
Lieshout, van M.N.M.; Stein, A.
2011-01-01
The goal of this paper is to derive a risk map for earthquake occurrences in Pakistan from a catalogue that contains spatial coordinates of shallow earthquakes of magnitude 4.5 or larger aggregated over calendar years. We test relative temporal stationarity and use the inhomogeneous J–function to
International Nuclear Information System (INIS)
Fujita, Takafumi; Shimosaka, Haruo
1980-01-01
This paper describes the results of an analysis of the response of liquid containers (tanks) to earthquakes. Sine-wave oscillation was applied experimentally to model tanks with legs. A model with one degree of freedom proves adequate for the analysis; to investigate the reason for this, the response multiplication factor of the tank displacement was analysed. The model tanks were rectangular and cylindrical in shape, and the analyses were made using a potential theory. The experimental studies show that the attenuation of the oscillation was non-linear, and a model analysis of this non-linear attenuation was also performed. Good agreement between the experimental and analytical results was recognized. A probability analysis of the response to earthquakes with simulated shock waves was performed using the above-mentioned model, and good agreement between experiment and analysis was obtained. (Kato, T.)
Increased survival rate by local release of diclofenac in a murine model of recurrent oral carcinoma
Directory of Open Access Journals (Sweden)
Will OM
2016-10-01
determination of tumor recurrence. At the end of 7 weeks following tumor resection, 33% of the mice with diclofenac-loaded scaffolds had a recurrent tumor, in comparison to 90%-100% of the mice in the other three groups. At this time point, mice with diclofenac-releasing scaffolds showed an 89% survival rate, while the other groups showed survival rates of 10%-25%. Immunohistochemical staining of recurrent tumors revealed a nearly 10-fold decrease in the proliferation marker Ki-67 in tumors derived from mice with diclofenac-releasing scaffolds. In summary, the local application of diclofenac in an orthotopic mouse tumor-resection model of oral cancer reduced tumor recurrence, with a significant improvement in survival over the 7-week study period following tumor resection. Local drug release of anti-inflammatory agents should be investigated as a therapeutic option for the prevention of tumor recurrence in oral squamous carcinoma. Keywords: tumor recurrence, oral squamous cell carcinoma, head and neck cancer, NSAIDs, drug releasing polymers, mouse model
Irregularities in Early Seismic Rupture Propagation for Large Events in a Crustal Earthquake Model
Lapusta, N.; Rice, J. R.; Rice, J. R.
2001-12-01
We study the early seismic propagation of model earthquakes in a 2-D model of a vertical strike-slip fault with depth-variable rate-and-state friction properties. Our model earthquakes are obtained in fully dynamic simulations of sequences of instabilities on a fault subjected to realistically slow tectonic loading (Lapusta et al., JGR, 2000). This work is motivated by the results of Ellsworth and Beroza (Science, 1995), who observe that for many earthquakes, far-field velocity seismograms during the initial stages of dynamic rupture propagation have irregular fluctuations which constitute a "seismic nucleation phase". In our simulations, we find that such irregularities in velocity seismograms can be caused by two factors: (1) rupture propagation over regions of stress concentration and (2) partial arrest of rupture in neighboring creeping regions. As rupture approaches a region of stress concentration, it sees increasing background stress and its moment acceleration (to which far-field velocity seismograms are proportional) increases. After the peak in stress concentration, the rupture sees decreasing background stress and the moment acceleration decreases; hence a fluctuation in moment acceleration is created. If rupture starts sufficiently far from a creeping region, then partial arrest of the rupture in the creeping region causes a decrease in moment acceleration. As the other parts of the rupture continue to develop, the moment acceleration starts to grow again, and a fluctuation again results. Other factors may cause irregularities in moment acceleration, e.g., phenomena such as branching and/or intermittent rupture propagation (Poliakov et al., submitted to JGR, 2001), which we have not studied here. Regions of stress concentration are created in our model by the arrest of previous smaller events as well as by interactions with creeping regions. One such region is deep in the fault zone, and is caused by the temperature-induced transition from seismogenic to creeping
Barberopoulou, A.; Qamar, A.; Pratt, T.L.; Steele, W.P.
2006-01-01
Analysis of strong-motion instrument recordings in Seattle, Washington, resulting from the 2002 Mw 7.9 Denali, Alaska, earthquake reveals that amplification in the 0.2- to 1.0-Hz frequency band is largely governed by the shallow sediments both inside and outside the sedimentary basins beneath the Puget Lowland. Sites above the deep sedimentary strata show additional seismic-wave amplification in the 0.04- to 0.2-Hz frequency range. Surface waves generated by the Mw 7.9 Denali, Alaska, earthquake of 3 November 2002 produced pronounced water waves across Washington state. The largest water waves coincided with the area of largest seismic-wave amplification underlain by the Seattle basin. In the current work, we present reports that show Lakes Union and Washington, both located on the Seattle basin, are susceptible to large water waves generated by large local earthquakes and teleseisms. A simple model of a water body is adopted to explain the generation of waves in water basins. This model provides reasonable estimates for the water-wave amplitudes in swimming pools during the Denali earthquake but appears to underestimate the waves observed in Lake Union.
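The "simple model of a water body" invoked in the abstract above is not specified. A minimal stand-in, under our own assumptions, is Merian's formula for the seiche period of a closed rectangular basin; the basin length and depth below are illustrative round numbers, not measured values for Lake Union.

```python
import math

def seiche_period(length_m, depth_m, mode=1, g=9.81):
    """Merian's formula for the free-oscillation (seiche) period of a
    closed rectangular basin: T_n = 2 * L / (n * sqrt(g * H))."""
    return 2.0 * length_m / (mode * math.sqrt(g * depth_m))

# Illustrative numbers only (roughly lake scale): L = 2 km, H = 15 m.
T1 = seiche_period(2000.0, 15.0)   # fundamental period, in seconds
```

For these assumed dimensions the fundamental period is of order five minutes, long enough to be excited by teleseismic surface waves.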
Kaneda, Y.; Kawaguchi, K.; Araki, E.; Matsumoto, H.; Nakamura, T.; Nakano, M.; Kamiya, S.; Ariyoshi, K.; Baba, T.; Ohori, M.; Hori, T.; Takahashi, N.; Kaneko, S.; Donet Research; Development Group
2010-12-01
assimilation method using DONET data is very important to improve the recurrence cycle simulation model. 5) Understanding of the interaction between the crust and upper mantle around the Nankai trough subduction zone. We will deploy DONET not only in the Tonankai seismogenic zone but also DONET2, with high-voltage power feeding, in the Nankai seismogenic zone west of the Nankai trough. The total system will be deployed to understand the seismic linkage between the Tonankai and Nankai earthquakes. Using DONET and DONET2 data, we will be able to observe crustal activity and pre- and post-seismic slip associated with the Tonankai and Nankai earthquakes, and we will improve the recurrence cycle simulation model with the advanced data assimilation method. We have already constructed one DONET observatory and observed several earthquakes and tsunamis. We will introduce details of DONET/DONET2 and some observed data.
Volumetric abnormalities of the brain in a rat model of recurrent headache.
Jia, Zhihua; Tang, Wenjing; Zhao, Dengfa; Hu, Guanqun; Li, Ruisheng; Yu, Shengyuan
2018-01-01
Voxel-based morphometry is used to detect structural brain changes in patients with migraine. However, the relationship between migraine and these structural changes is not clear. This study investigated structural brain abnormalities based on voxel-based morphometry using a rat model of recurrent headache. The rat model was established by infusing an inflammatory soup through supradural catheters in conscious male rats. Rats were subgrouped according to the frequency and duration of the inflammatory soup infusion. Tactile sensory testing was conducted prior to infusion of the inflammatory soup or saline. The periorbital tactile thresholds in the high-frequency inflammatory soup stimulation group declined persistently from day 5. Increased white matter volume was observed three weeks after inflammatory soup stimulation: in the brainstem in the low-frequency inflammatory soup-infusion group and in the cortex in the high-frequency inflammatory soup-infusion group. After six weeks of stimulation, rats showed gray matter volume changes. The structural brain abnormalities recovered after stimulation stopped in the low-frequency inflammatory soup-infused rats, but persisted even after the high-frequency inflammatory soup stimulus stopped. The changes of voxel-based morphometry in migraineurs may be the result of recurrent headache. Cognition, memory, and learning may play an important role in the chronification of migraine. Reducing migraine attacks holds promise for preventing migraine chronicity.
Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters
Song, S. G.
2013-12-24
Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling, with its computational efficiency, but also tries to emulate the physics of the source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into the relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of the earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
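The rupture generator described above builds realizations from a covariance matrix encoding target 2-point statistics. A minimal sketch of that idea for a 1-D slip profile, assuming an exponential autocorrelation (the correlation form and all parameter values are our illustrative choices, not the paper's):

```python
import numpy as np

def correlated_slip(n, dx, corr_len, sigma, mean, seed=0):
    """Draw one realization of a slip profile whose 2-point statistic is a
    target exponential autocorrelation exp(-|r|/a), by factorizing the
    covariance matrix with Cholesky and coloring white Gaussian noise.
    All parameter values here are illustrative."""
    x = np.arange(n) * dx
    r = np.abs(x[:, None] - x[None, :])           # pairwise separations
    C = sigma**2 * np.exp(-r / corr_len)          # target covariance matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n)) # jitter for stability
    rng = np.random.default_rng(seed)
    return mean + L @ rng.standard_normal(n)

# One synthetic scenario: 128 points at 0.5 km spacing, 5 km correlation length.
slip = correlated_slip(n=128, dx=0.5, corr_len=5.0, sigma=0.3, mean=1.0)
```

Cross-correlated fields (e.g. slip with rupture velocity) follow the same recipe with a block covariance matrix built from the auto- and cross-correlations.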
DEFF Research Database (Denmark)
Stein, Wilfred D; Litman, Thomas
2006-01-01
appears to be a random event. Inasmuch as the kinetics of cancer recurrence in published data sets closely follows the model found for the appearance of sporadic retinoblastoma, tumor recurrence could be triggered by mutations in awakening-suppressor mechanisms. The retinoblastoma tumor suppressor gene … was identified by tracing its occurrence in familial retinoblastoma pedigrees. Will it be possible to track the postulated cancer recurrence, awakening-suppressor gene(s) in early-recurrence breast cancer patients? …
Numerical modeling of liquefaction-induced failure of geo-structures subjected to earthquakes
International Nuclear Information System (INIS)
Rapti, Ioanna
2016-01-01
The increasing importance of performance-based earthquake engineering analysis points out the necessity to assess quantitatively the risk of liquefaction. In this extreme scenario of soil liquefaction, devastating consequences are observed, e.g. excessive settlements, lateral spreading and slope instability. The present PhD thesis discusses the global dynamic response and interaction of an earth structure-foundation system, so as to determine quantitatively the collapse mechanism due to liquefaction of the foundation soil. As shear band generation is a potential earthquake-induced failure mode in such structures, the FE mesh dependency of the results of dynamic analyses is thoroughly investigated and an existing regularization method is evaluated. The open-source FE software developed by EDF R and D, called Code-Aster, is used for the numerical simulations, while soil behavior is represented by the ECP constitutive model, developed at Centrale-Supelec. Starting from a simplified model of 1D SH wave propagation in a soil column with coupled hydro-mechanical nonlinear behavior, the effects of seismic hazard and the soil's permeability on liquefaction are assessed. Input ground motion is a key factor in the onset of soil liquefaction, as a long main-shock duration can lead to strong nonlinearity and extended soil liquefaction. Moreover, when permeability is allowed to vary as a function of the liquefaction state, changes in the dissipation phase of excess pore water pressure and in material behavior are observed, which do not follow a single trend. The effect of a regularization method with an enhanced kinematics approach, called the first gradient of dilation model, on 1D SH wave propagation is studied through an analytical solution. Deficiencies of this regularization method are observed and discussed, e.g. the appearance of spurious waves in the soil's seismic response. Next, a 2D embankment-type model is simulated and its dynamic response is evaluated in dry, fully drained
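As a toy counterpart to the 1D SH wave propagation step described above, the sketch below solves the purely elastic 1-D SH wave equation in a uniform soil column with an explicit finite-difference scheme. The coupled hydro-mechanical behavior and the ECP model are not reproduced, and all parameters are illustrative.

```python
import numpy as np

def sh_surface_seismogram(nz=100, dz=1.0, vs=200.0, nt=500, dt=0.002):
    """Explicit leapfrog solution of u_tt = vs**2 * u_zz in a uniform
    elastic soil column (a bare-bones, dry-elastic stand-in for the
    thesis's coupled model; all parameters illustrative). A unit Gaussian
    displacement pulse is prescribed at the base; the free surface at
    z = 0 roughly doubles the incident amplitude."""
    c2 = (vs * dt / dz) ** 2
    assert c2 < 1.0, "CFL stability condition"
    t = np.arange(nt) * dt
    src = np.exp(-((t - 0.2) / 0.05) ** 2)   # base displacement pulse
    u_prev = np.zeros(nz)
    u = np.zeros(nz)
    surface = np.zeros(nt)
    for it in range(nt):
        u_next = np.zeros(nz)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + c2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[0] = u_next[1]     # free surface: zero shear stress
        u_next[-1] = src[it]      # prescribed base motion
        u_prev, u = u, u_next
        surface[it] = u[0]
    return surface
```

With vs = 200 m/s and a 100 m column, the pulse reaches the surface after about 0.5 s and arrives with roughly twice its input amplitude, the expected free-surface effect.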
Fleitout, L.; Trubienko, O.; Garaud, J.; Vigny, C.; Cailletaud, G.; Simons, W. J.; Satirapod, C.; Shestakov, N.
2012-12-01
A 3D finite element code (Zebulon-Zset) is used to model deformations through the seismic cycle in the areas surrounding the last three large subduction earthquakes: Sumatra, Japan and Chile. The mesh, featuring a broad spherical shell portion with a viscoelastic asthenosphere, is refined close to the subduction zones. The model is constrained by 6 years of postseismic data in the Sumatra area and over a year of data for Japan and Chile, plus preseismic data in the three areas. The coseismic slip distributions on the subduction plane are inverted from the coseismic displacements using the finite element program and provide the initial stresses. The predicted horizontal postseismic displacements depend upon the thicknesses of the elastic plate and of the low-viscosity asthenosphere. Non-dimensionalized by the coseismic displacements, they present an almost uniform value between 500 km and 1500 km from the trench for elastic plates 80 km thick. The time evolution of the velocities is a function of the creep law (Maxwell, Burgers or power-law creep). Moreover, the forward models predict a sizable far-field subsidence, also with a spatial distribution which varies with the geometry of the asthenosphere and lithosphere. Slip on the subduction interface does not induce such a subsidence. The observed horizontal velocities, divided by the coseismic displacement, present a similar pattern as a function of time and distance from the trench for the three areas, indicative of similar lithospheric and asthenospheric thicknesses and asthenospheric viscosity. This pattern cannot be fitted with power-law creep in the asthenosphere but indicates a lithosphere 60 to 90 km thick and an asthenosphere of thickness of the order of 100 km, with a Burgers rheology represented by a Kelvin-Voigt element with a viscosity of 3×10^18 Pa·s and μ_Kelvin = μ_elastic/3. A second Kelvin-Voigt element with very limited amplitude may explain some characteristics of the short time-scale signal. The postseismic subsidence is
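The Burgers rheology quoted above implies a characteristic Kelvin-Voigt relaxation time τ = η/μ_Kelvin. A quick sketch of that arithmetic, assuming a typical rigidity μ_elastic ≈ 7×10^10 Pa (our assumption; the abstract does not state it):

```python
import math

def kelvin_voigt_strain(t_s, stress_pa, mu_pa, eta_pas):
    """Creep of a Kelvin-Voigt element under a constant stress step:
    eps(t) = (sigma/mu) * (1 - exp(-t/tau)), with relaxation time
    tau = eta/mu."""
    tau = eta_pas / mu_pa
    return (stress_pa / mu_pa) * (1.0 - math.exp(-t_s / tau))

# Time scale implied by the abstract's values, eta = 3e18 Pa s and
# mu_Kelvin = mu_elastic / 3, with an assumed mu_elastic ~ 7e10 Pa:
mu_kelvin = 7e10 / 3.0
tau_years = (3e18 / mu_kelvin) / (365.25 * 24 * 3600.0)
```

Under this assumption the relaxation time comes out at roughly 4 years, consistent with postseismic transients that persist for several years after a great earthquake.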
Kyriakopoulos, Christos; Oglesby, David D.; Funning, Gareth J.; Ryan, Kenneth
2017-01-01
The 2010 Mw 7.2 El Mayor-Cucapah earthquake is the largest event recorded in the broader Southern California-Baja California region in the last 18 years. Here we analyze primary features of this type of event by using dynamic rupture simulations based on a multifault interface and later compare our results with space geodetic models. Our results show that, starting from homogeneous prestress conditions, slip heterogeneity can be achieved as a result of a variable dip angle along strike and the modulation imposed by step-over segments. We also considered effects from a topographic free surface and found that, although this does not produce significant first-order effects for this earthquake, even a low topographic dome such as the Cucapah range can affect the rupture front pattern and fault slip rate. Finally, we inverted available interferometric synthetic aperture radar data, using the same geometry as the dynamic rupture model, and retrieved the space geodetic slip distribution that serves to constrain the dynamic rupture models. The one-to-one comparison of the final fault slip pattern generated with dynamic rupture models and the space geodetic inversion shows good agreement. Our results lead us to the following conclusion: in a possible multifault rupture scenario, and if we have first-order geometry constraints, dynamic rupture models can be very efficient in predicting large-scale slip heterogeneities that are important for the correct assessment of seismic hazard and the magnitude of future events. Our work contributes to understanding the complex nature of multifault systems.
Historic Eastern Canadian earthquakes
International Nuclear Information System (INIS)
Asmis, G.J.K.; Atchinson, R.J.
1981-01-01
Nuclear power plants licensed in Canada have been designed to resist earthquakes; not all plants, however, have been explicitly designed to the same level of earthquake-induced forces. Understanding of the nature of strong ground motion near the source of an earthquake is still very tentative. This paper reviews historical and scientific accounts of the three strongest earthquakes - St. Lawrence (1925), Temiskaming (1935), Cornwall (1944) - that have occurred in Canada in 'modern' times, field studies of near-field strong ground motion records and their resultant damage or non-damage to industrial facilities, and numerical modelling of earthquake sources and resultant wave propagation to produce accelerograms consistent with the above historical record and field studies. It is concluded that for future construction of NPPs near-field strong motion must be explicitly considered in design.
Teamwork tools and activities within the hazard component of the Global Earthquake Model
Pagani, M.; Weatherill, G.; Monelli, D.; Danciu, L.
2013-05-01
The Global Earthquake Model (GEM) is a public-private partnership aimed at supporting and fostering a global community of scientists and engineers working in the fields of seismic hazard and risk assessment. In the hazard sector, in particular, GEM recognizes the importance of local ownership and leadership in the creation of seismic hazard models. For this reason, over the last few years, GEM has been promoting different activities in the context of seismic hazard analysis ranging, for example, from regional projects targeted at the creation of updated seismic hazard studies to the development of a new open-source seismic hazard and risk calculation software called OpenQuake-engine (http://globalquakemodel.org). In this communication we'll provide a tour of the various activities completed, such as the new ISC-GEM Global Instrumental Catalogue, and of currently on-going initiatives like the creation of a suite of tools for the creation of PSHA input models. Discussion, comments and criticism by the colleagues in the audience will be highly appreciated.
Nova outburst modeling and its application to the recurrent nova phenomenon
International Nuclear Information System (INIS)
Sparks, W.M.; Starrfield, S.; Truran, J.
1985-12-01
The thermonuclear runaway (TNR) theory for the cause of the common novae is reviewed. Numerical simulations of this theory were performed using an implicit hydrodynamic Lagrangian computer code. Relevant physical phenomena are explained with the simpler envelope-in-place calculations. Next the models that include accretion are discussed. The calculations agree very well with observations of common novae. The observational differences between common novae and recurrent novae are examined. We propose input parameters to the TNR model which can give the outburst characteristics of RS Ophiuchi and discuss the implications. This review is concluded with a brief discussion of two current topics in novae research: shear mixing on the white dwarf and Neon novae. 36 refs., 4 figs
McCormack, Kimberly A.; Hesse, Marc A.
2018-04-01
We model the subsurface hydrologic response to the Mw 7.6 subduction zone earthquake that occurred on the plate interface beneath the Nicoya peninsula in Costa Rica on September 5, 2012. The regional-scale poroelastic model of the overlying plate integrates seismologic, geodetic and hydrologic data sets to predict the post-seismic poroelastic response. A representative two-dimensional model shows that thrust earthquakes with a slip width less than a third of their depth produce complex multi-lobed pressure perturbations in the shallow subsurface. This leads to multiple poroelastic relaxation timescales that may overlap with the longer viscoelastic timescales. In the three-dimensional model, the complex slip distribution of the 2012 Nicoya event and its small width-to-depth ratio lead to a pore pressure distribution comprising multiple trench-parallel ridges of high and low pressure. This leads to complex groundwater flow patterns, non-monotonic variations in predicted well water levels, and poroelastic relaxation on multiple time scales. The model also predicts significant tectonically driven submarine groundwater discharge offshore. In the weeks following the earthquake, the predicted net submarine groundwater discharge in the study area increases, creating a 100-fold increase in net discharge relative to topography-driven flow over the first 30 days. Our model suggests the hydrological response on land is more complex than typically acknowledged in tectonic studies. This may complicate the interpretation of transient post-seismic surface deformations. Combined tectonic-hydrological observation networks have the potential to reduce such ambiguities.
Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming
2014-12-01
Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may provide more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including genetic algorithm (GA), simulated annealing (SA) algorithm, random forests (RF) and hybrid methods (GA+RF and SA+RF), were utilized for selecting an important subset of features from a total of 16 clinical features. These feature selection methods were combined with support vector machines (SVM) to develop predictive models with better performance. Five-fold cross-validation was used to train and test the SVM models. The developed SVM-based predictive models with hybrid feature selection methods and 5-fold cross-validation had an average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The SVM-derived predictive model can flag patients at high risk of recurrence, who should be closely followed up after complete RFA treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
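The 5-fold cross-validation scheme referenced above reduces to plain index bookkeeping; the SVM itself and the GA/SA/RF feature selection are beyond this illustration. Assuming the study's 83 patients:

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Shuffled 5-fold cross-validation splits: each sample appears in
    exactly one test fold, mirroring the 5-fold scheme used to train and
    test the SVM models (index bookkeeping only; the classifier and
    feature selection steps are not reproduced here)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 5)
    splits = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        splits.append((train, test))
    return splits

splits = five_fold_splits(83)   # 83 patients, as in the study
```

Each of the five (train, test) pairs would be fed to one fit/evaluate round, and the reported metrics averaged over the five rounds.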
Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience.
Bampis, Christos G; Li, Zhi; Katsavounidis, Ioannis; Bovik, Alan C
2018-07-01
Streaming video services represent a very large fraction of global bandwidth consumption. Due to the exploding demands of mobile video streaming services, coupled with limited bandwidth availability, video streams are often transmitted through unreliable, low-bandwidth networks. This unavoidably leads to two types of major streaming-related impairments: compression artifacts and/or rebuffering events. In streaming video applications, the end-user is a human observer; hence being able to predict the subjective Quality of Experience (QoE) associated with streamed videos could lead to the creation of perceptually optimized resource allocation strategies driving higher quality video streaming services. We propose a variety of recurrent dynamic neural networks that conduct continuous-time subjective QoE prediction. By formulating the problem as one of time-series forecasting, we train a variety of recurrent neural networks and non-linear autoregressive models to predict QoE using several recently developed subjective QoE databases. These models combine multiple, diverse neural network inputs, such as predicted video quality scores, rebuffering measurements, and data related to memory and its effects on human behavioral responses, using them to predict QoE on video streams impaired by both compression artifacts and rebuffering events. Instead of finding a single time-series prediction model, we propose and evaluate ways of aggregating different models into a forecasting ensemble that delivers improved results with reduced forecasting variance. We also deploy appropriate new evaluation metrics for comparing time-series predictions in streaming applications. Our experimental results demonstrate improved prediction performance that approaches human performance. An implementation of this work can be found at https://github.com/christosbampis/NARX_QoE_release.
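A deliberately minimal, linear stand-in for the nonlinear autoregressive (NARX) forecasters described above: fit an order-p autoregressive one-step predictor by least squares and forecast the next QoE sample. The signal and the model order are illustrative.

```python
import numpy as np

def fit_ar(series, p=3):
    """Least-squares fit of an order-p linear autoregressive one-step
    forecaster: row t of the design matrix holds (s_t, ..., s_{t+p-1})
    and the target is s_{t+p}. A linear toy version of the paper's
    nonlinear autoregressive (NARX) QoE predictors."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def ar_forecast(series, coef):
    """One-step-ahead forecast from the last p samples."""
    return float(series[-len(coef):] @ coef)

# Toy usage: forecast the next sample of a deterministic signal.
qoe = np.sin(0.3 * np.arange(50))
coef = fit_ar(qoe)
next_value = ar_forecast(qoe, coef)
```

The paper's ensembles would combine several such forecasters (with nonlinear maps and exogenous inputs like rebuffering indicators) rather than a single linear model.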
Faults self-organized by repeated earthquakes in a quasi-static antiplane crack model
Directory of Open Access Journals (Sweden)
D. Sornette
1996-01-01
We study a 2D quasi-static discrete crack anti-plane model of a tectonic plate with long-range elastic forces and quenched disorder. The plate is driven at its border and the load is transferred to all elements through elastic forces. This model can be considered as belonging to the class of self-organized models which may exhibit spontaneous criticality, with four additional ingredients compared to sandpile models, namely quenched disorder, boundary driving, long-range forces and fast-time crack rules. In this 'crack' model, as in the 'dislocation' version previously studied, we find that the occurrence of repeated earthquakes organizes the activity on well-defined fault-like structures. In contrast with the 'dislocation' model, after a transient the time evolution becomes periodic, with run-aways ending each cycle. This stems from the 'crack' stress transfer rule, which prevents criticality from organizing, in favour of cyclic behaviour. For sufficiently large disorder and weak stress drop, these large events are preceded by a complex space-time history of foreshock activity, characterized by a Gutenberg-Richter power-law distribution with universal exponent B = 1±0.05. This is similar to a power-law distribution of small nucleating droplets before the nucleation of the macroscopic phase in a first-order phase transition. For large disorder and large stress drop, and for certain specific initial disorder configurations, the stress field becomes frustrated in fast time: out-of-plane deformations (thrust and normal faulting) and/or a genuine dynamics must be introduced to resolve this frustration.
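The Gutenberg-Richter exponent B ≈ 1 quoted above can be estimated from any event catalogue with the standard Aki maximum-likelihood formula; a sketch (the model's own synthetic catalogues are not reproduced here):

```python
import math

def aki_b_value(mags, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness level m_min:
    b = log10(e) / (mean(M) - m_min)."""
    usable = [m for m in mags if m >= m_min]
    return math.log10(math.e) / (sum(usable) / len(usable) - m_min)
```

A catalogue whose magnitudes exceed completeness by log10(e) ≈ 0.434 on average yields b = 1, the value the model recovers for its foreshock statistics.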
Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.
2014-12-01
The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-Net and Kik-Net) revealed complex ground motion patterns attributed to source effects, allowing detailed information on the rupture process to be captured. The seismic stations in the Miyagi region (e.g. MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong-motion data performed by Lee et al. (2011). In this model two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots on fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeating slip in large earthquakes may occur due to frictional melting and thermal fluid pressurization effects. Kanamori & Heaton (2002) argued that during faulting of large earthquakes the temperature rises high enough to cause melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip-reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on slip-weakening friction with two sudden sequential stress drops. Our model starts like an M7-8 earthquake that only weakly breaks the region near the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The resulting sea floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea floor displacement reaches 8-10 meters of
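The slip-weakening friction underlying the rupture model above has a simple closed form; a sketch with illustrative values (not those of the Tohoku simulation), where the paper's reactivation mechanism would chain two such weakening stages:

```python
def slip_weakening_strength(slip_m, tau_s=10.0e6, tau_d=4.0e6, d_c=0.4):
    """Linear slip-weakening friction: fault strength falls from the
    static level tau_s to the dynamic level tau_d over the critical slip
    distance d_c, then stays at tau_d. All values are illustrative
    placeholders, not the Tohoku model's parameters."""
    frac = min(slip_m / d_c, 1.0)
    return tau_s - (tau_s - tau_d) * frac
```

In the paper's two-stage model, a second, delayed drop of the dynamic strength near the trench triggers the renewed slip 40 seconds into the rupture.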
Jover-Esplá, Ana Gabriela; Palazón-Bru, Antonio; Folgado-de la Rosa, David Manuel; Severá-Ferrándiz, Guillermo; Sancho-Mestre, Manuela; de Juan-Herrero, Joaquín; Gil-Guillén, Vicente Francisco
2018-05-01
The existing predictive models of laryngeal cancer recurrence present limitations for clinical practice. Therefore, we constructed, internally validated and implemented in a mobile application (Android) a new model based on a points system taking into account the internationally recommended statistical methodology. This longitudinal prospective study included 189 patients with glottic cancer in 2004-2016 in a Spanish region. The main variable was time-to-recurrence, and its potential predictors were: age, gender, TNM classification, stage, smoking, alcohol consumption, and histology. A points system was developed to predict five-year risk of recurrence based on a Cox model. This was validated internally by bootstrapping, determining discrimination (C-statistics) and calibration (smooth curves). A total of 77 patients presented recurrence (40.7%) in a mean follow-up period of 3.4 ± 3.0 years. The factors in the model were: age, lymph node stage, alcohol consumption and stage. Discrimination and calibration were satisfactory. A points system was developed to obtain the probability of recurrence of laryngeal glottic cancer in five years, using five clinical variables. Our system should be validated externally in other geographical areas. Copyright © 2018 Elsevier Ltd. All rights reserved.
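A points system derived from a Cox model, as described above, converts a patient's points total into an absolute risk via risk = 1 − S0(t)^exp(β·points). The sketch below uses invented placeholder coefficients, not the published laryngeal-cancer model:

```python
import math

def five_year_recurrence_risk(points, base_survival=0.70, beta_per_point=0.15):
    """Cox-model points-to-risk conversion:
    risk = 1 - S0(5)**exp(beta * points).
    base_survival (baseline five-year recurrence-free survival) and
    beta_per_point (log-hazard per point) are made-up placeholders."""
    return 1.0 - base_survival ** math.exp(beta_per_point * points)
```

With real coefficients, the mobile application described in the study would evaluate exactly this expression after summing the points assigned to age, lymph node stage, alcohol consumption and stage.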
Tsuboi, S.; Nakamura, T.; Miyoshi, T.
2015-12-01
The May 30, 2015 Bonin Islands, Japan earthquake (Mw 7.8, depth 679.9 km, GCMT) was one of the deepest earthquakes ever recorded. We apply the waveform inversion technique (Kikuchi & Kanamori, 1991) to obtain the slip distribution on the source fault of this earthquake in the same manner as in our previous work (Nakamura et al., 2010). We use 60 broadband seismograms from IRIS GSN seismic stations at epicentral distances between 30 and 90 degrees. The broadband original data are integrated into ground displacement and band-pass filtered in the frequency band 0.002-1 Hz. We use the velocity structure model IASP91 to calculate the wavefield near the source and stations. We assume a square fault with a side length of 50 km. We obtain source rupture models for both nodal planes, with high dip angle (74 degrees) and low dip angle (26 degrees), and compare the synthetic seismograms with the observations to determine which source rupture model explains the observations better. We calculate broadband synthetic seismograms for these source rupture models using the spectral-element method (Komatitsch & Tromp, 2001). We use the new Earth Simulator system at JAMSTEC to compute synthetic seismograms using the spectral-element method. The simulations are performed on 7,776 processors, which require 1,944 nodes of the Earth Simulator. On this number of nodes, a simulation of 50 minutes of wave propagation accurate at periods of 3.8 seconds and longer requires about 5 hours of CPU time. Comparisons of the synthetic waveforms with the observations at teleseismic stations show that the arrival time of the pP wave calculated for a depth of 679 km matches well with the observation, which demonstrates that the earthquake really happened below the 660 km discontinuity. In our present forward simulations, the source rupture model with the low dip angle is likely to explain the observations better.
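The processing step quoted above (integrate broadband velocity records to displacement, then band-pass 0.002-1 Hz) can be sketched crudely with a trapezoid integration and a hard FFT mask; real workflows use proper causal filters and instrument deconvolution, so this is only a stand-in:

```python
import numpy as np

def velocity_to_bandpassed_displacement(vel, dt, f_lo=0.002, f_hi=1.0):
    """Integrate a velocity seismogram to displacement (cumulative
    trapezoid rule) and band-pass it to f_lo..f_hi Hz with a hard
    spectral mask. A crude stand-in for the abstract's processing."""
    disp = np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt)))
    spec = np.fft.rfft(disp)
    freq = np.fft.rfftfreq(disp.size, dt)
    spec[(freq < f_lo) | (freq > f_hi)] = 0.0   # zero out-of-band bins
    return np.fft.irfft(spec, n=disp.size)
```

The high-pass corner at 0.002 Hz also removes the integration drift (DC offset) that the trapezoid step introduces.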
A dynamic model for slab development associated with the 2015 Mw 7.9 Bonin Islands deep earthquake
Zhan, Z.; Yang, T.; Gurnis, M.
2016-12-01
The 680 km deep May 30, 2015 Mw 7.9 Bonin Islands earthquake is isolated from the nearest earthquakes by more than 150 km. The geodynamic context leading to this isolated deep event is unclear. Tomographic models and seismicity indicate that the morphology of the west-dipping Pacific slab changes rapidly along the strike of the Izu-Bonin-Mariana trench. To the north, the Izu-Bonin section of the Pacific slab lies horizontally above the 660 km discontinuity and extends more than 500 km westward. Several degrees south, the Mariana section dips vertically and penetrates directly into the lower mantle. The observed slab morphology is consistent with plate reconstructions suggesting that the northern section of the IBM trench retreated rapidly since the late Eocene while the southern section of the IBM trench was relatively stable during the same period. We suggest that the location of the isolated 2015 Bonin Islands deep earthquake can be explained by the buckling of the Pacific slab beneath the Bonin Islands. We use geodynamic models to investigate the slab morphology, temperature and stress regimes under different trench motion histories. Models confirm previous results that the slab often lies horizontally within the transition zone when the trench retreats, but buckles when the trench position becomes fixed with respect to the lower mantle. We show that a slab-buckling model is consistent with the observed deep earthquake P-axis directions (assumed to be the axis of principal compressional stress) regionally. The influences of various physical parameters on slab morphology, temperature and stress regime are investigated. In the models investigated, the horizontal width of the buckled slab is no more than 400 km.
International Nuclear Information System (INIS)
Cioflan, C.O.; Apostol, B.F.; Moldoveanu, C.L.; Marmureanu, G.; Panza, G.F.
2002-03-01
The mapping of the seismic ground motion in Bucharest due to the strong Vrancea earthquakes is carried out using a complex hybrid waveform modeling method which combines the modal summation technique, valid for laterally homogeneous anelastic media, with the finite-differences technique, and optimizes the advantages of both methods. For recent earthquakes, it is possible to validate the modeling by comparing the synthetic seismograms with the records. As controlling records we consider the accelerograms of the Magurele station, low-pass filtered with a cut-off frequency of 1.0 Hz, of the last three major strong (Mw > 6) Vrancea earthquakes. Using the hybrid method with a double-couple seismic source approximation, scaled for the source dimensions, and relatively simple regional (bedrock) and local structure models, we succeeded in reproducing the recorded ground motion in Bucharest at a level satisfactory for seismic engineering. Extending the modeling to the whole territory of the Bucharest area, we construct a new seismic microzonation map, where five different zones are identified by their characteristic response spectra. (author)
Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.
2018-03-01
Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap lower than 180° and distance to the first station lower than 15 km). Errors on velocity models and the accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic experiments, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software, based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors on velocity models and on arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.
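The Equal Differential Time (EDT) principle used by NLLoc can be illustrated with a toy 2-D grid search in a uniform velocity model: because only arrival-time differences between station pairs enter the misfit, the unknown origin time cancels. This is our simplified sketch, not NLLoc itself:

```python
import numpy as np

def edt_locate(stations, t_obs, xs, ys, v_km_s=6.0):
    """Grid search for the epicentre minimizing the Equal Differential
    Time (EDT) misfit: for every station pair (i, j), compare the observed
    arrival-time difference with the predicted travel-time difference.
    Uniform velocity, 2-D only; a toy version of the NLLoc approach."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    tt = [np.hypot(X - sx, Y - sy) / v_km_s for sx, sy in stations]
    misfit = np.zeros_like(X)
    n = len(stations)
    for i in range(n):
        for j in range(i + 1, n):
            misfit += (tt[i] - tt[j] - (t_obs[i] - t_obs[j])) ** 2
    k = np.unravel_index(np.argmin(misfit), misfit.shape)
    return X[k], Y[k]
```

With exact synthetic picks the minimum falls on the true epicentre even though the picks carry an arbitrary origin-time offset; NLLoc adds a 3-D grid, a probabilistic posterior, and real velocity models on top of this idea.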
International Nuclear Information System (INIS)
2002-01-01
This report describes the simplified models for predicting the response of high-damping natural rubber bearings (HDNRB) to earthquake ground motions and benchmark problems for assessing the accuracy of finite element analyses in designing base-isolators. (author)
Rupture Complexity Promoted by Damaged Fault Zones in Earthquake Cycle Models
Idini, B.; Ampuero, J. P.
2017-12-01
Pulse-like ruptures tend to be more sensitive to stress heterogeneity than crack-like ones. For instance, a stress barrier can more easily stop the propagation of a pulse than that of a crack. While crack-like ruptures tend to homogenize the stress field within their rupture area, pulse-like ruptures develop heterogeneous stress fields. This feature of pulse-like ruptures can potentially lead to complex seismicity with a wide range of magnitudes, akin to the Gutenberg-Richter law. Previous models required a friction law with severe velocity weakening to develop pulses and complex seismicity. Recent dynamic rupture simulations show that the presence of a damaged zone around a fault can induce pulse-like rupture even under a simple slip-weakening friction law, although the mechanism depends strongly on initial stress conditions. Here we test whether fault zone damage is a sufficient ingredient to generate complex seismicity. In particular, we investigate the effects of damaged fault zones on the emergence and sustainability of pulse-like ruptures throughout multiple earthquake cycles, regardless of initial conditions. We consider a fault bisecting a homogeneous low-rigidity layer (the damaged zone) embedded in an intact medium. We conduct a series of earthquake cycle simulations to investigate the effects of two fault zone properties: damage level D and thickness H. The simulations are based on classical rate-and-state friction, the quasi-dynamic approximation and the software QDYN (https://github.com/ydluo/qdyn). Selected fully dynamic simulations are also performed with a spectral element method. Our numerical results show the development of complex rupture patterns in some damaged fault configurations, including events of different sizes as well as pulse-like, multi-pulse and hybrid pulse-crack ruptures. We further apply elasto-static theory to assess how D and H affect ruptures with constant stress drop, in particular the flatness of their slip profile.
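The "classical rate-and-state friction" underlying such cycle simulations can be summarized in a few lines. This sketch uses the Dieterich aging law with illustrative parameter values (not those of the study); the velocity-weakening condition a < b is what permits stick-slip instabilities:

```python
import numpy as np

# Rate-and-state friction (Dieterich aging law), as used by
# quasi-dynamic cycle codes such as QDYN. Values are illustrative.
mu0, V0 = 0.6, 1e-6            # reference friction and slip rate (m/s)
a, b, Dc = 0.010, 0.015, 1e-4  # a < b: velocity-weakening (seismogenic)

def friction(V, theta):
    """mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)"""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def theta_dot(V, theta):
    """Aging law for the state variable: d(theta)/dt = 1 - V*theta/Dc"""
    return 1.0 - V * theta / Dc

def mu_ss(V):
    """Steady state (theta = Dc/V): mu = mu0 + (a - b)*ln(V/V0).
    With a < b, friction drops as slip accelerates: the instability
    that lets earthquakes nucleate."""
    return mu0 + (a - b) * np.log(V / V0)
```

In a cycle simulation these relations are coupled to an elastic loading rule and integrated over many events; damaged-zone compliance enters through that elastic coupling, not through the friction law itself.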
Energy Technology Data Exchange (ETDEWEB)
Beyreuther, Moritz; Wassermann, Joachim [Department of Earth and Environmental Sciences (Geophys. Observatory), Ludwig Maximilians Universitaet Muenchen, D-80333 (Germany); Carniel, Roberto [Dipartimento di Georisorse e Territorio Universitat Degli Studi di Udine, I-33100 (Italy)], E-mail: roberto.carniel@uniud.it
2008-10-01
A possible interaction of (volcano-)tectonic earthquakes with the continuous seismic noise recorded on the volcanic island of Tenerife was recently suggested, but the existing catalogues seem to be far from self-consistent, calling for the development of automatic detection and classification algorithms. In this work we propose the adoption of a methodology based on Hidden Markov Models (HMMs), already widely used in other fields such as speech classification.
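Classification with HMMs typically scores a feature sequence against one trained model per event class and picks the most likely. Below is a self-contained sketch of that scoring step, using the scaled forward algorithm on a toy discrete alphabet; all probabilities are invented for illustration:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial state probs, A: state transitions, B: emission probs),
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two toy event classes over a 3-symbol alphabet (e.g. quantized
# spectral features); one HMM per class, parameters made up.
pi = np.array([1.0, 0.0])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_event = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])  # "earthquake" HMM
B_noise = np.array([[0.2, 0.2, 0.6], [0.3, 0.3, 0.4]])  # "noise" HMM

seq = [0, 0, 1, 1, 1, 0]  # matches the event model's preferred symbols
label = ("earthquake" if forward_loglik(seq, pi, A, B_event)
                       > forward_loglik(seq, pi, A, B_noise) else "noise")
```

In practice the HMM parameters are trained per class (e.g. with Baum-Welch) on labelled continuous features rather than a hand-built discrete alphabet.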
Modelling of the ground motion at Russe site (NE Bulgaria) due to the Vrancea earthquakes
International Nuclear Information System (INIS)
Kouteva, Mihaela; Panza, Giuliano F.; Paskaleva, Ivanka; Romanelli, Fabio
2001-11-01
An approach, capable of synthesising strong ground motion from a basic understanding of fault mechanism and of seismic wave propagation in the Earth, is applied to model the seismic input at a set of 25 sites along a chosen profile at Russe, NE Bulgaria, due to two intermediate-depth Vrancea events (August 30, 1986, Mw=7.2, and May 30, 1990, Mw=6.9). According to our results, once a strong ground motion parameter has been selected to characterise the ground motion, it is necessary to investigate the relationships between its values and the features of the earthquake source, the path to the site and the nature of the site. Therefore, a proper seismic hazard assessment requires an appropriate parametric study to define the different ground shaking scenarios corresponding to the relevant seismogenic zones affecting the given site. Site response assessment is provided simultaneously in frequency and space domains, and thus the applied procedure differs from the traditional engineering approach that discusses the site as a single point. The applied procedure can be efficiently used to estimate the ground motion for different purposes like microzonation, urban planning, retrofitting or insurance of the built environment. (author)
Rhu, Jinsoo; Kim, Jong Man; Choi, Gyu Seong; Kwon, Choon Hyuck David; Joh, Jae-Won
2018-02-20
This study was designed to validate the alpha-fetoprotein model for predicting recurrence after liver transplantation in Korean hepatocellular carcinoma patients. Patients who underwent liver transplantation for hepatocellular carcinoma at Samsung Medical Center between 2007 and 2015 were included. Recurrence, overall survival, and disease-specific survival of patients classified by both the Milan criteria and the alpha-fetoprotein model were compared using the Kaplan-Meier method with the log-rank test. The predictability of the alpha-fetoprotein model compared to the Milan criteria was tested by means of net reclassification improvement (NRI) analysis applied to patients with a follow-up of at least 2 years. A total of 400 patients were included in the study. Patients within the Milan criteria had 5-year recurrence and overall survival rates of 20.9% and 76.3%, respectively, compared to corresponding rates of 50.3% and 55.7% for patients beyond the Milan criteria. Alpha-fetoprotein model low-risk patients had 5-year recurrence and overall survival rates of 21.1% and 76.2%, respectively, compared to corresponding rates of 57.7% and 52.2% in high-risk patients (P<0.001 for all). Although overall net reclassification improvements were statistically nonsignificant for recurrence (NRI=1.7%, Z=0.30, p=0.7624) and overall survival (NRI=9.0%, Z=1.60, p=0.1098), they were significantly better for predicting no recurrence (NRI=6.6%, Z=3.16, p=0.0016) and no death (NRI=7.7%, Z=3.65, p=0.0003). CONCLUSIONS: The alpha-fetoprotein model seems to be a promising tool for selecting liver transplantation candidates, but further investigation is needed.
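The two-category net reclassification improvement used above can be computed in a few lines: it rewards upward risk reclassification among patients with events and downward reclassification among those without. The toy cohort below is invented for illustration and is unrelated to the study's data:

```python
def net_reclassification_improvement(old_risk, new_risk, event):
    """Two-category NRI comparing a new high-risk classification with an
    old one. old_risk/new_risk: 0/1 high-risk flags per patient;
    event: 0/1 observed outcomes."""
    up = [n > o for n, o in zip(new_risk, old_risk)]
    down = [n < o for n, o in zip(new_risk, old_risk)]
    ev = [i for i, e in enumerate(event) if e]
    ne = [i for i, e in enumerate(event) if not e]
    # events should move UP in risk, non-events should move DOWN
    nri_event = (sum(up[i] for i in ev) - sum(down[i] for i in ev)) / len(ev)
    nri_nonevent = (sum(down[i] for i in ne) - sum(up[i] for i in ne)) / len(ne)
    return nri_event + nri_nonevent, nri_event, nri_nonevent

# toy cohort: 6 patients with recurrence, 6 without
old = [1, 1, 0, 0, 1, 0,  1, 0, 0, 1, 0, 0]
new = [1, 1, 1, 1, 1, 0,  0, 0, 0, 1, 0, 0]
event = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
nri, nri_ev, nri_ne = net_reclassification_improvement(old, new, event)
```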
A recurrent neural model for proto-object based contour integration and figure-ground segregation.
Hu, Brian; Niebur, Ernst
2017-12-01
Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.
Cocco, M.; Feuillet, N.; Nostro, C.; Musumeci, C.
2003-04-01
We investigate the mechanical interactions between tectonic faults and volcanic sources through elastic stress transfer and discuss the results of several applications to Italian active volcanoes. We first present stress modeling results that point out a two-way coupling between Vesuvius eruptions and historical earthquakes in the Southern Apennines, which allows us to provide a physical interpretation of their statistical correlation. We then explore the elastic stress interaction between historical eruptions at the Etna volcano and the largest earthquakes in Eastern Sicily and Calabria. We show that the large 1693 seismic event caused an increase of compressive stress along the rift zone, which can be associated with the lack of flank eruptions of the Etna volcano for about 70 years after the earthquake. Moreover, the largest Etna eruptions preceded the large 1693 seismic event by a few decades. Our modeling results clearly suggest that all these catastrophic events are tectonically coupled. We also investigate the effect of elastic stress perturbations on instrumental seismicity caused by magma inflation at depth at both the Etna and the Alban Hills volcanoes. In particular, we model the seismicity pattern at the Alban Hills volcano (central Italy) during a seismic swarm that occurred in 1989-90 and interpret it in terms of Coulomb stress changes caused by magmatic processes in an extensional tectonic stress field. We verify that the earthquakes occur in areas of Coulomb stress increase and that their faulting mechanisms are consistent with the stress perturbation induced by the volcanic source. Our results suggest a link between faults and volcanic sources, which we interpret as a tectonic coupling that explains the seismicity in a large area surrounding the volcanoes.
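The Coulomb stress change calculations referred to above resolve a stress-change tensor onto a receiver fault. A minimal sketch, with a hypothetical stress tensor and an assumed effective friction coefficient of 0.4:

```python
import numpy as np

def coulomb_stress_change(dsigma, n_hat, s_hat, mu_eff=0.4):
    """Resolve a stress-change tensor (Pa, tension positive) onto a
    receiver fault with unit normal n_hat and slip direction s_hat:
    dCFS = d(tau) + mu' * d(sigma_n).  mu_eff is an assumed effective
    friction coefficient (pore-pressure effects folded in)."""
    traction = dsigma @ n_hat
    d_normal = traction @ n_hat   # tension positive: unclamping > 0
    d_shear = traction @ s_hat    # shear change resolved on slip direction
    return d_shear + mu_eff * d_normal

# toy example: 1 bar (1e5 Pa) of shear loading plus 0.5 bar unclamping
dsigma = np.array([[0.0, 1e5, 0.0],
                   [1e5, 5e4, 0.0],
                   [0.0, 0.0, 0.0]])
n_hat = np.array([0.0, 1.0, 0.0])   # receiver fault normal
s_hat = np.array([1.0, 0.0, 0.0])   # slip direction on the fault
dcfs = coulomb_stress_change(dsigma, n_hat, s_hat)  # Pa
```

A positive dCFS brings the receiver fault closer to failure, which is the sense in which the swarm earthquakes above fall in "areas of Coulomb stress increase".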
Karimzadeh, Shaghayegh; Askan, Aysegul
2018-04-01
Located within a basin structure at the conjunction of the North East Anatolian, North Anatolian and Ovacik Faults, the Erzincan city center (Turkey) is one of the most hazardous regions in the world. The combination of the seismotectonic and geological settings of the region has resulted in a series of significant seismic events, including the 1939 (Ms 7.8) and 1992 (Mw = 6.6) earthquakes. The devastating 1939 earthquake occurred in the pre-instrumental era in the region, with no local seismograms available; thus, a limited number of studies exist on that earthquake. The 1992 event, however, despite the sparse local network at that time, has been studied extensively. This study aims to simulate the 1939 Erzincan earthquake using available regional seismic and geological parameters. Despite the several uncertainties involved, such an effort to quantitatively model the 1939 earthquake is promising, given the historical reports of extensive damage and fatalities in the area. The results of this study are expressed in terms of anticipated acceleration time histories at certain locations, spatial distributions of selected ground motion parameters, and felt intensity maps of the region. Simulated motions are first compared against empirical ground motion prediction equations derived from both local and global datasets. Next, anticipated intensity maps of the 1939 earthquake are obtained using local correlations between peak ground motion parameters and felt intensity values. Comparisons of the estimated intensity distributions with the corresponding observed intensities indicate a reasonable modeling of the 1939 earthquake.
Modeling long-term human activeness using recurrent neural networks for biometric data.
Kim, Zae Myung; Oh, Hyungrai; Kim, Han-Gyu; Lim, Chae-Gyun; Oh, Kyo-Joong; Choi, Ho-Jin
2017-05-18
With the invention of fitness trackers, it has become possible to continuously monitor a user's biometric data such as heart rate, number of footsteps taken, and amount of calories burned. This paper names the time series of these three types of biometric data the user's "activeness", and investigates the feasibility of modeling and predicting the long-term activeness of the user. The dataset used in this study consisted of several months of biometric time-series data gathered by seven users independently. Four recurrent neural network (RNN) architectures, as well as a deep neural network and a simple regression model, were proposed to investigate the performance of predicting the activeness of the user under various length-related hyper-parameter settings. In addition, the learned model was tested to predict the time period when the user's activeness falls below a certain threshold. A preliminary experimental result shows that each type of activeness data exhibited a short-term autocorrelation, and among the three types of data, the consumed calories and the number of footsteps were positively correlated, while the heart rate data showed almost no correlation with either of them. It is probably due to this characteristic of the dataset that although the RNN models produced the best results on modeling the user's activeness, the difference was marginal, and the other baseline models, especially the linear regression model, performed quite admirably as well. Further experimental results show that it is feasible to predict a user's future activeness with useful precision: for example, a trained RNN model could predict, with a precision of 84%, when the user would be less active within the next hour given the latest 15 min of his activeness data. This paper defines and investigates the notion of a user's "activeness", and shows that forecasting the long-term activeness of the user is indeed possible. Such information can be utilized by a health-related application to proactively
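The reported short-term autocorrelation and the pairwise correlations between the biometric series can be checked with standard estimators. The synthetic data below merely mimic the qualitative structure described (steps and calories coupled, heart rate independent):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a time series."""
    x = np.asarray(x, float)
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

rng = np.random.default_rng(0)
t = np.arange(500)
# slowly varying step counts with a little noise -> short-term memory
steps = 50 + 10 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 1, t.size)
calories = 2.0 * steps + rng.normal(0, 1, t.size)   # tracks steps
heart = 70 + rng.normal(0, 5, t.size)               # independent noise

r_auto = lag1_autocorr(steps)                   # strong short-term memory
r_steps_cal = np.corrcoef(steps, calories)[0, 1]  # strongly positive
r_steps_hr = np.corrcoef(steps, heart)[0, 1]      # near zero
```

It is exactly this structure (high autocorrelation, one redundant channel, one uninformative channel) that lets a simple linear model come close to the RNNs in the paper.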
van Meurs, Hannah S.; Schuit, Ewoud; Horlings, Hugo M.; van der Velden, Jacobus; van Driel, Willemien J.; Mol, Ben Willem J.; Kenter, Gemma G.; Buist, Marrije R.
2014-01-01
Models to predict the probability of recurrence-free survival exist for various types of malignancies, but a model for recurrence-free survival in individuals with an adult granulosa cell tumor (GCT) of the ovary is lacking. We aimed to develop and internally validate such a prognostic model.
International Nuclear Information System (INIS)
Muto, K.; Motosaka, M.; Kamata, M.; Masuda, K.; Urao, K.; Mameda, T.
1985-01-01
In order to investigate the 3-dimensional earthquake response characteristics of an embedded structure with consideration of soil-structure interaction, the authors have developed an analytical method using a 3-dimensional hybrid model of boundary elements (BEM) and finite elements (FEM) and have conducted a dynamic analysis of an actual nuclear reactor building. This paper describes a comparative study of two different embedment depths in soil modeled as an elastic half-space. As a result, it was found that the earthquake response intensity decreases as the embedment depth increases, and the method was confirmed to be effective for investigating the 3-D response characteristics of embedded structures, such as the deflection pattern of each floor level and floor response spectra in the high-frequency range. (orig.)
Dynamic rupture models of subduction zone earthquakes with off-fault plasticity
Wollherr, S.; van Zelst, I.; Gabriel, A. A.; van Dinther, Y.; Madden, E. H.; Ulrich, T.
2017-12-01
Modeling tsunami genesis based on purely elastic seafloor displacement typically underpredicts tsunami sizes. Dynamic rupture simulations allow us to analyse whether plastic energy dissipation is a missing rheological component by capturing the complex interplay of the rupture front, emitted seismic waves and the free surface in the accretionary prism. Strike-slip models with off-fault plasticity suggest decreasing rupture speed and extensive plastic yielding mainly at shallow depths. For simplified subduction geometries, inelastic deformation on the verge of Coulomb failure may enhance vertical displacement, which in turn favors the generation of large tsunamis (Ma, 2012). However, constraining appropriate initial conditions in terms of fault geometry, initial fault stress and strength remains challenging. Here, we present dynamic rupture models of subduction zones constrained by long-term seismo-thermo-mechanical (STM) modeling, without any a priori assumption of the regions of failure. The STM model provides self-consistent slab geometries, as well as initial stress and strength conditions that evolve in response to tectonic stresses, temperature, gravity, plasticity and pressure (van Dinther et al., 2013). Coseismic slip and coupled seismic wave propagation are modelled using the software package SeisSol (www.seissol.org), suited for complex fault zone structures and topography/bathymetry. SeisSol allows for local time-stepping, which drastically reduces the time-to-solution (Uphoff et al., 2017). This is particularly important in large-scale scenarios resolving small-scale features, such as the shallow angle between the megathrust fault and the free surface. Our dynamic rupture model uses a Drucker-Prager plastic yield criterion and accounts for thermal pressurization around the fault, mimicking the effect of pore pressure changes due to frictional heating. We first analyze the influence of this rheology on rupture dynamics and tsunamigenic properties, i.e. seafloor
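The Drucker-Prager criterion mentioned above declares yielding when the square root of the second deviatoric stress invariant exceeds a pressure-dependent strength. Below is a sketch using one common match to the Mohr-Coulomb compressive meridian (the cohesion and friction angle are assumed values), showing why yielding concentrates at shallow, weakly confined depths:

```python
import numpy as np

def drucker_prager_F(sigma, cohesion, phi_deg):
    """Drucker-Prager yield function (tension positive convention):
    yielding when F = sqrt(J2) + alpha*I1 - k >= 0, with alpha and k
    matched to the Mohr-Coulomb compressive meridian."""
    phi = np.radians(phi_deg)
    alpha = 2 * np.sin(phi) / (np.sqrt(3) * (3 - np.sin(phi)))
    k = 6 * cohesion * np.cos(phi) / (np.sqrt(3) * (3 - np.sin(phi)))
    I1 = np.trace(sigma)
    s = sigma - I1 / 3 * np.eye(3)      # deviatoric part
    J2 = 0.5 * np.tensordot(s, s)       # second deviatoric invariant
    return np.sqrt(J2) + alpha * I1 - k

c, phi = 5e6, 30.0                      # 5 MPa cohesion, assumed values
shallow = np.diag([0.0, 0.0, -30e6])    # low confinement near the surface
deep = np.diag([-40e6, -40e6, -70e6])   # same stress difference, confined
F_shallow = drucker_prager_F(shallow, c, phi)   # > 0: plastic yielding
F_deep = drucker_prager_F(deep, c, phi)         # < 0: remains elastic
```

The two test states carry the same deviatoric stress; only the confining pressure differs, which is why plastic yielding (and its extra seafloor uplift) localizes in the shallow wedge.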
Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.
2017-12-01
Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfers and to retrospectively explain stress triggering among earthquake sequences, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models. Based on those slip models, we compute the coseismic Coulomb stress changes induced by the mainshock. Our results indicate that nearby Coulomb stress changes can differ considerably from one slip model to another, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nanki, Tonankai and Tokai are insignificant, approximately only 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state (CRS) model decay much faster with elapsed time in stress-triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress-triggering zones would not drastically affect CRS-based assessments of the seismicity rate and earthquake probability in the intermediate to long term.
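The paper's point that per-model Coulomb stress values scatter widely suggests a simple robustness check: aggregate the dCFS predictions across slip models and require sign agreement before calling a site a triggering zone. The values and the 0.1 bar threshold below are purely illustrative:

```python
import numpy as np

# Hypothetical dCFS values (bar) predicted at one receiver location by
# several published slip models of the same mainshock.
dcfs_by_model = np.array([0.25, 0.10, -0.05, 0.40, 0.02, 0.18])

mean = dcfs_by_model.mean()
std = dcfs_by_model.std(ddof=1)            # spread across slip models
frac_positive = (dcfs_by_model > 0).mean()  # sign agreement

# A simple robustness rule: flag a "stress triggering zone" only if most
# models agree in sign AND the mean exceeds a 0.1 bar threshold, a
# commonly cited (though debated) triggering level.
robust_trigger = frac_positive >= 0.8 and mean > 0.1
```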
Field, Edward; Porter, Keith; Milner, Kevin
2017-01-01
We present a prototype operational loss model based on UCERF3-ETAS, the third Uniform California Earthquake Rupture Forecast with an Epidemic Type Aftershock Sequence (ETAS) component. As such, UCERF3-ETAS represents the first earthquake forecast to relax fault segmentation assumptions and to include multi-fault ruptures, elastic rebound, and spatiotemporal clustering, all of which seem important for generating realistic and useful aftershock statistics. UCERF3-ETAS is nevertheless an approximation of the system, so its usefulness will vary, and its potential value needs to be ascertained in the context of each application. We examine this question with respect to statewide loss estimates, exemplifying how risk can be elevated by orders of magnitude due to triggered events following various scenario earthquakes. Two important considerations are the probability gains relative to loss likelihoods in the absence of main shocks, and the rapid decay of those gains with time. Significant uncertainties and model limitations remain, so we hope this paper will inspire similar analyses with respect to other risk metrics, to help ascertain whether operationalization of UCERF3-ETAS would be worth the considerable resources required.
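The "probability gains" discussed above can be illustrated with a back-of-envelope modified-Omori calculation, comparing expected triggered-plus-background counts to background alone. All parameter values are invented for illustration and are far simpler than the UCERF3-ETAS machinery:

```python
import math

# Modified-Omori aftershock rate K/(t + c)^p (events/day) on top of a
# steady background rate; illustrative values only.
K, c, p = 50.0, 0.05, 1.0
background = 0.02   # background rate of comparable events, events/day

def expected_aftershocks(t_days):
    """Integral of K/(t+c)^p from 0 to t, for the p = 1 case."""
    return K * math.log((t_days + c) / c)

def prob_gain(t_days):
    """Expected-count ratio: (triggered + background) / background."""
    n_trig = expected_aftershocks(t_days)
    n_bg = background * t_days
    return (n_trig + n_bg) / n_bg

gain_week = prob_gain(7.0)     # huge right after the main shock
gain_year = prob_gain(365.0)   # decays as the sequence dies out
```

The rapid drop from `gain_week` to `gain_year` is the "rapid decay of gains with time" that limits how long an elevated-loss advisory stays meaningful.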
Directory of Open Access Journals (Sweden)
Katherine Rotker
2016-01-01
Varicocele recurrence is one of the most common complications associated with varicocele repair. A systematic review was performed to evaluate varicocele recurrence rates, anatomic causes of recurrence, and methods of management of recurrent varicoceles. The PubMed database was searched using the keywords "recurrent" and "varicocele" as well as the MeSH criteria "recurrent" and "varicocele." Articles were excluded if they were not in English, represented single case reports, focused solely on subclinical varicocele, or focused solely on a pediatric population (age <18). Rates of recurrence vary from 0% to 35% with the technique of varicocele repair. The anatomy of recurrence can be defined by venography. Management of varicocele recurrence can be surgical or via embolization.
Bose, Maren; Graves, Robert; Gill, David; Callaghan, Scott; Maechling, Phillip J.
2014-01-01
Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency simulations of more than 415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a ‘proof of concept’, being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (∼20 per cent) of CyberShake simulations, but can explain MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least ‘moderate’, ‘strong’ or ‘very strong’ shaking in the Los Angeles (LA) basin. These thresholds are used to construct a simple and robust EEW algorithm: to
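The trade-off behind the "warning times" mentioned above is simple geometry: an alert depends on detecting P waves near the source before the slower, damaging S waves reach the site. A back-of-envelope sketch with typical crustal wave speeds and an assumed processing latency (none of these values come from the study):

```python
# Back-of-envelope early-warning time: the S wave carries the damaging
# shaking, so the warning at distance r is roughly the S arrival minus
# the time needed to detect and process the event from nearby P waves.
VP, VS = 6.0, 3.5   # km/s, typical crustal P and S speeds (assumed)
T_PROC = 5.0        # detection + processing + dissemination latency, s

def warning_time(r_site_km, r_detect_km=20.0):
    """Seconds of warning at a site r_site_km from the epicentre, if the
    event is detected using P waves at r_detect_km."""
    t_alert = r_detect_km / VP + T_PROC
    t_s = r_site_km / VS
    return max(0.0, t_s - t_alert)

w_basin = warning_time(120.0)  # distant basin site: tens of seconds
w_near = warning_time(25.0)    # near-fault site: little or no warning
```

This is why the paper frames its thresholds per fault: only sufficiently large, sufficiently distant ruptures leave usable warning time for the LA basin.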
Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre
Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip
2013-04-01
Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate rock physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, including data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory. We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies in
Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.
Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei
2016-02-01
A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.
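The core of an RBF network is a linear readout over Gaussian basis functions. The sketch below fits only the output weights of a fixed-structure network by least squares on a nonlinear target; the paper's SR-RBF additionally grows/prunes neurons and adapts centers online, which is omitted here:

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF features phi_j(x) = exp(-(x - c_j)^2 / (2*width^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

# Nonlinear 1-D target to be identified (stand-in for plant dynamics)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.5 * x

# Fixed structure: 12 evenly spaced centers, assumed width 0.6
centers = np.linspace(-3, 3, 12)
Phi = rbf_design(x, centers, width=0.6)

# Output weights by linear least squares
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

In NMPC, a model of this form supplies the multi-step predictions inside the receding-horizon optimization; the quality of those predictions (here, the small RMSE) is what the self-organizing structure is designed to maintain online.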
Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed
2016-12-01
The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.
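The weighted vote combination described above is, at prediction time, just a weighted mixture of per-classifier posteriors. A minimal sketch with invented posteriors and weights standing in for the learned feature-set reliabilities:

```python
import numpy as np

def weighted_vote(posteriors, weights):
    """Combine per-classifier class posteriors with a weighted vote
    (a discrete mixture): P(class) = sum_m w_m * P_m(class)."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()          # normalize the vote
    combined = np.einsum('m,mc->c', weights, np.asarray(posteriors, float))
    return combined, int(np.argmax(combined))

# Three hypothetical RNN feature-set classifiers scoring one word image
# over 4 candidate words; the weights stand in for learned reliability.
posteriors = [[0.6, 0.2, 0.1, 0.1],   # strong feature set
              [0.3, 0.4, 0.2, 0.1],   # weaker set, disagrees
              [0.5, 0.3, 0.1, 0.1]]
weights = [0.5, 0.2, 0.3]
probs, best = weighted_vote(posteriors, weights)
```

The paper's contribution is reading the fitted weights themselves as a measure of each feature set's strength and complementarity, not just using them for prediction.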
Recurrent dynamics in an epidemic model due to stimulated bifurcation crossovers
International Nuclear Information System (INIS)
Juanico, Drandreb Earl
2015-01-01
Epidemics are known to persist in the form of recurrence cycles. Despite intervention efforts through vaccination and targeted social distancing, peaks of activity for infectious diseases like influenza reappear over time. Analysis of a stochastic model is here undertaken to explore a proposed cycle-generating mechanism, the bifurcation crossover. Time series from simulations of the model exhibit oscillations similar to the temporal signature of influenza activity. The power-spectral density indicates a resonant frequency, which corresponds to the annual seasonality of influenza in temperate zones. The study finds that intervention actions influence the extinguishability of epidemic activity. An asymptotic solution to a backward Kolmogorov equation yields a mean extinction time that is a function of both intervention efficacy and population size. Intervention efficacy must be greater than a certain threshold to increase the chances of extinguishing the epidemic. Agreement of the model with several phenomenological features of epidemic cycles lends it a tractability that may serve as an early warning of imminent outbreaks.
Emergence of unstable itinerant orbits in a recurrent neural network model
International Nuclear Information System (INIS)
Suemitsu, Yoshikazu; Nara, Shigetoshi
2005-01-01
A recurrent neural network model with time delay is investigated by numerical methods. The model functions as a conventional associative memory and also enables us to embed a new kind of memory attractor that cannot be realized in models without time delay, for example chain-ring attractors. This is attributed to the fact that the time delay extends the available state-space dimension. The difference between the basin structures of chain-ring attractors and of isolated cycle attractors is investigated with respect to two attractor pattern sets: random memory patterns and designed memory patterns with intended structures. Compared to isolated attractors with random memory patterns, the basins of chain-ring attractors are reduced considerably. Computer experiments confirm that the basin volume of each embedded chain-ring attractor shrinks, and the emergence of unstable itinerant orbits in the state space outside the memory attractor basins is observed. The instability of such itinerant orbits is investigated: a 1-bit difference in initial conditions does not exceed 10% of the total dimension within 100 updating steps.
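The "conventional associative memory" that the delayed model extends can be illustrated by a minimal Hopfield-style network: one stored +/-1 pattern, outer-product weights, and recall from a corrupted cue. The delay term and the chain-ring attractors of the paper are beyond this sketch:

```python
import numpy as np

# Minimal associative memory without time delay: store one +/-1 pattern
# with the outer-product (Hebbian) rule and recall it by synchronous
# sign updates from a noisy initial state.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
N = pattern.size
W = np.outer(pattern, pattern) - np.eye(N)   # zero self-coupling

state = pattern.copy()
state[0] *= -1                               # flip one bit as a noisy cue
for _ in range(5):                           # iterate toward the attractor
    state = np.sign(W @ state).astype(int)
recalled = bool((state == pattern).all())
```

Adding a delayed feedback term W2 @ x(t - d) effectively doubles the state dimension to (x(t), x(t - d)), which is what makes cyclic attractors embeddable in the paper's model.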
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
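The regression-calibration idea is easiest to see in the linear case, where replicates estimate the measurement-error variance and the naive slope is deflated by a known reliability ratio. This simulation is illustrative only; the paper applies the correction within a Cox model, not linear regression:

```python
import numpy as np

# Linear regression calibration: replicates of an error-prone covariate
# estimate the error variance, and the naive slope is corrected by the
# reliability ratio lambda = var(X) / (var(X) + var(U)/k).
rng = np.random.default_rng(42)
n, k, beta = 5000, 2, 1.0
x = rng.normal(0, 1, n)                          # true covariate
reps = x[:, None] + rng.normal(0, 0.8, (n, k))   # k noisy replicates
w = reps.mean(axis=1)                            # observed surrogate
y = beta * x + rng.normal(0, 1, n)               # outcome

var_u = reps.var(axis=1, ddof=1).mean()     # within-subject error variance
var_x = w.var(ddof=1) - var_u / k           # moment estimate of var(X)
lam = var_x / (var_x + var_u / k)           # reliability (attenuation) ratio

beta_naive = np.cov(w, y)[0, 1] / w.var(ddof=1)  # attenuated toward zero
beta_corrected = beta_naive / lam                # calibration-corrected
```

As in the paper, using the replicates serves two purposes: it identifies the error variance (enabling the bias correction) and averaging them shrinks the error itself, reducing the standard error of the corrected estimate.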
Recurrent dynamics in an epidemic model due to stimulated bifurcation crossovers
Energy Technology Data Exchange (ETDEWEB)
Juanico, Drandreb Earl [Department of Mathematics, Ateneo de Manila University, Loyola Heights, Quezon City, Philippines 1108 (Philippines); National Institute of Physics, University of the Philippines, Diliman, Quezon City, Philippines 1101 (Philippines)
2015-05-15
Epidemics are known to persist in the form of recurrence cycles. Despite intervention efforts through vaccination and targeted social distancing, peaks of activity for infectious diseases like influenza reappear over time. Analysis of a stochastic model is here undertaken to explore a proposed cycle-generating mechanism, the bifurcation crossover. Time series from simulations of the model exhibit oscillations similar to the temporal signature of influenza activity. The power-spectral density indicates a resonant frequency, which corresponds to the annual seasonality of influenza in temperate zones. The study finds that intervention actions influence the extinguishability of epidemic activity. An asymptotic solution to a backward Kolmogorov equation yields a mean extinction time that is a function of both intervention efficacy and population size. Intervention efficacy must be greater than a certain threshold to increase the chances of extinguishing the epidemic. Agreement of the model with several phenomenological features of epidemic cycles lends it a tractability that may serve as an early warning of imminent outbreaks.
Carr, Brett B.; Clarke, Amanda B.; de'Michieli Vitturi, Mattia
2018-01-01
Extrusion rates during lava dome-building eruptions are variable and eruption sequences at these volcanoes generally have multiple phases. Merapi Volcano, Java, Indonesia, exemplifies this common style of activity. Merapi is one of Indonesia's most active volcanoes and during the 20th and early 21st centuries effusive activity has been characterized by long periods of very slow (work has suggested that the peak extrusion rates observed in early June were triggered by the earthquake through either dynamic stress-induced overpressure or the addition of CO2 due to decarbonation and gas escape from new fractures in the bedrock. We use the numerical model to test the feasibility of these proposed hypotheses and show that, in order to explain the observed change in extrusion rate, an increase of approximately 5-7 MPa in magma storage zone overpressure is required. We also find that the addition of ∼1000 ppm CO2 to some portion of the magma in the storage zone following the earthquake reduces water solubility such that gas exsolution is sufficient to generate the required overpressure. Thus, the proposed mechanism of CO2 addition is a viable explanation for the peak phase of the Merapi 2006 eruption. A time-series of extrusion rate shows a sudden increase three days following the earthquake. We explain this three-day delay by the combined time required for the effects of the earthquake and corresponding CO2 increase to develop in the magma storage system (1-2 days), and the time we calculate for the affected magma to ascend from storage zone to surface (40 h). The increased extrusion rate was sustained for 2-7 days before dissipating and returning to pre-earthquake levels. During this phase, we estimate that 3.5 million m3 DRE of magma was erupted along with 11 ktons of CO2. The final phase of the 2006 eruption was characterized by highly variable extrusion rates. We demonstrate that those changes were likely controlled by failure of the edifice that had been confining
Daniell, James; Schaefer, Andreas; Wenzel, Friedemann; Khazai, Bijan; Girard, Trevor; Kunz-Plapp, Tina; Kunz, Michael; Muehr, Bernhard
2016-04-01
Over the days following the 2015 Nepal earthquake, rapid loss estimates of deaths, economic loss, and reconstruction cost were undertaken by our research group in conjunction with the World Bank. This modelling relied on historic losses from other Nepal earthquakes as well as detailed socioeconomic data and earthquake loss information via CATDAT. The modelled results were very close to the final figures for the 2015 earthquake of around 9000 deaths and a direct building loss of ca. 3 billion (a). The process undertaken to produce these loss estimates is described, along with the potential for analysing reconstruction costs from future Nepal earthquakes in rapid time post-event. The reconstruction cost and death toll model is then used as the base model for examining the effect of spending money on earthquake retrofitting of buildings versus complete reconstruction of buildings. This is undertaken for future events using empirical statistics from past events along with further analytical modelling. The effect of investment vs. the time of a future event is also explored. Preliminary low-cost options (b), along the lines of other country studies for retrofitting (ca. 100), are examined versus the option of different building typologies in Nepal, as well as investment in various sectors of construction. The effect of public vs. private capital expenditure post-earthquake is also explored as part of this analysis, as well as spending on other components outside of earthquakes. a) http://www.scientificamerican.com/article/experts-calculate-new-loss-predictions-for-nepal-quake/ b) http://www.aees.org.au/wp-content/uploads/2015/06/23-Daniell.pdf
Recurrent neural network based hybrid model for reconstructing gene regulatory network.
Raza, Khalid; Alam, Mansaf
2016-10-01
One of the exciting problems in systems biology research is to decipher how the genome controls the development of complex biological systems. Gene regulatory networks (GRNs) help in the identification of regulatory interactions between genes and offer fruitful information about the functional role of individual genes in a cellular system. Discovering GRNs leads to a wide range of applications, including identification of disease-related pathways, provision of novel tentative drug targets, prediction of disease response, and assistance in diagnosing various diseases, including cancer. Reconstruction of GRNs from available biological data is still an open problem. This paper proposes a recurrent neural network (RNN) based model of GRNs, hybridized with a generalized extended Kalman filter for weight update in the backpropagation-through-time training algorithm. The RNN is a complex neural network that offers a good compromise between biological closeness and mathematical flexibility for modeling GRNs; it is also able to capture complex, non-linear, and dynamic relationships among variables. Gene expression data are inherently noisy, and the Kalman filter performs well for estimation problems even with noisy data. Hence, we applied a non-linear version of the Kalman filter, known as the generalized extended Kalman filter, for weight updates during RNN training. The developed model has been tested on four benchmark networks: the DNA SOS repair network, the IRMA network, and two synthetic networks from the DREAM Challenge. We performed a comparison of our results with other state-of-the-art techniques, which shows the superiority of our proposed model. Further, 5% Gaussian noise was added to the dataset, and the results of the proposed model show a negligible effect of the noise, demonstrating the noise-tolerance capability of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Jensen, P; Krogsgaard, M R; Christiansen, J
1996-01-01
-80. INTERVENTIONS: All patients were followed up by rectoscopy and double contrast barium enema. The survival data were analysed by Cox's proportional hazards model. MAIN OUTCOME MEASURES: Variables of significant prognostic importance for recurrence of adenomas and the development of cancer were identified...
Gelderloos, L.J.; Chrupala, Grzegorz
2016-01-01
We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover
Deep Recurrent Model for Server Load and Performance Prediction in Data Center
Directory of Open Access Journals (Sweden)
Zheng Huang
2017-01-01
Recurrent neural networks (RNNs) have been widely applied to many sequential tagging tasks such as natural language processing (NLP) and time-series analysis, and they have been shown to work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which requires many unrealistic assumptions. Our model is built on events (user requests), which are the root cause of server performance. We predict the performance of servers using RNN-LSTM by analyzing server logs in a data center that contain users' access sequences. Previous work on workload prediction could not generate detailed simulated workloads, which are useful for testing the working condition of servers. Our method provides a new way to reproduce user request sequences using RNN-LSTM. Experimental results show that our models perform well in generating load and predicting performance on a data set logged from an online service. We carried out experiments with an nginx web server and a MySQL database server, and our methods can easily be applied to other servers in a data center.
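The LSTM unit such models build on combines input, forget, and output gates around a persistent cell state. A minimal single-cell forward pass in NumPy; the dimensions and random weights are illustrative stand-ins, not the paper's trained model:

```python
import numpy as np

def lstm_step(x, h, c, params):
    """One forward step of a standard LSTM cell (input/forget/output gates)."""
    Wx, Wh, b = params                      # Wx: (4H, D), Wh: (4H, H), b: (4H,)
    H = h.shape[0]
    z = Wx @ x + Wh @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell update
    c_new = f * c + i * g                   # cell state carries long-term memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 5                                 # input and hidden sizes (illustrative)
params = (rng.standard_normal((4 * H, D)) * 0.1,
          rng.standard_normal((4 * H, H)) * 0.1,
          np.zeros(4 * H))
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((10, D)):      # run over a short request-feature sequence
    h, c = lstm_step(x, h, c, params)
```

The forget gate is what lets the cell retain information across long request sequences, which is why LSTM is preferred over a vanilla RNN for log data.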
Uloza, Virgilijus; Kuzminienė, Alina; Palubinskienė, Jolita; Balnytė, Ingrida; Ulozienė, Ingrida; Valančiūtė, Angelija
2017-07-01
We aimed to develop a chick embryo chorioallantoic membrane (CAM) model of recurrent respiratory papilloma (RRP) and to evaluate its morphological and morphometric characteristics, together with angiogenic features. Fresh RRP tissue samples obtained from 13 patients were implanted in 174 chick embryo CAMs. Morphological, morphometric, and angiogenic changes in the CAM and chorionic epithelium were evaluated up until 7 days after the implantation. Immunohistochemical analysis (34βE12, Ki-67, MMP-9, PCNA, and Sambucus nigra staining) was performed to detect cytokeratins and endothelial cells and to evaluate the proliferative capacity of the RRP before and after implantation on the CAM. The implanted RRP tissue samples survived on the CAM in 73% of cases while retaining the essential morphologic characteristics and proliferative capacity of the original tumor. Implants induced thickening of both the CAM (241-560%, p=0.001) and the chorionic epithelium (107-151%, p=0.001), while the number of blood vessels (37-85%, p=0.001) in the CAM increased. The results of the present study confirmed that the chick embryo CAM is a relevant host for RRP fresh tissue implantation. The CAM assay demonstrated the specific RRP tumor growth pattern after implantation and provided the first morphological and morphometric characterization of the RRP CAM model, opening new horizons in studying this disease.
International Nuclear Information System (INIS)
Hofmann, R.B.
1995-01-01
Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository
Bennington, Ninfa; Thurber, Clifford; Feigl, Kurt; ,
2011-01-01
Several studies of the 2004 Parkfield earthquake have linked the spatial distribution of the event’s aftershocks to the mainshock slip distribution on the fault. Using geodetic data, we find a model of coseismic slip for the 2004 Parkfield earthquake with the constraint that the edges of coseismic slip patches align with aftershocks. The constraint is applied by encouraging the curvature of coseismic slip in each model cell to be equal to the negative of the curvature of seismicity density. The large patch of peak slip about 15 km northwest of the 2004 hypocenter found in the curvature-constrained model is in good agreement in location and amplitude with previous geodetic studies and the majority of strong motion studies. The curvature-constrained solution shows slip primarily between aftershock “streaks” with the continuation of moderate levels of slip to the southeast. These observations are in good agreement with strong motion studies, but inconsistent with the majority of published geodetic slip models. Southeast of the 2004 hypocenter, a patch of peak slip observed in strong motion studies is absent from our curvature-constrained model, but the available GPS data do not resolve slip in this region. We conclude that the geodetic slip model constrained by the aftershock distribution fits the geodetic data quite well and that inconsistencies between models derived from seismic and geodetic data can be attributed largely to resolution issues.
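The curvature constraint described here is a form of regularized least squares: penalize the difference between the model's second differences and a target curvature field. A toy 1-D NumPy sketch, in which the operator G, the slip profile, and the weight lam are all invented for illustration:

```python
import numpy as np

def curvature_constrained_lsq(G, d, target_curv, lam):
    """Solve min ||G m - d||^2 + lam ||L m - target_curv||^2,
    where L is the 1-D second-difference (curvature) operator."""
    n = G.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.vstack([G, np.sqrt(lam) * L])
    b = np.concatenate([d, np.sqrt(lam) * target_curv])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

rng = np.random.default_rng(1)
n = 20
m_true = np.sin(np.linspace(0.0, np.pi, n))   # smooth "slip" profile (toy)
G = rng.standard_normal((40, n))              # stand-in forward operator
d = G @ m_true + 0.01 * rng.standard_normal(40)

# Encourage the model's curvature toward a prescribed field; in the study
# that target is the negative curvature of seismicity density, here (purely
# for illustration) the true profile's own curvature.
L = np.diff(np.eye(n), 2, axis=0)
m_hat = curvature_constrained_lsq(G, d, L @ m_true, lam=1.0)
```

Stacking the data equations and the weighted constraint into one least-squares system is the standard way such soft constraints enter geodetic inversions.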
Directory of Open Access Journals (Sweden)
Sajad Ganjehi
2013-08-01
Introduction: Earthquakes are imminent threats to urban areas of Iran, especially Tehran. They can cause extensive destruction and lead to heavy casualties. One of the most important aspects of disaster management after an earthquake is the rapid transfer of casualties to emergency shelters. To expedite the emergency evacuation process, the optimal safe path method should be considered. To examine the safety of road networks and to determine the most optimal route at the pre-earthquake phase, a series of parameters should be taken into account. Methods: In this study, we employed a multi-criteria decision-making approach to determine and evaluate the effective safety parameters for selection of optimal routes in emergency evacuation after an earthquake. Results: The relationship between the parameters was analyzed and the effect of each parameter was listed. A process model was described and a case study was implemented in the 13th Aban neighborhood (Tehran's 20th municipal district). Then, an optimal path to safe places in an emergency evacuation after an earthquake in the 13th Aban neighborhood was selected. Conclusion: Analytic hierarchy process (AHP), as the main model, was employed. Each parameter of the model was described. Also, the capabilities of GIS software, such as layer coverage, were used. Keywords: Earthquake, emergency evacuation, Analytic Hierarchy Process (AHP), crisis management, optimization, 13th Aban neighborhood of Tehran
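The AHP step reduces a pairwise-comparison matrix of criteria to a priority weight vector via the principal eigenvector, with a consistency check on the judgments. A sketch in Python; the three criteria and the comparison values are hypothetical, not those of the study:

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from a pairwise-comparison matrix via the principal
    eigenvector (Saaty's method), plus the consistency ratio."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
    return w, ci / ri                       # weights, consistency ratio

# Hypothetical route-safety criteria: road width, debris risk, shelter distance.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w, cr = ahp_weights(A)
```

A consistency ratio below 0.1 is the conventional threshold for accepting the pairwise judgments; the weights then score candidate evacuation routes.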
Semi-parametric proportional intensity models robustness for right-censored recurrent failure data
Energy Technology Data Exchange (ETDEWEB)
Jiang, S.T. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States); Landers, T.L. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States)]. E-mail: landers@ou.edu; Rhoads, T.R. [College of Engineering, University of Oklahoma, 202 West Boyd St., Room 107, Norman, OK 73019 (United States)
2005-10-01
This paper reports the robustness of four proportional intensity (PI) models: Prentice-Williams-Peterson-gap time (PWP-GT), PWP-total time (PWP-TT), Andersen-Gill (AG), and Wei-Lin-Weissfeld (WLW), for right-censored recurrent failure event data. The results are beneficial to practitioners in anticipating the more favorable engineering application domains and selecting appropriate PI models. The PWP-GT and AG prove to be models of choice over ranges of sample sizes, shape parameters, and censoring severity. At the smaller sample size (U = 60), where there are 30 per class for a two-level covariate, the PWP-GT proves to perform well for moderate right-censoring (P_c ≤ 0.8), where 80% of the units have some censoring, and moderately decreasing, constant, and moderately increasing rates of occurrence of failures (power-law NHPP shape parameter in the range of 0.8 ≤ δ ≤ 1.8). For the large sample size (U = 180), the PWP-GT performs well for severe right-censoring (0.8
model proves to outperform the PWP-TT and WLW for stationary processes (HPP) across a wide range of right-censorship (0.0 ≤ P_c ≤ 1.0) and for sample sizes of 60 or more.
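The PI models differ chiefly in how they lay out the risk intervals for recurrent events: PWP-GT restarts the clock at each failure (gap time), while PWP-TT keeps the clock running from study start (total time). A small Python helper sketching both counting-process layouts for one unit, in the (stratum, start, stop, event) row form used by standard survival software; the example times are invented:

```python
def pwp_layouts(event_times, censor_time):
    """Build PWP gap-time and total-time risk intervals for one unit.
    Each row: (stratum k, start, stop, event indicator)."""
    gap, total, prev = [], [], 0.0
    for k, t in enumerate(event_times, start=1):
        gap.append((k, 0.0, t - prev, 1))     # clock resets at each failure
        total.append((k, prev, t, 1))         # clock runs from study start
        prev = t
    if censor_time > prev:                    # trailing censored interval
        k = len(event_times) + 1
        gap.append((k, 0.0, censor_time - prev, 0))
        total.append((k, prev, censor_time, 0))
    return gap, total

# A unit failing at t = 2, 5, 9 and censored at t = 12.
gap, total = pwp_layouts([2.0, 5.0, 9.0], censor_time=12.0)
```

Stratifying on the event number k is what distinguishes the PWP models from AG, which pools all intervals into a single baseline intensity.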
The HayWired Earthquake Scenario—Earthquake Hazards
Detweiler, Shane T.; Wein, Anne M.
2017-04-24
The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude-6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of
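Under a memoryless (Poisson) occurrence model, the quoted 33-percent chance in three decades maps to an annual rate; the conversion is sketched below. The Working Group's actual calculation is more elaborate, combining fault-specific renewal models, so this is a back-of-envelope comparison only.

```python
import math

def annual_rate(p_window, years=30.0):
    """Annual rate lambda such that a Poisson process gives probability
    p_window of at least one event in `years`: p = 1 - exp(-lambda * years)."""
    return -math.log(1.0 - p_window) / years

rate = annual_rate(0.33)                 # the scenario's 33% in three decades
p_one_year = 1.0 - math.exp(-rate)       # implied one-year probability
```

The implied rate is roughly 1.3% per year, slightly above the one-year probability because multiple events in a year are counted once.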
Pattern recognition and modelling of earthquake registrations with interactive computer support
International Nuclear Information System (INIS)
Manova, Katarina S.
2004-01-01
The subject of this thesis is pattern recognition. Pattern recognition, i.e. classification, is applied in many fields: speech recognition, hand-printed character recognition, medical analysis, satellite and aerial-photo interpretation, biology, computer vision, information retrieval, and so on. This thesis studies its applicability in seismology. Signal classification is of great importance in a wide variety of applications. This thesis deals with the problem of (automatic) classification of earthquake signals, which are non-stationary signals. Non-stationary signal classification is an area of active research in the signal and image processing community. The goal of the thesis is the recognition of earthquake signals according to their epicentral zone. Source classification, i.e. recognition, is based on transforming seismograms (earthquake registrations) into images via time-frequency transformations, and on applying image processing and pattern recognition techniques for feature extraction, classification, and recognition. The tested data include local earthquakes from seismic regions in Macedonia. By using actual seismic data, it is shown that the proposed methods provide satisfactory results for classification and recognition. (Author)
A multi-objective robust optimization model for logistics planning in the earthquake response phase
Najafi, M.; Eshghi, K.; Dullaert, W.E.H.
2013-01-01
Usually, resources are short in supply when earthquakes occur. In such emergency situations, disaster relief organizations must use these scarce resources efficiently to achieve the best possible emergency relief. This paper therefore proposes a multi-objective, multi-mode, multi-commodity, and
Earthquake resistant design of structures
International Nuclear Information System (INIS)
Choi, Chang Geun; Kim, Gyu Seok; Lee, Dong Geun
1990-02-01
This book covers the occurrence of earthquakes and earthquake damage analysis, the equivalent static analysis method and its application, dynamic analysis methods such as time-history analysis by mode superposition and by direct integration, and design spectrum analysis for earthquake-resistant design in Korea, including the analysis model and vibration modes, calculation of base shear, calculation of story seismic loads, and combination of analysis results.
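The equivalent static method summarized here reduces to a base shear V = Cs * W distributed over stories, commonly in proportion to weight times height. A generic sketch; the coefficient, weights, and heights are illustrative values, not figures from any particular design code:

```python
def base_shear(seismic_coefficient, weight):
    """Equivalent static base shear V = Cs * W (generic code format)."""
    return seismic_coefficient * weight

def story_forces(V, weights, heights):
    """Distribute the base shear over stories in proportion to w_i * h_i,
    a common linear (first-mode) vertical distribution."""
    wh = [w * h for w, h in zip(weights, heights)]
    s = sum(wh)
    return [V * x / s for x in wh]

V = base_shear(0.12, 5000.0)                           # kN, illustrative
F = story_forces(V, [1000.0] * 5, [3.0, 6.0, 9.0, 12.0, 15.0])
```

The linear w*h distribution approximates the first vibration mode, which is why the forces grow toward the top of the building.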
Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Hara, Tatsuhiko
2017-07-01
Seismic activity occurred off western Kyushu, Japan, at the northern end of the Okinawa Trough on May 6, 2016 (14:11 JST), 22 days after the onset of the 2016 Kumamoto earthquake sequence. The area is adjacent to the Beppu-Shimabara graben, where the 2016 Kumamoto earthquake sequence occurred. In the area off western Kyushu, an M7.1 earthquake also occurred on November 14, 2015 (5:51 JST), and a tsunami with a height of 0.3 m was observed. In order to better understand this seismic activity and these tsunamis, it is necessary to study the sources of, and strong motions due to, earthquakes in the area off western Kyushu. For such studies, validation of synthetic waveforms is important because of the presence of the oceanic water layer and thick sediments in the source area. We show the validation results for synthetic waveforms through nonlinear inversion analyses of small earthquakes (M 5). We use a land-ocean unified 3D structure model, a 3D HOT finite-difference method ("HOT" stands for Heterogeneity, Ocean layer and Topography), and multi-graphics-processing-unit (GPU) acceleration to simulate wave propagation. We estimate the first-motion augmented moment tensor (FAMT) solution based on both the long-period surface waves and short-period body waves. The FAMT solutions systematically shift landward by about 13 km, on average, from the epicenters determined by the Japan Meteorological Agency. The synthetics provide good reproductions of the observed full waveforms with periods of 10 s or longer. On the other hand, for waveforms with shorter periods (down to 4 s), the later surface waves are not reproduced well, while the first parts of the waveforms (comprising P- and S-waves) are reproduced to some extent. These results indicate that the current 3D structure model around Kyushu is effective for generating full waveforms, including surface waves with periods of about 10 s or longer. Based on these findings, we analyze the 2015 M7.1 event using the cross
Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake
Durukal, E.; Sesetyan, K.; Erdik, M.
2009-04-01
The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and approximately one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure, and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering incurred building losses in Istanbul upon the occurrence of a large earthquake. The annualized earthquake losses in Istanbul are between 140-300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers, and development of a claim-processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would be calculated directly on the basis of indexed ground motion levels and damage. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing
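A parametric cover of the kind proposed pays out from a measured ground-motion index rather than from adjusted claims, which is what removes the claim-processing step. A minimal sketch; the tier triggers and payout fractions are invented for illustration:

```python
def parametric_payout(pga, tiers):
    """Payout indexed to measured ground motion (e.g. PGA in g) rather than
    assessed damage: the highest tier whose trigger is reached pays out."""
    payout = 0.0
    for trigger, amount in sorted(tiers):
        if pga >= trigger:
            payout = amount
    return payout

# Hypothetical three-tier policy: (trigger in g, payout as fraction of limit).
tiers = [(0.1, 0.25), (0.2, 0.5), (0.4, 1.0)]
```

Because the payout depends only on the recorded index, settlement can be near-immediate, at the cost of basis risk: actual damage may differ from what the index implies.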
Mandal, Sudip; Saha, Goutam; Pal, Rajat Kumar
2017-08-01
Correct inference of genetic regulation inside a cell from biological databases such as time-series microarray data is one of the greatest challenges of the post-genomic era for biologists and researchers. The recurrent neural network (RNN) is one of the most popular and simple approaches to model the dynamics and to infer correct dependencies among genes. Inspired by the behavior of social elephants, we propose a new metaheuristic, the Elephant Swarm Water Search Algorithm (ESWSA), to infer gene regulatory networks (GRNs). This algorithm is mainly based on the water search strategy of intelligent and social elephants during drought, utilizing different types of communication techniques. Initially, the algorithm is tested against benchmark small- and medium-scale artificial genetic networks, with and without different noise levels, and its efficiency is observed in terms of parametric error, minimum fitness value, execution time, accuracy of prediction of true regulations, etc. Next, the proposed algorithm is tested against real gene expression data of the Escherichia coli SOS network, and the results are compared with other state-of-the-art optimization methods. The experimental results suggest that ESWSA is very efficient for the GRN inference problem and performs better than other methods in many ways.
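The paper defines its own elephant-inspired update rules; purely to illustrate the class of method, here is a generic population-based stochastic search of the same flavor, minimizing a sphere test function. Every detail below is a stand-in, not the ESWSA algorithm itself:

```python
import random

def swarm_search(f, dim, bounds, n=20, iters=300, seed=7):
    """Generic swarm-style random search: members propose Gaussian moves
    around the best-known point and keep improvements."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(p) for p in pop]
    b = min(range(n), key=lambda i: fit[i])
    best, best_f = pop[b][:], fit[b]
    step = 0.1 * (hi - lo)
    for _ in range(iters):
        for i in range(n):
            cand = [best[d] + rng.gauss(0.0, step) for d in range(dim)]
            fc = f(cand)
            if fc < fit[i]:                 # greedy acceptance per member
                pop[i], fit[i] = cand, fc
                if fc < best_f:
                    best, best_f = cand[:], fc
    return best, best_f

sphere = lambda v: sum(x * x for x in v)    # toy fitness, minimum 0 at origin
best, best_f = swarm_search(sphere, dim=4, bounds=(-5.0, 5.0))
```

In GRN inference, the fitness would instead be the squared error between observed expression trajectories and those produced by candidate RNN parameters.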
Recurrent network models for perfect temporal integration of fluctuating correlated inputs.
Directory of Open Access Journals (Sweden)
Hiroshi Okamoto
2009-06-01
Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
Recurrence in affective disorder
DEFF Research Database (Denmark)
Kessing, L V; Olsen, E W; Andersen, P K
1999-01-01
The risk of recurrence in affective disorder is influenced by the number of prior episodes and by a person's tendency toward recurrence. Newly developed frailty models were used to estimate the effect of the number of episodes on the rate of recurrence, taking into account individual frailty toward recurrence. The study base was the Danish psychiatric case register of all hospital admissions for primary affective disorder in Denmark during 1971-1993. A total of 20,350 first-admission patients were discharged with a diagnosis of major affective disorder. For women with unipolar disorder and for all kinds of patients with bipolar disorder, the rate of recurrence was affected by the number of prior episodes even when the effect was adjusted for individual frailty toward recurrence. No effect of episodes but a large effect of the frailty parameter was found for unipolar men. The authors concluded...
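The frailty idea can be illustrated by simulation: giving each person a multiplicative gamma-distributed tendency toward recurrence produces overdispersed episode counts, which a naive analysis misreads as an effect of prior episodes. A sketch in which the rates, variance, and follow-up length are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n, base_rate, frailty_var, follow_up = 2000, 0.5, 1.0, 10.0

# Each person gets a multiplicative gamma frailty with mean 1; high-frailty
# people accumulate more episodes, so episode count predicts future recurrence
# even though each person's underlying rate never changes with episode number.
z = rng.gamma(shape=1.0 / frailty_var, scale=frailty_var, size=n)
counts = rng.poisson(z * base_rate * follow_up)

# The marginal counts are overdispersed relative to Poisson
# (var = mean + frailty_var * mean^2), the signature a frailty model absorbs.
mean, var = counts.mean(), counts.var()
```

Disentangling this selection effect from a genuine per-episode rate increase is exactly what the register analysis uses frailty models for.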
Clustering and interpretation of local earthquake tomography models in the southern Dead Sea basin
Bauer, Klaus; Braeuer, Benjamin
2016-04-01
The Dead Sea transform (DST) marks the boundary between the Arabian and African plates. Ongoing left-lateral relative plate motion and strike-slip deformation started in the Early Miocene (20 Ma) and has produced a total offset of 107 km to the present. The Dead Sea basin (DSB), located in the central part of the DST, is one of the largest pull-apart basins in the world. It was formed by a step-over of different fault strands at a major segment boundary of the transform fault system. The basin development was accompanied by deposition of clastics and evaporites and subsequent salt diapirism. Ongoing deformation within the basin and activity of the boundary faults are indicated by increased seismicity. The internal architecture of the DSB and the crustal structure around the DST have been the subject of several large scientific projects carried out since 2000. Here we report on a local earthquake tomography study from the southern DSB. In 2006-2008, a dense seismic network consisting of 65 stations was operated for 18 months in the southern part of the DSB and surrounding regions. Altogether, 530 well-constrained seismic events with 13,970 P- and 12,760 S-wave arrival times were used for a travel-time inversion for Vp and Vp/Vs velocity structure and the seismicity distribution. The workflow included 1D inversion, 2.5D and 3D tomography, and resolution analysis. We demonstrate a possible strategy for integrating several tomographic models, such as Vp, Vs, and Vp/Vs, into a combined lithological interpretation. We analyzed the tomographic models derived by 2.5D inversion using neural network clustering techniques. The method allows us to identify major lithologies by their petrophysical signatures. Remapping the clusters into the subsurface reveals the distribution of basin sediments, prebasin sedimentary rocks, and crystalline basement. The DSB shows an asymmetric structure with thickness variation from 5 km in the west to 13 km in the east. Most importantly, a well-defined body
García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.
2009-04-01
In recent years, interest in landslide hazard assessment studies has increased substantially. Such studies are appropriate for evaluation and mitigation plan development in landslide-prone areas. Several techniques are available for landslide hazard research at a regional scale. Generally, they can be classified into two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinion and represent hazard levels in descriptive terms. Quantitative methods, on the other hand, are objective and commonly used because of the correlation between instability factors and the location of landslides. Within this group, statistical approaches and newer heuristic techniques based on artificial intelligence (artificial neural networks (ANNs), fuzzy logic, etc.) provide rigorous analysis for assessing landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movement, and characteristic factors used. We analysed and compared an approach for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANNs) with a back-propagation learning algorithm. An application was developed for El Salvador, a country in Central America where earthquake-triggered landslides are a common phenomenon. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 13 January 2001 earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is the dependent variable. The results of the landslide susceptibility analysis are checked using landslide
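Logistic regression for susceptibility mapping fits P(landslide) = 1/(1 + exp(-Xw)) from instability factors. A from-scratch gradient-descent sketch on synthetic data; the two predictors and their coefficients are invented stand-ins for the study's variables:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=5000):
    """Plain logistic regression fitted by gradient descent (bias in column 0)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(5)
n = 2000
slope = rng.uniform(0.0, 1.0, n)             # normalized slope gradient (assumed)
weak = rng.integers(0, 2, n).astype(float)   # hypothetical weak-lithology flag
logit = -1.0 + 2.0 * slope + 1.0 * weak      # "true" susceptibility model
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
X = np.column_stack([np.ones(n), slope, weak])
w = fit_logistic(X, y)
```

Applied per map cell, the fitted probabilities are then binned into the susceptibility classes that such studies report; normalizing the predictors, as done here, keeps the gradient steps stable.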
Evidence for a twelfth large earthquake on the southern Hayward fault in the past 1900 years
Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.
2010-01-01
We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support to the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
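The recurrence statistics quoted above (mean RI, 1σ scatter, SEM, coefficient of variation) can be reproduced from any event chronology; the sketch below uses made-up event ages, not the published Tyson Lagoon dates, so the numbers differ from the abstract's:

```python
import math
import statistics as st

# Paleoearthquake ages (years A.D.) -- illustrative stand-ins only.
ages = [91, 250, 420, 580, 700, 900, 1050, 1200, 1350, 1500, 1680, 1868]
intervals = [b - a for a, b in zip(ages, ages[1:])]

mean_ri = st.mean(intervals)            # mean recurrence interval
sd_ri = st.stdev(intervals)             # 1-sigma scatter of the intervals
sem = sd_ri / len(intervals) ** 0.5     # standard error of the mean
cov = sd_ri / mean_ri                   # coefficient of variation

# 30-yr probability under a simple Poisson (random-occurrence) model,
# the baseline against which renewal models are compared:
p30_poisson = 1.0 - math.exp(-30.0 / mean_ri)
```

A low coefficient of variation (regular timing) is what favors time-dependent renewal models over the Poisson baseline.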
Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy
2013-01-01
The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators, shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where the similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in the neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of the physiological central pattern generator for gaining insights into basic research and developing clinical applications.
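Since the network was trained on the first three harmonics of the sagittal elevation angles, the target signals have the form of a short Fourier series; a minimal sketch of such a reconstruction (the harmonic amplitudes and stride rate below are hypothetical):

```python
import math

def reconstruct_angle(t, f0, harmonics):
    """Rebuild an elevation-angle trace from its first few harmonics:
    theta(t) = sum_k a_k*cos(2*pi*k*f0*t) + b_k*sin(2*pi*k*f0*t)."""
    return sum(a * math.cos(2 * math.pi * k * f0 * t) +
               b * math.sin(2 * math.pi * k * f0 * t)
               for k, (a, b) in enumerate(harmonics, start=1))

# Hypothetical thigh-angle harmonics (degrees) at a 1 Hz stride rate:
# (a_k, b_k) for k = 1, 2, 3.
harmonics = [(20.0, 5.0), (4.0, -2.0), (1.0, 0.5)]
theta0 = reconstruct_angle(0.0, 1.0, harmonics)
```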
The effects of freeze-dried Ganoderma lucidum mycelia on a recurrent oral ulceration rat model.
Xie, Ling; Zhong, Xiaohong; Liu, Dongbo; Liu, Lin; Xia, Zhilan
2017-12-01
Conventional scientific studies have supported the use of polysaccharides and β-glucans from a number of fungi, including Ganoderma lucidum, for the treatment of recurrent oral ulceration (ROU). The aim of the present study was to evaluate whether freeze-dried powder from G. lucidum mycelia (FDPGLM) prevents ROU in rats. A Sprague-Dawley (SD) rat model with ROU was established by autoantigen injection. The ROU rats were treated with three different dosages of FDPGLM and prednisone acetate (PA), and their effects were evaluated according to the clinical therapeutic evaluation indices of ROU. High-dose FDPGLM induced significantly prolonged total intervals and a reduction in the number of ulcers and ulcer areas, indicating that the treatment was effective in preventing ROU. Enzyme-linked immunosorbent assay (ELISA) showed that high-dose FDPGLM significantly enhanced serum transforming growth factor-β1 (TGF-β1) levels, whereas it reduced those of interleukin-6 (IL-6) and interleukin-17 (IL-17). Flow cytometry (FCM) showed that the proportion of CD4+CD25+Foxp3+ (forkhead box P3) regulatory T cells (Tregs) significantly increased by 1.5-fold in the high-dose FDPGLM group compared to that in the rat model group (P < 0.01). The application of middle- and high-dose FDPGLM also resulted in the upregulation of Foxp3 and downregulation of retinoid-related orphan receptor gamma t (RORγt) mRNA. High-dose FDPGLM possibly plays a role in ROU by promoting CD4+CD25+Foxp3+ Treg and inhibiting T helper cell 17 differentiation. This study also shows that FDPGLM may be potentially used as a complementary and alternative medicine treatment scheme for ROU.
Nowcasting Earthquakes and Tsunamis
Rundle, J. B.; Turcotte, D. L.
2017-12-01
The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data, which generally have known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk.
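A minimal sketch of the counting idea behind nowcasting: rank the number of small earthquakes observed since the last large one against the historical counts of small events between consecutive large events (the counts below are hypothetical, not drawn from the global catalog):

```python
import bisect

def nowcast_score(counts_between_large, current_count):
    """Earthquake potential score: the empirical-CDF rank of the number of
    small events since the last large event, among historical counts of
    small events between consecutive large events. Near 1.0 means the
    region is late in the cycle relative to its history."""
    hist = sorted(counts_between_large)
    return bisect.bisect_right(hist, current_count) / len(hist)

# Hypothetical history: small-event counts between successive M>=6 events.
history = [120, 95, 210, 150, 80, 175, 130, 160]
score = nowcast_score(history, 165)   # 165 small quakes since the last M>=6
```

Because the score is a rank, it needs no fitted model parameters, which is the first advantage the abstract lists.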
International Nuclear Information System (INIS)
Baeckblom, Goeran; Munier, Raymond
2002-06-01
their original values within a few months. The density of the buffer around the canister is high enough to prevent liquefaction due to shaking. The predominant brittle deformation of a rock mass will be reactivation of pre-existing fractures. The data emanating from faults intersecting tunnels show that creation of new fractures is confined to the immediate vicinity of the reactivated faults and that deformation in the host rock decreases rapidly with distance from the fault. By selection of appropriate respect distances, the probability of canister damage due to faulting is further lowered. Data from deep South African mines show that rocks in an environment with no pre-existing faults, low fracture densities and high stresses might generate faults in a previously unfractured rock mass. The Swedish repository will be located in fractured bedrock at intermediate depth, 400-700 m, where stresses are moderate. The conditions needed to create these peculiar mining-induced features will not prevail in the repository environment. Should such faults nevertheless be created, the canister is designed to withstand a shear deformation of at least 0.1 m. This corresponds to a magnitude 6 earthquake along a fault with a length of at least 1 km, which is highly unlikely. Respect distance has to be site and fault specific. Field evidence gathered in this study indicates that respect distances may be considerably smaller (tens to hundreds of m) than predicted by numerical modelling (thousands of m). However, the accumulated deformation during repeated future seismic events has to be accounted for.
Geophysical Anomalies and Earthquake Prediction
Jackson, D. D.
2008-12-01
Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical anomalies not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power-law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power-law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require
Asano, K.; Sekiguchi, H.; Iwata, T.; Yoshimi, M.; Hayashida, T.; Saomoto, H.; Horikawa, H.
2013-12-01
The three-dimensional velocity structure model for the Osaka sedimentary basin, southwest Japan, has been developed and improved over decades based on many kinds of geophysical exploration (e.g., Kagawa et al., 1993; Horikawa et al., 2003; Iwata et al., 2008). Recently, our project (Sekiguchi et al., 2013) developed a new three-dimensional velocity model for strong motion prediction of the Uemachi fault earthquake in the Osaka basin, considering both geophysical and geological information and adding newly obtained exploration data such as reflection surveys, microtremor surveys, and receiver function analysis (hereafter the UMC2013 model). On April 13, 2013, an inland earthquake of Mw 5.8 occurred on Awaji Island, close to the southwestern boundary of the aftershock area of the 1995 Kobe earthquake. The strong ground motions were densely observed at more than 100 stations in the basin. The ground motion lasted longer than four minutes in the Osaka urban area, where the bedrock depth is about 1-2 km. These long-duration ground motions are mainly due to the surface waves excited in this sedimentary basin, whereas the magnitude of this earthquake is moderate and the rupture duration is expected to be less than 5 s. In this study, we modeled long-period (more than 2 s) ground motions during this earthquake to check the performance of the present UMC2013 model and to obtain a better constraint on the attenuation factor of the sedimentary part of the basin. The seismic wave propagation in the region including the source and the Osaka basin is modeled by the finite difference method using a staggered grid to solve the elastodynamic equations. A domain of 90 km × 85 km × 25.5 km is modeled and discretized with a grid spacing of 50 m. Since the minimum S-wave velocity of the UMC2013 model is about 250 m/s, this calculation is valid for periods longer than about 1 s. The effect of attenuation is included in the form Q(f) = Q0(T0/T) proposed by Graves (1996). A PML is implemented in
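The statement that a 50 m grid with a 250 m/s minimum S-wave velocity is valid down to a period of about 1 s follows from the usual points-per-wavelength rule for finite-difference grids; a sketch assuming ~5 grid points per minimum wavelength (the exact count depends on the FD scheme's order):

```python
def max_resolved_frequency(vs_min, dx, points_per_wavelength=5):
    """Highest frequency a uniform FD grid resolves without strong numerical
    dispersion: f_max = Vs_min / (n * dx), i.e. the minimum wavelength
    Vs_min / f_max must span n grid points."""
    return vs_min / (points_per_wavelength * dx)

# Values from the abstract: Vs_min = 250 m/s, dx = 50 m.
f_max = max_resolved_frequency(250.0, 50.0)   # Hz; periods >= 1/f_max are usable
```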
Odum, J.K.; Stephenson, W.J.; Shedlock, K.M.; Pratt, T.L.
1998-01-01
The February 7, 1812, New Madrid, Missouri, earthquake (M [moment magnitude] 8) was the third and final large-magnitude event to rock the northern Mississippi Embayment during the winter of 1811-1812. Although ground shaking was so strong that it rang church bells, stopped clocks, buckled pavement, and rocked buildings up and down the eastern seaboard, little coseismic surface deformation exists today in the New Madrid area. The fault(s) that ruptured during this event have remained enigmatic. We have integrated geomorphic data documenting differential surficial deformation (supplemented by historical accounts of surficial deformation and earthquake-induced Mississippi River waterfalls and rapids) with the interpretation of existing and recently acquired seismic reflection data, to develop a tectonic model of the near-surface structures in the New Madrid, Missouri, area. This model consists of two primary components: a north-northwest-trending thrust fault and a series of northeast-trending, strike-slip, tear faults. We conclude that the Reelfoot fault is a thrust fault that is at least 30 km long. We also infer that tear faults in the near surface partitioned the hanging wall into subparallel blocks that have undergone differential displacement during episodes of faulting. The northeast-trending tear faults bound an area documented to have been uplifted at least 0.5 m during the February 7, 1812, earthquake. These faults also appear to bound changes in the surface density of epicenters that are within the modern seismicity, which is occurring in the stepover zone of the left-stepping right-lateral strike-slip fault system of the modern New Madrid seismic zone.
Directory of Open Access Journals (Sweden)
Faqi Diao
2015-10-01
The active collision at the Himalayas combines crustal shortening and thickening, associated with the development of hazardous seismogenic faults. The 2015 Kathmandu earthquake largely affected Kathmandu city and partially ruptured a previously identified seismic gap. With a magnitude of Mw 7.8 as determined by the GEOFON seismic network, the 25 April 2015 earthquake displays uplift of the Kathmandu basin constrained by interferometrically processed ALOS-2, RADARSAT-2 and Sentinel-1 satellite radar data. An area of about 7,000 km² in the basin showed ground uplift locally exceeding 2 m, and a similarly large area (approx. 9,000 km²) showed subsidence in the north, both of which could be simulated with a fault that is localized beneath the Kathmandu basin at a shallow depth of 5-15 km. Coulomb stress calculations reveal that the same fault adjacent to the Kathmandu basin experienced a stress increase, similar to that at sub-parallel faults of the thin-skinned nappes, exactly at the location where the largest aftershock occurred (Mw 7.3 on 12 May 2015). Therefore, this study provides insights into the shortening and uplift tectonics of the Himalayas and shows the stress redistribution associated with the earthquake.
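The Coulomb stress calculation referred to above evaluates the change in Coulomb failure stress on receiver faults, ΔCFS = Δτ + μ′Δσn; a minimal sketch with hypothetical stress changes (μ′ = 0.4 is a commonly assumed effective friction coefficient, not a value from this study):

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change (MPa): dCFS = d_tau + mu' * d_sigma_n,
    where d_tau is the shear stress change in the slip direction and
    d_sigma_n > 0 means unclamping of the receiver fault."""
    return d_tau + mu_eff * d_sigma_n

# Hypothetical changes resolved on a receiver fault, in MPa.
dcfs = coulomb_stress_change(0.15, 0.05)
```

A positive ΔCFS, as found here on faults near the Mw 7.3 aftershock location, indicates the mainshock moved those faults closer to failure.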
Energy Technology Data Exchange (ETDEWEB)
Miyamoto, Y.; Miura, K. (Kajima Corp., Tokyo (Japan)); Scott, R.; Hushmand, B. (California Inst. of Technology, California, CA (United States))
1992-09-30
For the purpose of studying the response of pile foundations in liquefiable soil deposits during earthquakes, a centrifugal loading system that can reproduce the stress conditions of soil in the actual ground is employed, and earthquake wave vibration tests are performed in dry and saturated sand layers using a pile foundation model equipped with 4 piles. In addition, the test results are analyzed by simulation using an analytical method that takes effective stress into consideration, to investigate the effectiveness of this analytical model. The experiments clarify that, for pile foundations in saturated sand, the bending moment of the pile and the response characteristics of the foundation are greatly affected by the lengthening of the period of the ground acceleration waveform and the increase in ground displacement due to excess pore water pressure buildup. It is shown that the analytical model of the pile foundation/ground system is appropriate, and that this analytical method is effective in evaluating the seismic response of pile foundations in nonlinear liquefiable soil. 23 refs., 21 figs., 3 tabs.
Haidar, Yarah M; Sahyouni, Ronald; Moshtaghi, Omid; Wang, Beverly Y; Djalilian, Hamid R; Middlebrooks, John C; Verma, Sunil P; Lin, Harrison W
2017-10-31
Laryngeal muscles (LMs) are controlled by the recurrent laryngeal nerve (RLN), injury of which can result in vocal fold (VF) paralysis (VFP). We aimed to introduce a bioelectric approach to selective stimulation of LMs and graded muscle contraction responses. Acute experiments in cats. The study included six anesthetized cats. In four cats, a multichannel penetrating microelectrode array (MEA) was placed into an uninjured RLN. For RLN injury experiments, one cat received a standardized hemostat-crush injury, and one cat received a transection-reapproximation injury 4 months prior to testing. In each experiment, three LMs (thyroarytenoid, posterior cricoarytenoid, and cricothyroid muscles) were monitored with an electromyographic (EMG) nerve integrity monitoring system. Electrical current pulses were delivered to each stimulating channel individually. Elicited EMG voltage outputs were recorded for each muscle. Direct videolaryngoscopy was performed for visualization of VF movement. Stimulation through individual channels led to selective activation of restricted nerve populations, resulting in selective contraction of individual LMs. Increasing current levels resulted in rising EMG voltage responses. Typically, activation of individual muscles was successfully achieved via single placement of the MEA by selection of appropriate stimulation channels. VF abduction was predominantly observed on videolaryngoscopy. Nerve histology confirmed injury in cases of RLN crush and transection experiments. We demonstrated the ability of a penetrating MEA to selectively stimulate restricted fiber populations within the feline RLN and selectively elicit contractions of discrete LMs in both acute and injury-model experiments, suggesting a potential role for intraneural MEA implantation in VFP management. NA Laryngoscope, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Charles Darwin's earthquake reports
Galiev, Shamil
2010-05-01
problems which began to be discussed only recently. Earthquakes often precede volcanic eruptions. According to Darwin, the earthquake-induced shock may be a common mechanism of the simultaneous eruptions of volcanoes separated by long distances. In particular, Darwin wrote that '… the elevation of many hundred square miles of territory near Concepcion is part of the same phenomenon, with that splashing up, if I may so call it, of volcanic matter through the orifices in the Cordillera at the moment of the shock;…'. According to Darwin, the crust is a system in which fractured zones and zones of seismic and volcanic activity interact. Darwin formulated the task of considering together the processes now studied as seismology and volcanology. However, the difficulties are such that the study of interactions between earthquakes and volcanoes began only recently, and his works on this had relatively little impact on the development of the geosciences. In this report, we discuss how the latest data on seismic and volcanic events support Darwin's observations and ideas about the 1835 Chilean earthquake. Material from researchspace.auckland.ac.nz/handle/2292/4474 is used. We show how modern mechanical tests from impact engineering and simple experiments with weakly cohesive materials also support his observations and ideas. On the other hand, we have developed a mathematical theory of earthquake-induced catastrophic wave phenomena. This theory allows us to explain the most important aspects of Darwin's earthquake reports. This is achieved through the simplification of the fundamental governing equations of the problems considered to strongly nonlinear wave equations. Solutions of these equations are constructed with the help of analytic and numerical techniques. The solutions can model different strongly nonlinear wave phenomena that arise in a variety of physical contexts. A comparison with relevant experimental observations is also presented.
Hearn, Elizabeth H.; Koltermann, Christine; Rubinstein, Justin R.
2018-01-01
We have developed groundwater flow models to explore the possible relationship between wastewater injection and the 12 November 2014 Mw 4.8 Milan, Kansas earthquake. We calculate pore pressure increases in the uppermost crust using a suite of models in which hydraulic properties of the Arbuckle Formation and the Milan earthquake fault zone, the Milan earthquake hypocenter depth, and fault zone geometry are varied. Given pre‐earthquake injection volumes and reasonable hydrogeologic properties, significantly increasing pore pressure at the Milan hypocenter requires that most flow occur through a conductive channel (i.e., the lower Arbuckle and the fault zone) rather than a conductive 3‐D volume. For a range of reasonable lower Arbuckle and fault zone hydraulic parameters, the modeled pore pressure increase at the Milan hypocenter exceeds a minimum triggering threshold of 0.01 MPa at the time of the earthquake. Critical factors include injection into the base of the Arbuckle Formation and proximity of the injection point to a narrow fault damage zone or conductive fracture in the pre‐Cambrian basement with a hydraulic diffusivity of about 3–30 m2/s. The maximum pore pressure increase we obtain at the Milan hypocenter before the earthquake is 0.06 MPa. This suggests that the Milan earthquake occurred on a fault segment that was critically stressed prior to significant wastewater injection in the area. Given continued wastewater injection into the upper Arbuckle in the Milan region, assessment of the middle Arbuckle as a hydraulic barrier remains an important research priority.
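The role of hydraulic diffusivity in this result can be illustrated with the textbook one-dimensional diffusion solution for a constant-pressure source feeding a conductive channel (a gross simplification of the authors' groundwater flow models; all numbers below are hypothetical, chosen only to be in the ranges the abstract discusses):

```python
import math

def pressure_rise(p0, x, t, diffusivity):
    """1-D pressure diffusion from a constant-pressure boundary:
    p(x, t) = p0 * erfc(x / (2 * sqrt(D * t))).
    x in m, t in s, diffusivity D in m^2/s, pressures in MPa."""
    return p0 * math.erfc(x / (2.0 * math.sqrt(diffusivity * t)))

# Hypothetical scenario: 0.2 MPa maintained at the injection point,
# a hypocenter 5 km along the channel, 2 years of injection, D = 10 m^2/s
# (within the 3-30 m^2/s range inferred for the fault damage zone).
t = 2 * 365.25 * 86400.0
dp = pressure_rise(0.2, 5000.0, t, 10.0)
```

With channelized flow at this diffusivity the modeled pressure rise at kilometers' distance can exceed the 0.01 MPa triggering threshold, whereas spreading the same flow through a 3-D volume would not.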
Ruiz Estrada, Mario Arturo; Yap, Su Fei; Park, Donghyun
2014-07-01
Natural hazards have a potentially large impact on economic growth, but measuring their economic impact is subject to a great deal of uncertainty. The central objective of this paper is to demonstrate a model--the natural disasters vulnerability evaluation (NDVE) model--that can be used to evaluate the impact of natural hazards on gross national product growth. The model is based on five basic indicators: natural hazards growth rates (αi), the national natural hazards vulnerability rate (ΩT), the natural disaster devastation magnitude rate (Π), the economic desgrowth rate (i.e. shrinkage of the economy) (δ), and the NHV surface. In addition, we apply the NDVE model to the north-east Japan earthquake and tsunami of March 2011 to evaluate its impact on the Japanese economy. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.
Naikwad, S. N.; Dudul, S. V.
2009-01-01
A focused time lagged recurrent neural network (FTLR NN) with a gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear operations where the reaction is exothermic. It is noticed from the literature review that process control of CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes tempora...
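The gamma memory that gives the FTLR NN its short-term memory is a cascade of leaky integrators; a minimal sketch of the standard tap recursion (the memory parameter μ and the tap order below are illustrative, not values from this study):

```python
def gamma_memory(signal, order=3, mu=0.5):
    """Gamma memory filter bank: each tap k follows
    x_k[t] = (1 - mu) * x_k[t-1] + mu * x_{k-1}[t-1], with x_0 = input.
    Higher taps hold progressively older, smoother views of the signal,
    giving the network a tunable time-lagged context window."""
    taps = [[0.0] * len(signal) for _ in range(order + 1)]
    taps[0] = list(signal)
    for t in range(1, len(signal)):
        for k in range(1, order + 1):
            taps[k][t] = (1 - mu) * taps[k][t - 1] + mu * taps[k - 1][t - 1]
    return taps

# Impulse response: watch the memory spread the input over time.
taps = gamma_memory([1.0, 0.0, 0.0, 0.0, 0.0], order=2, mu=0.5)
```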
Assessment of erosion hazard after recurrent fires with the RUSLE 3D model
Vecín-Arias, Daniel; Palencia, Covadonga; Fernández Raga, María
2016-04-01
The objective of this work is to calculate whether there is more soil erosion after the recurrence of several forest fires in an area. To that end, an area of 22,130 ha in the northwest of the Iberian Peninsula was studied because it has a high frequency of fires. The assessment of erosion hazard was calculated at several times using Geographic Information Systems (GIS). The area has been divided into several plots according to the number of times they have been burnt in the past 15 years. Given the complexity of making a detailed study of such a large area, and because information is not available annually, it was necessary to select the most interesting moments. In August 2012 the most aggressive and extensive fire in the area occurred, so the study focused on the erosion hazard for 2011 and 2014, the dates before and after the 2012 fire for which orthophotos are available. The RUSLE3D model (Revised Universal Soil Loss Equation) was used to calculate maps of erosion losses. This model improves on the traditional USLE (Wischmeier and Smith, 1965) because it accounts for the influence of concavity/convexity (Renard et al., 1997) and improves the estimation of the slope factor LS (Renard et al., 1991). It is also one of the most commonly used models in the literature (Mitasova et al., 1996; Terranova et al., 2009). The tools used are free and accessible, using the GIS "gvSIG" (http://www.gvsig.com/es), and the metadata were taken from the Spatial Data Infrastructure of Spain webpage (IDEE, 2016). However, the RUSLE model has many critics, some of whom suggest that it only serves for comparisons between areas, and not for the calculation of absolute soil loss; these authors argue that in field measurements the actual recovered eroded soil can amount to about one-third of the values obtained with the model (Šúri et al., 2002). The study of the area shows that the error detected by the critics could come from
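The RUSLE family of models computes cell-by-cell soil loss as the factor product A = R·K·LS·C·P, with RUSLE3D supplying an upgraded LS factor; a toy raster sketch (all factor values hypothetical) showing how burnt cells with reduced cover raise predicted loss:

```python
import numpy as np

def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE cell-by-cell soil loss A = R * K * LS * C * P (t/ha/yr):
    R rainfall erosivity, K soil erodibility, LS topographic factor
    (the component RUSLE3D refines), C cover-management, P practice."""
    return R * K * LS * C * P

# Toy 2x2 rasters; the right column represents burnt plots where loss of
# vegetation cover raises the C factor.
R = np.full((2, 2), 1200.0)
K = np.full((2, 2), 0.03)
LS = np.array([[1.0, 2.5], [0.8, 3.0]])
C = np.array([[0.01, 0.25], [0.01, 0.25]])
P = np.ones((2, 2))
A = rusle_soil_loss(R, K, LS, C, P)
```

Comparing such maps for pre- and post-fire dates (here 2011 vs 2014) is exactly the between-area comparison that even the model's critics accept.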
Ling, Hong; Samarasinghe, Sandhya; Kulasiri, Don
2013-12-01
Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are difficulty in kinetic parameter estimation and numerical solution of a large number of equations, making them more suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents concentration change of one molecule represented by an ODE. Weights of the RNN correspond to kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system - a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we presented a modified form of the RNN that estimates parameters and captures systems dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with the current biological knowledge of this system. As more
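The core construction, one neuron per molecule with an ODE for its concentration change, can be sketched as a continuous-time RNN integrated with forward Euler (the weights and the two-unit negative-feedback motif below are illustrative, not the trained p53-Mdm2 parameters):

```python
import math

def simulate_rnn_ode(w, b, tau, x0, dt=0.01, steps=5000):
    """Continuous-time RNN: dx_i/dt = (-x_i + sum_j w[i][j]*sigma(x_j) + b_i) / tau_i.
    Each unit's state stands for one molecular concentration; the weights w
    play the role of kinetic parameters adjusted during training."""
    sigma = lambda v: 1.0 / (1.0 + math.exp(-v))
    x = list(x0)
    traj = [list(x)]
    n = len(x)
    for _ in range(steps):
        dx = [(-x[i] + sum(w[i][j] * sigma(x[j]) for j in range(n)) + b[i]) / tau[i]
              for i in range(n)]
        x = [x[i] + dt * dx[i] for i in range(n)]
        traj.append(list(x))
    return traj

# Two units in a negative-feedback loop (activator-repressor motif, in the
# spirit of p53 driving Mdm2 and Mdm2 inhibiting p53).
w = [[0.0, -8.0], [8.0, 0.0]]
b = [4.0, -4.0]
traj = simulate_rnn_ode(w, b, tau=[1.0, 2.0], x0=[0.1, 0.0])
```

Training then amounts to adjusting w and b so the trajectories match measured concentration time courses, which is the parameter-estimation problem the article addresses.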
Zhong, Q.; Shi, B.
2011-12-01
The Ms 7.8 earthquake that struck Tangshan, China, on 28 July 1976 caused at least 240,000 deaths. The mainshock was followed by two large aftershocks: an Ms 7.1 about 15 hr after the mainshock, and an Ms 6.9 on 15 November. The aftershock sequence continues to date, making the regional seismic activity rate around the Tangshan main fault much higher than before the main event. If these aftershocks are included in the local main event catalog for PSHA calculation purposes, the resulting seismic hazard will be overestimated in this region and underestimated elsewhere. However, it is always difficult for seismologists to accurately determine the time duration of aftershock sequences and to identify aftershocks in a main event catalog. In this study, using theoretical inference and the empirical relations given by Dieterich, we derive the plausible time length of the aftershock sequence of the Ms 7.8 Tangshan earthquake. The aftershock duration from a log-log regression approach is about 120 years according to the empirical Omori relation. In the Dieterich approach, the aftershock duration is a function of the remote shear stressing rate, the normal stress acting on the fault plane, and the fault frictional constitutive parameters. In general, the shear stressing rate can be estimated in three ways: 1. It can be written as a function of the static stress drop and a mean earthquake recurrence time; in this case, the time length of the aftershock sequence is about 70-100 years, although the recurrence time carries a great deal of uncertainty. 2. Ziv and Rubin derived a general relation between shear stressing rate, fault slip speed and fault width under the assumption that aftershock duration does not scale with mainshock magnitude; from this consideration, the aftershock duration is about 80 years. 3. The shear stressing rate can also be
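One common operational definition behind such duration estimates: the aftershock sequence ends when the modified Omori rate n(t) = K/(c + t)^p decays to the regional background rate. A sketch with hypothetical parameters (not values fit to the Tangshan sequence):

```python
def aftershock_duration(K, c, p, background_rate):
    """Time (same units as c) at which the modified Omori rate
    n(t) = K / (c + t)**p falls to the background rate, i.e. the point
    beyond which aftershocks are statistically unresolvable from
    background seismicity."""
    return (K / background_rate) ** (1.0 / p) - c

# Hypothetical values: K and background_rate in events/day, times in days.
T_days = aftershock_duration(K=2000.0, c=0.5, p=1.1, background_rate=0.05)
T_years = T_days / 365.25
```

Because p is close to 1, the duration is extremely sensitive to K and the background rate, which is one reason the abstract's three approaches give a spread of 70-120 years.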
Energy Technology Data Exchange (ETDEWEB)
Aagaard, B T; Graves, R W; Rodgers, A; Brocher, T M; Simpson, R W; Dreger, D; Petersson, N A; Larsen, S C; Ma, S; Jachens, R C
2009-11-04
We simulate long-period (T > 1.0-2.0 s) and broadband (T > 0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7-7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area with about 50% of the urban area experiencing MMI VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland and 2007 Mw 4.5 Alum Rock earthquakes show that the USGS Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute at least some of this difference to the relatively narrow width of the Hayward fault ruptures. The simulations suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by including a dependence on the rupture speed and increasing the areal extent of rupture directivity with period. The simulations also indicate that the NGA relations may under-predict amplification in shallow sedimentary basins.
Catchings, Rufus D.; Dixit, M.M.; Goldman, Mark R.; Kumar, S.
2015-01-01
The Koyna-Warna area of India is one of the best worldwide examples of reservoir-induced seismicity, with the distinction of having generated the largest known induced earthquake (M6.3 on 10 December 1967) and persistent moderate-magnitude (>M5) events for nearly 50 years. Yet the fault structure and tectonic setting that have accommodated the induced seismicity are poorly known, in part because the seismic events occur beneath a thick sequence of basalt layers. On the basis of the alignment of earthquake epicenters over an ~50-year period, lateral variations in focal mechanisms, upper-crustal tomographic velocity images, geophysical data (aeromagnetic, gravity, and magnetotelluric), geomorphic data, and correlation with similar structures elsewhere, we suggest that the Koyna-Warna area lies within a right step between northwest-trending, right-lateral faults. The sub-basalt basement may form a local structural depression (pull-apart basin) caused by extension within the step-over zone between the right-lateral faults. Our postulated model accounts for the observed pattern of normal faulting in a region that is dominated by north-south directed compression. The right-lateral faults extend well beyond the immediate Koyna-Warna area, possibly suggesting a more extensive zone of seismic hazards for central India. Induced seismic events have been observed in many places worldwide, but relatively large-magnitude induced events are less common because critically stressed, preexisting structures are a necessary component. We suggest that releasing bends and fault step-overs like those we postulate for the Koyna-Warna area may serve as an ideal tectonic environment for generating moderate- to large-magnitude induced (reservoir, injection, etc.) earthquakes.
International Nuclear Information System (INIS)
Husen, S.; Clinton, J. F.; Kissling, E.
2011-01-01
One-dimensional (1D) velocity models are still widely used for computing earthquake locations at seismological centers or in regions where three-dimensional (3D) velocity models are not available due to the lack of data of sufficiently high quality. The concept of the minimum 1D model with appropriate station corrections provides a framework to compute initial hypocenter locations and seismic velocities for local earthquake tomography. Since a minimum 1D model represents a solution to the coupled hypocenter-velocity problem it also represents a suitable velocity model for earthquake location and data quality assessment, such as evaluating the consistency in assigning pre-defined weighting classes and average picking error. Nevertheless, the use of a simple 1D velocity structure in combination with station delays raises the question of how appropriate the minimum 1D model concept is when applied to complex tectonic regions with significant three-dimensional (3D) variations in seismic velocities. In this study we compute one regional minimum 1D model and three local minimum 1D models for selected subregions of the Swiss Alpine region, which exhibits a strongly varying Moho topography. We compare the regional and local minimum 1D models in terms of earthquake locations and data quality assessment to measure their performance. Our results show that the local minimum 1D models provide more realistic hypocenter locations and better data fits than a single model for the Alpine region. We attribute this to the fact that in a local minimum 1D model local and regional effects of the velocity structure can be better separated. Consequently, in tectonically complex regions, minimum 1D models should be computed in sub-regions defined by similar structure, if they are used for earthquake location and data quality assessment. (authors)
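The role of station corrections in a 1D location model can be illustrated with a toy grid-search epicenter locator. All station positions, velocities, and delay values below are hypothetical; the point is only that per-station delay terms absorb traveltime signal from unmodeled 3D structure:

```python
import math

# Toy 2D grid-search epicenter location in a uniform-velocity "1D" model.
# Station delay corrections absorb traveltimes from unmodeled 3D structure;
# all coordinates, velocities, and delays are hypothetical.
stations = [(0.0, 30.0), (40.0, 0.0), (-25.0, 20.0), (10.0, -35.0)]  # km
v = 6.0                                # uniform P velocity (km/s)
true_src = (5.0, 8.0)
delays = [0.4, -0.3, 0.35, -0.25]      # site terms (s) from 3D structure

def tt(src, sta):
    return math.hypot(sta[0] - src[0], sta[1] - src[1]) / v

# "Observed" arrival times include the unmodeled site delays
obs = [tt(true_src, s) + d for s, d in zip(stations, delays)]

def locate(corrections):
    best, best_rss = None, float("inf")
    for ix in range(-50, 51):          # 1 km grid search
        for iy in range(-50, 51):
            src = (float(ix), float(iy))
            rss = sum((o - tt(src, s) - c) ** 2
                      for o, s, c in zip(obs, stations, corrections))
            if rss < best_rss:
                best, best_rss = src, rss
    return best

def mislocation(p):
    return math.hypot(p[0] - true_src[0], p[1] - true_src[1])

loc_corr = locate(delays)              # corrections applied
loc_raw = locate([0.0] * 4)            # corrections ignored
print(mislocation(loc_corr), mislocation(loc_raw))
```

With the site terms included as station corrections, the grid search recovers the true epicenter; ignoring them typically maps the unmodeled delays into a location bias, which is the effect the minimum 1D model concept is designed to mitigate.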
Energy Technology Data Exchange (ETDEWEB)
Cappa, F.; Rutqvist, J.; Yamamoto, K.
2009-05-15
In Matsushiro, central Japan, a series of more than 700,000 earthquakes occurred over a 2-year period (1965-1967) associated with a strike-slip faulting sequence. This swarm of earthquakes resulted in ground surface deformation, cracking of the topsoil, and enhanced spring outflows with changes in chemical composition, as well as carbon dioxide (CO{sub 2}) degassing. Previous investigations of the Matsushiro earthquake swarm have suggested that migration of underground water and/or magma may have had a strong influence on the swarm activity. In this study, employing coupled multiphase flow and geomechanical modelling, we show that the observed crustal deformation and seismicity may have been driven by upwelling of deep CO{sub 2}-rich fluids around the intersection of two fault zones - the regional East Nagano earthquake fault and the conjugate Matsushiro fault. We show that the observed spatial evolution of seismicity along the two faults, and the magnitude of surface uplift, are convincingly explained by a few MPa of pressurization from the upwelling fluid within the critically stressed crust - a crust under a strike-slip stress regime near the frictional strength limit. Our analysis indicates that the most important cause of triggering of seismicity during the Matsushiro swarm was the fluid pressurization, with the associated reduction in effective stress and strength in fault segments that were initially near critically stressed for shear failure. Moreover, our analysis indicates that a two-order-of-magnitude permeability enhancement in ruptured fault segments may be necessary to match the observed time evolution of surface uplift. We conclude that our hydromechanical modelling study of the Matsushiro earthquake swarm shows a clear connection between earthquake rupture, deformation, stress, and permeability changes, as well as large-scale fluid flow related to degassing of CO{sub 2} in the shallow seismogenic crust. Thus, our study provides further evidence of the
Janahi, I A; Elidemir, O; Shardonofsky, F R; Abu-Hassan, M N; Fan, L L; Larsen, G L; Blackburn, M R; Colasurdo, G N
2000-12-01
Recurrent aspiration of milk into the respiratory tract has been implicated in the pathogenesis of a variety of inflammatory lung disorders including asthma. However, the lack of animal models of aspiration-induced lung injury has limited our knowledge of the pathophysiological characteristics of this disorder. This study was designed to evaluate the effects of recurrent milk aspiration on airway mechanics and lung cells in a murine model. Under light anesthesia, BALB/c mice received daily intranasal instillations of whole cow's milk (n = 7) or sterile physiologic saline (n = 9) for 10 d. Respiratory system resistance (Rrs) and dynamic elastance (Edyn,rs) were measured in anesthetized, tracheotomized, paralyzed and mechanically ventilated mice 24 h after the last aspiration of milk. Rrs and Edyn,rs were derived from transrespiratory and plethysmographic pressure signals. In addition, airway responses to increasing concentrations of i.v. methacholine (Mch) were determined. Airway responses were measured in terms of PD(100) (dose of Mch causing 100% increase from baseline Rrs) and Rrs,max (% increase from baseline at the maximal plateau response) and expressed as % control (mean +/- SE). We found that recurrent milk aspiration did not affect Edyn,rs or baseline Rrs values. However, airway responses to Mch were increased after milk aspiration when compared with control mice. These changes in airway mechanics were associated with an increased percentage of lymphocytes and eosinophils in the bronchoalveolar lavage, mucus production, and lung inflammation. Our findings suggest that recurrent milk aspiration leads to alterations in airway function, lung eosinophilia, and goblet cell hyperplasia in a murine model.
Connecting slow earthquakes to huge earthquakes
Obara, Kazushige; Kato, Aitaro
2016-01-01
Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
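At its core, a kinematic source inversion is a linear problem d = G m: observed surface displacements d are related to slip m on discretized fault patches through Green's functions G. A minimal noise-free sketch with two slip patches and made-up Green's functions (not any workshop benchmark) recovers slip by least squares:

```python
# Kinematic source inversion in its simplest linear form: d = G m,
# solved via the normal equations. G and m_true are synthetic values
# chosen only for illustration.
G = [
    [0.9, 0.1],
    [0.5, 0.4],
    [0.2, 0.8],
    [0.1, 1.0],
]
m_true = [1.5, 0.7]  # "true" slip on the two patches (m)
d = [sum(Gi[j] * m_true[j] for j in range(2)) for Gi in G]

# Normal equations (G^T G) m = G^T d, solved with a 2x2 inverse
gtg = [[sum(G[k][i] * G[k][j] for k in range(len(G))) for j in range(2)]
       for i in range(2)]
gtd = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(2)]
det = gtg[0][0] * gtg[1][1] - gtg[0][1] * gtg[1][0]
m_est = [
    (gtg[1][1] * gtd[0] - gtg[0][1] * gtd[1]) / det,
    (gtg[0][0] * gtd[1] - gtg[1][0] * gtd[0]) / det,
]
print([round(v, 3) for v in m_est])  # recovers [1.5, 0.7] for noise-free data
```

The uncertainties discussed in the abstract enter through noise in d, errors in G (fault geometry, velocity structure), and the regularization needed when the problem is underdetermined; none of that is modeled in this sketch.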
Galvez, P.; Somerville, P.; Bayless, J.; Dalguer, L. A.
2015-12-01
The rupture process of the 2011 Tohoku earthquake exhibits depth-dependent variations in the frequency content of seismic radiation from the plate interface. This depth-varying rupture property has also been observed in other subduction zones (Lay et al, 2012). During the Tohoku earthquake, the shallow region radiated coherent low frequency seismic waves whereas the deeper region radiated high frequency waves. Several kinematic inversions (Suzuki et al, 2011; Lee et al, 2011; Bletery et al, 2014; Minson et al, 2014) detected seismic waves below 0.1 Hz coming from the shallow depths that produced slip larger than 40-50 meters close to the trench. Using empirical Green's functions, Asano & Iwata (2012), Kurahashi and Irikura (2011) and others detected regions of strong ground motion radiation at frequencies up to 10 Hz, located mainly at the bottom of the plate interface. A recent dynamic model that embodies this depth-dependent radiation using physical models has been developed by Galvez et al (2014, 2015). In this model the rupture process is modeled using a linear weakening friction law with slip reactivation on the shallow region of the plate interface (Galvez et al, 2015). This model reproduces the multiple seismic wave fronts recorded on the KiK-net seismic network along the Japanese coast up to 0.1 Hz, as well as the GPS displacements. In the deep region, the rupture sequence is consistent with the sequence of the strong ground motion generation areas (SMGAs) that radiate high frequency ground motion at the bottom of the plate interface (Kurahashi and Irikura, 2013). It remains challenging to perform ground motion simulations fully coupled with a dynamic rupture up to 10 Hz for a megathrust event. Therefore, to generate high frequency ground motions, we make use of the stochastic approach of Graves and Pitarka (2010) but add to the source spectrum the slip rate function of the dynamic model. In this hybrid-dynamic approach, the slip rate function is windowed with Gaussian
Wakai, A.; Senna, S.; Jin, K.; Cho, I.; Matsuyama, H.; Fujiwara, H.
2017-12-01
To estimate the damage caused by strong ground motions from a large earthquake, it is important to accurately evaluate broadband ground-motion characteristics over a wide area. A key prerequisite is to model the detailed subsurface structure from the top surface of the seismic bedrock to the ground surface. Here, we focus on the Kanto area, including Tokyo, where the sedimentary layers are thick. For the deep ground, from the top of the seismic bedrock to the top of the engineering bedrock, we have collected deep borehole data, soil physical properties obtained from geophysical exploration, geological information, and existing models; for the shallow ground, from the top of the engineering bedrock to the ground surface, we have collected a large number of borehole data and surficial geological data. Using these, we built initial geological subsurface structure models for the deep and shallow ground separately, and connected them to construct initial models spanning the full column from the top of the seismic bedrock to the ground surface. In this study, we first collected a large number of records from dense microtremor observations and earthquake observations across the whole Kanto area. The microtremor measurements range from large arrays hundreds of meters across, sampling the deep ground, to miniature arrays 60 centimeters across, sampling the shallow ground. Using ground-motion characteristics obtained from these records, such as dispersion curves and H/V (R/V) spectral ratios, we then improved the velocity structure of the initial models from the top of the seismic bedrock to the ground surface. We will report outlines of the microtremor array observations, the analysis methods, and the improved subsurface structure models.
Williams, Randolph; Goodwin, Laurel; Sharp, Warren; Mozley, Peter
2017-04-01
U-Th dates on calcite precipitated in coseismic extension fractures in the Loma Blanca normal fault zone, Rio Grande rift, NM, USA, constrain earthquake recurrence over the period from 150 to 565 ka. This is the longest direct record of seismicity documented for a fault in any tectonic environment. Combined U-Th and stable isotope analyses of these calcite veins define 13 distinct earthquake events. These data show that for more than 400 ka the Loma Blanca fault produced earthquakes with a mean recurrence interval of 40 ± 7 ka. The coefficient of variation for these events is 0.40, indicating strongly periodic seismicity consistent with a time-dependent model of earthquake recurrence. Stochastic statistical analyses further validate the inference that earthquake behavior on the Loma Blanca was time-dependent. The time-dependent nature of these earthquakes suggests that the seismic cycle was fundamentally controlled by a stress renewal process. However, this periodic cycle was punctuated by an episode of clustered seismicity at 430 ka. Recurrence intervals within the earthquake cluster were as low as 5-11 ka. Breccia veins formed during this episode exhibit carbon isotope signatures consistent with having formed through pronounced degassing of a CO2-charged brine during post-failure, fault-localized fluid migration. The 40 ka periodicity of the long-term earthquake record of the Loma Blanca fault is similar in magnitude to recurrence intervals documented through paleoseismic studies of other normal faults in the Rio Grande rift and Basin and Range Province. We propose that it represents a background rate of failure in intraplate extension. The short-term, clustered seismicity that occurred on the fault records an interruption of the stress renewal process, likely by elevated fluid pressure in deeper structural levels of the fault, consistent with fault-valve behavior. The relationship between recurrence interval and inferred fluid degassing suggests that pore fluid pressure
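The periodicity statistic quoted above, the coefficient of variation (CoV) of inter-event times, is straightforward to compute from a list of dated events. The event ages below are illustrative only, not the published Loma Blanca U-Th dates:

```python
import statistics

# Hypothetical event ages (ka), oldest first; illustrative values only,
# not the actual Loma Blanca U-Th dates.
ages_ka = [565, 525, 480, 445, 400, 360, 320, 285, 240, 200, 165, 150]
intervals = [a - b for a, b in zip(ages_ka, ages_ka[1:])]  # inter-event times

mean_ri = statistics.mean(intervals)
cov = statistics.stdev(intervals) / mean_ri  # coefficient of variation

# CoV well below 1 indicates quasi-periodic (time-dependent) recurrence;
# CoV near 1 would be consistent with a Poisson (time-independent) process.
print(round(mean_ri, 1), round(cov, 2))
```

A CoV of 0.40, as reported for the Loma Blanca record, sits well into the quasi-periodic regime, which is why the authors favor a stress renewal model.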
Ruiz, S.; Ojeda, J.; DelCampo, F., Sr.; Pasten, C., Sr.; Otarola, C., Sr.; Silva, R., Sr.
2017-12-01
In May 1960, the most unusual seismic sequence ever recorded instrumentally took place. The Mw 8.1 Concepción earthquake occurred on May 21, 1960. The aftershocks of this event apparently migrated to the southeast, and the Mw 9.5 Valdivia mega-earthquake occurred 33 hours later. The structural damage produced by the two events was no larger than that of other Chilean earthquakes, and lower than that of crustal earthquakes of smaller magnitude; the damage was concentrated at sites with shallow soil layers of low shear-wave velocity (Vs). However, no seismological station recorded this sequence. For that reason, we generate synthetic strong-motion acceleration time histories for the main cities affected by these events. We use 155 points of vertical surface displacement compiled by Plafker and Savage in 1968, and, considering the observations of these authors and of local residents, we separated the uplift and subsidence information associated with the first (Mw 8.1) earthquake and the second (Mw 9.5) mega-earthquake. We consider the elastic deformation propagation, assume a realistic lithosphere geometry, and apply a Bayesian method that maximizes the a posteriori probability density to obtain the slip distribution. Subsequently, we use a stochastic strong-motion generation method with the finite-fault models obtained for both earthquakes. We account for the incidence angle of rays at the surface, the free-surface effect, the energy partition among P, SV and SH waves, the dynamic corner frequency, and the influence of site effects. The results show that the Mw 8.1 earthquake occurred down-dip within the slab, and its strong-motion records are similar to those of other Chilean earthquakes such as the Mw 7.7 Tocopilla earthquake (2007). For the Mw 9.5 earthquake we obtain synthetic acceleration time histories with PGA values around 0.8 g in cities near the maximum asperity or with low-velocity soil layers. This allows us to conclude that the strong-motion records are strongly influenced by the shallow soil deposits. These records
Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K
2016-01-01
The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular yet simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is combined with the Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm optimizes the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of substantially increasing the inference of correct regulations and decreasing false regulations. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. In both cases, however, the proposed method incurs a higher computational cost due to the hybrid optimization process.
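The Recurrent Neural Network formalism for gene networks models each expression level as relaxing toward a sigmoid function of the weighted expression of all genes. A minimal two-gene sketch with made-up weights (not parameters inferred by the hybrid Cuckoo Search-Flower Pollination optimizer):

```python
import math

# Discrete-time Recurrent Neural Network formalism for gene regulatory
# networks: each expression level x_i relaxes toward a sigmoid of the
# weighted expression of all genes. Weights and biases here are made up
# for illustration only.
def step(x, w, bias, tau=1.0, dt=0.1):
    nxt = []
    for i in range(len(x)):
        s = sum(w[i][j] * x[j] for j in range(len(x))) + bias[i]
        act = 1.0 / (1.0 + math.exp(-s))            # sigmoid regulation
        nxt.append(x[i] + dt * (act - x[i]) / tau)  # relaxation update
    return nxt

w = [[0.0, 2.0], [-2.0, 0.0]]  # gene 1 activated by gene 2; gene 2 repressed by gene 1
bias = [-1.0, 1.0]
x = [0.2, 0.8]                 # initial expression levels
for _ in range(100):
    x = step(x, w, bias)
print([round(v, 2) for v in x])
```

Network inference then amounts to searching for the weights and biases that reproduce observed expression time series, which is the role the paper assigns to Cuckoo Search (regulator combinations) and the Flower Pollination Algorithm (parameter values).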
Xu, Wenbin
2017-05-01
Subduction earthquakes have been widely studied in the Chilean subduction zone, but earthquakes occurring in its southern part have attracted less research interest, primarily due to its lower rate of seismic activity. Here I use Sentinel-1 interferometric synthetic aperture radar (InSAR) data and range offset measurements to generate coseismic crustal deformation maps of the 2016 Mw 7.5 Chiloé earthquake in southern Chile. I find concentrated crustal deformation with ground displacement of approximately 50 cm in the southern part of Chiloé Island. The best fitting fault model shows a pure thrust-fault motion on a shallow dipping plane oriented 4° NNE. The InSAR-determined moment is 2.4 × 1020 Nm with a shear modulus of 30 GPa, equivalent to Mw 7.56, which is slightly lower than the seismic moment. The model shows that the slip did not reach the trench, and it reruptured part of the fault that ruptured in the 1960 Mw 9.5 earthquake. The 2016 event has only released a small portion of the accumulated strain energy on the 1960 rupture zone, suggesting that the seismic hazard of future great earthquakes in southern Chile is high.
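Converting a geodetic moment into a moment magnitude, as done above, uses the Hanks and Kanamori (1979) relation. Note that the additive constant varies slightly between agencies (9.05 vs. 9.1), shifting Mw by about 0.03:

```python
import math

def moment_magnitude(m0_nm: float) -> float:
    # Hanks & Kanamori (1979) relation; M0 in N*m. Some agencies use
    # 9.1 instead of 9.05, which lowers Mw by about 0.03.
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.05)

# InSAR-determined moment quoted above for the 2016 Chiloe earthquake
mw = moment_magnitude(2.4e20)
print(round(mw, 2))  # -> 7.55, consistent with the quoted Mw 7.56
```

The logarithmic form also explains why the modest difference between geodetic and seismic moments translates into only a small magnitude discrepancy.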
Extreme value distribution of earthquake magnitude
Zi, Jun Gan; Tung, C. C.
1983-07-01
Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Lomnitz-Adler and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
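For a generic single-event magnitude distribution F(m), the extreme-value CDF of the maximum of n independent events is simply F(m)^n. A sketch using an unbounded Gutenberg-Richter (exponential) distribution as the single-event model (a generic illustration; the paper's specific energy-release model is not reproduced here):

```python
def cdf_max_magnitude(m, n, b=1.0, m0=5.0):
    # CDF of the maximum of n i.i.d. magnitudes drawn from an unbounded
    # Gutenberg-Richter (exponential) distribution above m0. Generic
    # illustration, not the energy-release model of the paper.
    if m < m0:
        return 0.0
    f = 1.0 - 10.0 ** (-b * (m - m0))  # single-event CDF
    return f ** n                       # extreme-value (max) CDF

# Probability that the largest of 1000 events above M 5 stays below M 8
p = cdf_max_magnitude(8.0, n=1000)
print(round(p, 3))  # -> 0.368
```

With n large and 1 - F(m) small, F(m)^n approaches the Gumbel-type form exp(-n(1 - F(m))), which is the usual starting point for fitting maximum-magnitude parameters to catalog data.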
Tsai, M. C.; Hu, J. C.; Yang, Y. H.; Hashimoto, M.; Aurelio, M.; Su, Z.; Escudero, J. A.
2017-12-01
Multi-sight, high-spatial-resolution interferometric SAR data enhance our ability to map detailed coseismic deformation, to estimate fault rupture models, and to infer the Coulomb stress changes associated with a large earthquake. Here, we use multi-sight coseismic interferograms acquired by the ALOS-2 and Sentinel-1A satellites to estimate the fault geometry and slip distribution on the fault plane of the 2017 Mw 6.5 Ormoc earthquake on Leyte Island, Philippines. The best fitting model predicts that the coseismic rupture occurred along a fault plane with a strike of 325.8º and dip of 78.5ºE. This model infers that the rupture of the 2017 Ormoc earthquake was dominated by left-lateral slip with minor dip-slip motion, consistent with the left-lateral strike-slip Philippine fault system. The fault tip propagated to the ground surface, and the predicted coseismic slip at the surface is about 1 m, located 6.5 km northeast of Kananga city. Significant slip is concentrated on the fault patches at depths of 0-8 km over an along-strike distance of 20 km, with slip magnitude varying from 0.3 m to 2.3 m along the southwest segment of this seismogenic fault. Two minor coseismic fault patches are predicted beneath the Tongonan geothermal field and the creeping segment of the northwest portion of this seismogenic fault. This implies that the high geothermal gradient beneath the Tongonan geothermal field could prevent the heated rock mass from coseismic failure. The seismic moment release of our preferred fault model is 7.78×1018 Nm, equivalent to an Mw 6.6 event. The Coulomb failure stress (CFS) calculated from the preferred fault model predicts a significant positive CFS change on the northwest segment of the Philippine fault in Leyte Island, which has a coseismic slip deficit and is devoid of aftershocks. Consequently, this segment should be considered at increased risk of future seismic hazard.
Rosenberg, Jon; Galen, Benjamin T
2017-07-01
Recurrent meningitis is a rare clinical scenario that can be self-limiting or life threatening depending on the underlying etiology. This review describes the causes, risk factors, treatment, and prognosis for recurrent meningitis. As a general overview of a broad topic, the aim of this review is to provide clinicians with a comprehensive differential diagnosis to aid in the evaluation and management of a patient with recurrent meningitis. New developments related to understanding the pathophysiology of recurrent meningitis are as scarce as studies evaluating the treatment and prevention of this rare disorder. A trial evaluating oral valacyclovir suppression after HSV-2 meningitis did not demonstrate a benefit in preventing recurrences. The data on prophylactic antibiotics after basilar skull fractures do not support their use. Intrathecal trastuzumab has shown promise in treating leptomeningeal carcinomatosis from HER-2 positive breast cancer. Monoclonal antibodies used to treat cancer and autoimmune diseases are new potential causes of drug-induced aseptic meningitis. Despite their potential for causing recurrent meningitis, the clinical entities reviewed herein are not frequently discussed together given that they are a heterogeneous collection of unrelated, rare diseases. Epidemiologic data on recurrent meningitis are lacking. The syndrome of recurrent benign lymphocytic meningitis described by Mollaret in 1944 was later found to be closely related to HSV-2 reactivation, but HSV-2 is by no means the only etiology of recurrent aseptic meningitis. While the mainstay of treatment for recurrent meningitis is supportive care, it is paramount to ensure that reversible and treatable causes have been addressed for further prevention.
What Can We Learn from a Simple Physics-Based Earthquake Simulator?
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2018-03-01
Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each individual fault; and the last is the modeling of fault interaction through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) the short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears introducing a small degree of stochasticity on the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework, and, as before, such synchronization disappears introducing a small degree of stochasticity on the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
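The three ingredients named above can be caricatured in a few lines: quasi-periodic loading on each fault, a failure threshold with a tunable stochastic term, and a Coulomb-like stress step transferred to neighboring faults at each failure. All parameters below are hypothetical, chosen only to illustrate the architecture:

```python
import random

# Minimal caricature of the simulator idea: steady tectonic loading,
# an almost-periodic failure threshold with a tunable stochastic term,
# and Coulomb-like stress transfer between faults. All parameter values
# are hypothetical.
random.seed(1)

recurrence = [100.0, 120.0, 150.0]  # mean recurrence times (yr)
stress = [0.0, 0.0, 0.0]            # normalized accumulated stress
coupling = 0.05                     # stress step transferred at failure
noise = 0.1                         # degree of stochasticity in failure

events = []
for year in range(2000):
    for i in range(len(stress)):
        stress[i] += 1.0 / recurrence[i]  # tectonic loading
        if stress[i] >= 1.0 + noise * random.uniform(-1.0, 1.0):
            events.append((year, i))      # fault i fails
            stress[i] = 0.0               # full stress drop
            for j in range(len(stress)):  # static stress transfer
                if j != i:
                    stress[j] += coupling

print(len(events))
```

Setting noise to zero puts the toy system in the markedly deterministic regime in which the abstract reports supercycles and fault synchronization; increasing it destroys both, which is the paper's central observation.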
Langer, L.; Gharti, H. N.; Tromp, J.
2017-12-01
In recent years, observations of deformation at plate boundaries have been greatly improved by the development of techniques in space geodesy. However, models of seismic deformation remain limited and are unable to account for realistic 3D structure in topography and material properties. We demonstrate the importance of 3D structure using a spectral-element method that incorporates fault geometry, topography, and heterogeneous material properties in a (non)linear viscoelastic domain. Our method is benchmarked against Okada's analytical technique and the PyLith software package. The April 2015 Nepal earthquake is used as a case study to examine whether 3D structure can affect the predictions of seismic deformation models. We find that the inclusion of topography has a significant effect on our results.
Sachs, M. K.; Yoder, M. R.; Turcotte, D. L.; Rundle, J. B.; Malamud, B. D.
2012-05-01
Extreme events that change global society have been characterized as black swans. The frequency-size distributions of many natural phenomena are often well approximated by power-law (fractal) distributions. An important question is whether the probability of extreme events can be estimated by extrapolating the power-law distributions. Events that exceed these extrapolations have been characterized as dragon-kings. In this paper we consider extreme events for earthquakes, volcanic eruptions, wildfires, landslides and floods. We also consider the extreme event behavior of three models that exhibit self-organized criticality (SOC): the slider-block, forest-fire, and sand-pile models. Since extrapolations using power-laws are widely used in probabilistic hazard assessment, the occurrence of dragon-king events has important practical implications.
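A standard way to fit the power-law (Gutenberg-Richter) portion of an earthquake frequency-magnitude distribution is Aki's maximum-likelihood b-value estimator, sketched here on a synthetic catalog with a known b-value:

```python
import math
import random

# Aki (1965) maximum-likelihood b-value estimate for the power-law
# (Gutenberg-Richter) part of a frequency-magnitude distribution,
# demonstrated on a synthetic catalog with a known b-value.
random.seed(42)
mc = 3.0       # catalog completeness magnitude
b_true = 1.0

# Magnitudes above mc are exponentially distributed: m = mc - log10(u) / b
mags = [mc - math.log10(random.random()) / b_true for _ in range(20000)]

# Aki's estimator: b = log10(e) / (mean magnitude - mc)
b_hat = math.log10(math.e) / (sum(mags) / len(mags) - mc)
print(round(b_hat, 2))
```

Extrapolating such a fitted power law beyond the observed magnitude range is precisely the step that dragon-king events, by definition, violate.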
International Nuclear Information System (INIS)
Nagashima, Miori; Itokawa, Etsuko; Ozuka, Yohei
2012-01-01
This study addressed the controversial issue of disaster waste treatment in the reconstruction efforts following the Great East Japan Earthquake. Using the Sector Model (Matsumoto 2009), we categorized a range of actions taken in relation to the cross-jurisdictional treatment into four sectors: government, industry, academia, and private. The analysis through this Sector Model made it possible to map the entire layout of waste treatment, inclusive of the less-visible industry and academia sectors. Accordingly, we have argued that differences in risk awareness are not necessarily due to sector differences but rather depend on two aspects of the disaster waste treatment: the safety levels and the nationwide treatment of waste in Japan. We further suggest that the discrepancy in the arguments on safety levels emerged from scientific under-determination, while that on cross-jurisdictional treatment emerged from social and/or political under-determination. (author)
Wang, Kang; Fialko, Yuri
2018-01-01
We use space geodetic data to investigate coseismic and postseismic deformation due to the 2015 Mw 7.8 Gorkha earthquake that occurred along the central Himalayan arc. Because the earthquake area is characterized by strong variations in surface relief and material properties, we developed finite element models that explicitly account for topography and 3-D elastic structure. We computed the line-of-sight displacement histories from three tracks of the Sentinel-1A/B Interferometric Synthetic Aperture Radar (InSAR) satellites, using the persistent scatterer method. InSAR observations reveal an uplift of up to ˜70 mm over ˜20 months after the main shock, concentrated primarily at the downdip edge of the ruptured asperity. GPS observations also show uplift, as well as southward movement in the epicentral area, qualitatively similar to the coseismic deformation pattern. Kinematic inversions of GPS and InSAR data and forward models of stress-driven creep suggest that the observed postseismic transient is dominated by afterslip on a downdip extension of the seismic rupture. A poroelastic rebound may have contributed to the observed uplift and southward motion, but the predicted surface displacements are small. We also tested a wide range of viscoelastic relaxation models, including 1-D and 3-D variations in the viscosity structure. Models of a low-viscosity channel previously invoked to explain the long-term uplift and variations in topography at the plateau margins predict opposite signs of horizontal and vertical displacements compared to those observed. Our results do not preclude a possibility of deep-seated viscoelastic response beneath southern Tibet with a characteristic relaxation time greater than the observation period (2 years).
Bichisao, Marta; Stallone, Angela
2017-04-01
Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of scientific content by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what principles underlie that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" earthquakes. We try to implement these results in a choreographic model with the aim of converting earthquake sound into a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience in a multisensory exploration of the earthquake phenomenon, through the stimulation of hearing, eyesight and the perception of movement (the neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.
Geological evidence for Holocene earthquakes and tsunamis along the Nankai-Suruga Trough, Japan
Garrett, Ed; Fujiwara, Osamu; Garrett, Philip; Heyvaert, Vanessa M. A.; Shishikura, Masanobu; Yokoyama, Yusuke; Hubert-Ferrari, Aurélia; Brückner, Helmut; Nakamura, Atsunori; De Batist, Marc
2016-04-01
The Nankai-Suruga Trough, lying immediately south of Japan's densely populated and highly industrialised southern coastline, generates devastating great earthquakes (magnitude > 8). Intense shaking, crustal deformation and tsunami generation accompany these ruptures. Forecasting the hazards associated with future earthquakes along this >700 km long fault requires a comprehensive understanding of past fault behaviour. While the region benefits from a long and detailed historical record, palaeoseismology has the potential to provide a longer-term perspective and additional insights. Here, we summarise the current state of knowledge regarding geological evidence for past earthquakes and tsunamis, incorporating literature originally published in both Japanese and English. This evidence comes from a wide variety of sources, including uplifted marine terraces and biota, marine and lacustrine turbidites, liquefaction features, subsided marshes and tsunami deposits in coastal lakes and lowlands. We enhance available results with new age modelling approaches. While publications describe proposed evidence from > 70 sites, only a limited number provide compelling, well-dated evidence. The best available records allow us to map the most likely rupture zones of eleven earthquakes occurring during the historical period. Our spatiotemporal compilation suggests the AD 1707 earthquake ruptured almost the full length of the subduction zone and that earthquakes in AD 1361 and 684 were predecessors of similar magnitude. Intervening earthquakes were of lesser magnitude, highlighting variability in rupture mode. Recurrence intervals for ruptures of a single seismic segment range from less than 100 to more than 450 years during the historical period. Over longer timescales, palaeoseismic evidence suggests intervals ranging from 100 to 700 years. However, these figures reflect thresholds of evidence creation and preservation as well as genuine recurrence intervals. At present, we have
Gunardi, Setiawan, Ezra Putranda
2015-12-01
Indonesia is a country with a high risk of earthquakes because of its position on the boundaries of the Earth's tectonic plates. An earthquake can cause a very large amount of damage, loss, and other economic impacts. Indonesia therefore needs a mechanism for transferring earthquake risk from the government to a (re)insurance company, so that enough money can be collected to implement the rehabilitation and reconstruction program. One such mechanism is issuing a catastrophe bond, 'act-of-God bond', or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is combined with the money (premium) from the sponsor company and then invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is reduced or stopped, and that cash flow is instead paid to the sponsor company to compensate for its losses from the catastrophe event. When we consider the earthquake
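The cash-flow structure described in this abstract can be sketched as a simple payoff function. This is a minimal illustration of the generic CAT bond mechanism, not the pricing model of the paper; the trigger timing, coupon rate, and recovery fraction are hypothetical:

```python
def cat_bond_investor_cashflows(principal, coupon_rate, n_periods,
                                catastrophe_period=None, recovery_rate=0.0):
    """Per-period investor cash flows for a simplified CAT bond.

    If a triggering catastrophe occurs in `catastrophe_period` (1-based),
    coupons stop from that period onward and the investor receives only
    `recovery_rate` of the principal; otherwise coupons are paid every
    period and the full principal is returned at maturity.
    """
    flows = []
    for t in range(1, n_periods + 1):
        if catastrophe_period is not None and t >= catastrophe_period:
            # trigger hit: partial principal recovery once, then nothing
            flows.append(recovery_rate * principal if t == catastrophe_period else 0.0)
        else:
            coupon = coupon_rate * principal
            flows.append(coupon + principal if t == n_periods else coupon)
    return flows

# no catastrophe: coupons each year, principal repaid at maturity
print(cat_bond_investor_cashflows(100.0, 0.08, 3))
# catastrophe in year 2: coupons stop, 40% of principal recovered
print(cat_bond_investor_cashflows(100.0, 0.08, 3, catastrophe_period=2,
                                  recovery_rate=0.4))
```

The forgone coupons and principal in the catastrophe branch are what fund the sponsor's compensation, which is why CAT bond coupons carry a premium over comparable conventional bonds.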