Sample records for M6 Parkfield earthquake

  1. Early aftershock decay rate of the M6 Parkfield earthquake (United States)

    Peng, Z.; Vidale, J. E.


    Mainshock rupture is typically followed by aftershocks that diminish in rate approximately as the reciprocal of the elapsed time. However, it is notoriously difficult to observe aftershock activity in the noisy aftermath of larger earthquakes, and many aftershocks in the initial few minutes are missing from existing seismicity catalogs (Kagan, 2004). Yet this period holds valuable information about the transition from mainshock rupture to sporadic aftershocks, and about the friction laws that control earthquakes. The Parkfield section of the San Andreas fault is one of the most densely seismometered places in the world. Many near-fault, non-clipped, continuous recordings of the M6 Parkfield earthquake and its aftermath have been recovered, providing an excellent opportunity to study aftershock decay rates in the first few hundred seconds after the mainshock. We have so far analyzed recordings from station PKD and 13 stations of the Parkfield High Resolution Seismic Network. By scrutinizing the high-frequency signal, we are able to distinguish mainshock coda from early aftershocks. We find up to 10 times more aftershocks in the first 1000 s than appear in the USGS NCSN catalog. More than 30 events are detected in the first 200 s after the mainshock; none of these events are in the USGS NCSN catalog. Preliminary results suggest a strong deficit of aftershocks in the first 100 s after the mainshock relative to a 1/t aftershock rate decay. This pattern is consistent with the lack of seismicity in the first 120 s following the 10/31/2001 M5.1 Anza earthquake (Kilb et al., 2004) and with our study of early aftershock rates using data from the Hi-net array in Japan (Vidale et al., 2004). Our observations will allow us to test the prediction of such an interval, prior to the onset of the 1/t aftershock decay rate, in rate-and-state friction models (Dieterich, 1994).
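The 1/t reference rate invoked above is the modified Omori law. As a minimal sketch of how an early-window aftershock deficit can be quantified, the expected count in a time window can be obtained by integrating the Omori rate; the parameters below are purely illustrative assumptions, not values fitted to the Parkfield sequence:

```python
import math

def omori_count(K, c, p, t1, t2):
    """Expected number of aftershocks in [t1, t2] seconds after the
    mainshock, under the modified Omori law r(t) = K / (c + t)**p."""
    if abs(p - 1.0) < 1e-9:  # the 1/t case integrates to a logarithm
        return K * (math.log(c + t2) - math.log(c + t1))
    return K / (1.0 - p) * ((c + t2) ** (1.0 - p) - (c + t1) ** (1.0 - p))

# Illustrative parameters only (hypothetical K, c, p):
K, c, p = 50.0, 1.0, 1.0
expected_0_100 = omori_count(K, c, p, 0.0, 100.0)
expected_100_1000 = omori_count(K, c, p, 100.0, 1000.0)
# A deficit would show up as far fewer detected events in the first
# window than this prediction, as the abstract reports for t < 100 s.
print(expected_0_100, expected_100_1000)
```

Comparing detected counts against such expected counts, window by window, is one simple way to test for the early quiescent interval predicted by rate-and-state models.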

  2. Seismomagnetic effects from the long-awaited 28 September 2004 M 6.0 Parkfield earthquake (United States)

    Johnston, M.J.S.; Sasai, Y.; Egbert, G.D.; Mueller, R.J.


    Precise measurements of local magnetic fields have been obtained with a differentially connected array of seven synchronized proton magnetometers located along 60 km of the locked-to-creeping transition region of the San Andreas fault at Parkfield, California, since 1976. The M 6.0 Parkfield earthquake on 28 September 2004, occurred within this array and generated coseismic magnetic field changes of between 0.2 and 0.5 nT at five sites in the network. No preseismic magnetic field changes exceeding background noise levels are apparent in the magnetic data during the month, week, and days before the earthquake (or expected in light of the absence of measurable precursive deformation, seismicity, or pore pressure changes). Observations of electric and magnetic fields from 0.01 to 20 Hz are also made at one site near the end of the earthquake rupture and corrected for common-mode signals from the ionosphere/magnetosphere using a second site some 115 km to the northwest along the fault. These magnetic data show no indications of unusual noise before the earthquake in the ULF band (0.01-20 Hz) as suggested may have preceded the 1989 ML 7.1 Loma Prieta earthquake. Nor do we see electric field changes similar to those suggested to occur before earthquakes of this magnitude from data in Greece. Uniform and variable slip piezomagnetic models of the earthquake, derived from strain, displacement, and seismic data, generate magnetic field perturbations that are consistent with those observed by the magnetometer array. A higher rate of longer-term magnetic field change, consistent with increased loading in the region, is apparent since 1993. This accompanied an increased rate of secular shear strain observed on a two-color EDM network and a small network of borehole tensor strainmeters and increased seismicity dominated by three M 4.5-5 earthquakes roughly a year apart in 1992, 1993, and 1994. 
Models incorporating all of these data indicate increased slip at depth in the region.

  3. Seismomagnetic Effects from the Long-awaited September 28, 2004, M6 Parkfield Earthquake (United States)

    Johnston, M. J.; Sasai, Y.; Egbert, G. D.; Kappler, K.


    Precise measurements of local magnetic fields have been obtained with a differentially connected array of seven synchronized proton magnetometers located along 60 km of the locked-to-creeping transition region of the San Andreas Fault at Parkfield, CA, since 1984. The M6 Parkfield earthquake on September 28, 2004, occurred within this array and generated coseismic magnetic field changes of between 0.2 and 0.5 nT at five sites in the network. No preseismic magnetic field changes exceeding background noise levels are apparent in the magnetic data during the month, week, and days before the earthquake (or expected, in light of the absence of measurable precursive deformation, seismicity, or pore pressure changes). Observations of electric and magnetic fields from 0.01 to 20 Hz are also made at one site near the end of the earthquake rupture and corrected for common-mode signals from the ionosphere/magnetosphere using a second site some 115 km to the northwest along the fault. These magnetic data show no indications of unusual noise before the earthquake in the ULF band (0.01 Hz to 20 Hz) of the kind suggested to have preceded the 1989 Loma Prieta earthquake. Nor do we see electric field changes similar to those believed to occur before earthquakes of this magnitude from data in Greece. Uniform and variable slip piezomagnetic models of the earthquake, derived from strain, displacement, and seismic data, generate magnetic field perturbations that are consistent with those observed by the magnetometer array. A higher rate of longer-term magnetic field change, consistent with increased loading in the region, is apparent since 1993. This accompanied an increased rate of secular shear strain observed on a 2-color EDM network and a small network of borehole tensor strainmeters, and increased seismicity dominated by three M4.5-5 earthquakes roughly a year apart in 1992, 1993, and 1994.
Models incorporating all of these data indicate increased slip at depth in the region and this may have

  4. Seismic Documentation for Rock Damage and Healing on the San Andreas Fault Involved in the 2004 M6 Parkfield Earthquake (United States)

    Malin, P. M.; Li, Y.; Chen, P.; Cochran, E. M.; Vidale, J. E.


    After the M6 Parkfield earthquake that occurred on 28 September 2004, we deployed a dense seismic array at the same sites as used in our experiment in the fall of 2002. Measurements using moving-window cross-correlation of waveforms for the repeated explosions and microearthquakes recorded in 2002 and 2004 show a decrease in shear velocity of at least ~2.5% within a ~200-m-wide zone across the San Andreas main fault trace, most likely owing to co-seismic damage of fault rocks caused by dynamic rupture in this M6 earthquake. The width of the damage zone characterized by larger velocity changes is consistent with the low-velocity waveguide model of the SAF near Parkfield derived from fault-zone trapped waves [Li et al., 2004]. The estimated ratio between the P- and S-wave traveltime changes is 0.57 within the rupture zone and ~0.65 in the surrounding rocks, indicating wetter cracks within the damaged fault zone, probably due to ground water percolating into the cracks opened in the mainshock. Measurements of traveltime changes for repeated aftershocks in 21 clusters, with a total of ~130 events located at different depths along the rupture in 2004, show that the maximum shear velocity increased by ~1.2% within the damage zone in the 3.5 months starting a week after the mainshock, indicating that the fault heals in the post-seismic stage as cracks in the damaged rock close. Data recorded for repeated aftershocks in December 2004 and later at a seismograph installed in the SAFOD mainhole, which penetrates the San Andreas fault zone at ~3 km depth, show that seismic velocities within the damage zone changed by ~0.3% in a month, while no changes were registered at seismographs installed in the vertical pilot borehole drilled ~1.8 km away from the main fault trace for the same repeated events. We find that the healing rate decreases logarithmically with time, with a greater healing rate in the earlier stage after the mainshock. The magnitude of

  5. Seismicity rate changes along the central California coast due to stress changes from the 2003 M 6.5 San Simeon and 2004 M 6.0 Parkfield earthquakes (United States)

    Aron, A.; Hardebeck, J.L.


    We investigated the relationship between seismicity rate changes and modeled Coulomb static stress changes from the 2003 M 6.5 San Simeon and the 2004 M 6.0 Parkfield earthquakes in central California. Coulomb stress modeling indicates that the San Simeon mainshock loaded parts of the Rinconada, Hosgri, and San Andreas strike-slip faults, along with the reverse faults of the southern Los Osos domain. All of these loaded faults, except for the San Andreas, experienced a seismicity rate increase at the time of the San Simeon mainshock. The Parkfield earthquake occurred 9 months later on the loaded portion of the San Andreas fault. The Parkfield earthquake unloaded the Hosgri fault and the reverse faults of the southern Los Osos domain, which both experienced seismicity rate decreases at the time of the Parkfield event, although the decreases may be related to the decay of San Simeon-triggered seismicity. Coulomb stress unloading from the Parkfield earthquake appears to have altered the aftershock decay rate of the southern cluster of San Simeon aftershocks, which is deficient compared to the expected number of aftershocks from the Omori decay parameters based on the pre-Parkfield aftershocks. Dynamic stress changes cannot explain the deficiency of aftershocks, providing evidence that static stress changes affect earthquake occurrence. However, a burst of seismicity following the Parkfield earthquake at Ragged Point, where the static stress was decreased, provides evidence for dynamic stress triggering. It therefore appears that both Coulomb static stress changes and dynamic stress changes affect the seismicity rate.
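The loading and unloading described here comes down to the sign of the Coulomb failure stress change resolved on each receiver fault. A minimal sketch of that bookkeeping (the functional form is standard; the stress values and effective friction below are hypothetical, not from this study):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change (MPa) on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, where d_tau is the shear stress
    change resolved in the slip direction, d_sigma_n is the normal
    stress change (unclamping positive), and mu' is the effective
    friction coefficient."""
    return d_shear + mu_eff * d_normal

# Hypothetical receiver faults (MPa), for illustration only:
loaded = coulomb_stress_change(d_shear=0.05, d_normal=0.02)      # positive: failure promoted
unloaded = coulomb_stress_change(d_shear=-0.03, d_normal=-0.04)  # negative: failure inhibited
print(loaded, unloaded)
```

A positive dCFS corresponds to the "loaded" faults above (expected rate increase), a negative value to the "unloaded" ones (expected rate decrease).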

  6. Seismic evidence for rock damage and healing on the San Andreas fault associated with the 2004 M 6.0 Parkfield earthquake (United States)

    Li, Y.-G.; Chen, P.; Cochran, E.S.; Vidale, J.E.; Burdette, T.


    We deployed a dense linear array of 45 seismometers across and along the San Andreas fault near Parkfield a week after the M 6.0 Parkfield earthquake on 28 September 2004 to record fault-zone seismic waves generated by aftershocks and explosions. Seismic stations and explosions were co-sited with our previous experiment conducted in 2002. The data from repeated shots detonated in the fall of 2002 and 3 months after the 2004 M 6.0 mainshock show ~1.0%-1.5% decreases in seismic-wave velocity within a ~200-m-wide zone along the fault strike and smaller changes (0.2%-0.5%) beyond this zone, most likely due to the coseismic damage of rocks during dynamic rupture in the 2004 M 6.0 earthquake. The width of the damage zone characterized by larger velocity changes is consistent with the low-velocity waveguide model on the San Andreas fault, near Parkfield, that we derived from fault-zone trapped waves (Li et al., 2004). The damage zone is not symmetric but extends farther on the southwest side of the main fault trace. Waveform cross-correlations for repeated aftershocks in 21 clusters, with a total of ~130 events, located at different depths and distances from the array site show ~0.7%-1.1% increases in S-wave velocity within the fault zone in 3 months starting a week after the earthquake. The velocity recovery indicates that the damaged rock has been healing and regaining the strength through rigidity recovery with time, most likely due to the closure of cracks opened during the mainshock. We estimate that the net decrease in seismic velocities within the fault zone was at least ~2.5%, caused by the 2004 M 6.0 Parkfield earthquake. The healing rate was largest in the earlier stage of the postmainshock healing process. The magnitude of fault healing varies along the rupture zone, being slightly larger for the healing beneath Middle Mountain, correlating well with an area of large mapped slip. The fault healing is most prominent at depths above ~7 km.

  7. The Parkfield experiment; capturing what happens in an earthquake (United States)

    Hickman, Steve; Langbein, John; Stauffer, Peter H.


    To better understand what happens on and near a fault before, during, and after an earthquake, the U.S. Geological Survey (USGS) and the California Geological Survey began the Parkfield Earthquake Experiment in the 1980s. Researchers from the USGS and collaborating institutions have created a dense network of instruments on the San Andreas Fault at Parkfield, California, where moderate earthquakes have occurred at fairly regular intervals. Data from these instruments are revealing the earthquake process in unprecedented detail and will aid in predicting the time and severity of future shocks. The USGS and the National Science Foundation plan to expand the Parkfield Experiment by drilling a deep borehole and installing instruments at the actual depths where earthquakes initiate, creating a San Andreas Fault Observatory at Depth.

  8. Scientific goals of the Parkfield earthquake prediction experiment (United States)

    Thatcher, W.


    Several unique circumstances of the Parkfield experiment provide unprecedented opportunities for significant advances in understanding the mechanics of earthquakes. To our knowledge, there is no other seismic zone anywhere where the time, place, and magnitude of an impending earthquake are specified as precisely. Moreover, the epicentral region is located on continental crust, is readily accessible, and can support a range of dense monitoring networks that are sited either on or very close to the expected rupture surface. As a result, the networks located at Parkfield are several orders of magnitude more sensitive than any previously deployed for monitoring earthquake precursors (pre-earthquake changes in strain, seismicity, and other geophysical parameters). In this respect the design of the Parkfield experiment resembles the rationale for constructing a new, more powerful nuclear particle accelerator: in both cases increased capabilities will test existing theories, reveal new phenomena, and suggest new research directions.

  9. Surface fault slip associated with the 2004 Parkfield, California, earthquake (United States)

    Rymer, M.J.; Tinsley, J. C.; Treiman, J.A.; Arrowsmith, J.R.; Ciahan, K.B.; Rosinski, A.M.; Bryant, W.A.; Snyder, H.A.; Fuis, G.S.; Toke, N.A.; Bawden, G.W.


    Surface fracturing occurred along the San Andreas fault, the subparallel Southwest Fracture Zone, and six secondary faults in association with the 28 September 2004 (M 6.0) Parkfield earthquake. Fractures formed discontinuous breaks along a 32-km-long stretch of the San Andreas fault. Sense of slip was right lateral; only locally was there a minor (1-11 mm) vertical component of slip. Right-lateral slip in the first few weeks after the event, early in its afterslip period, ranged from 1 to 44 mm. Our observations in the weeks following the earthquake indicated that the highest slip values are in the Middle Mountain area, northwest of the mainshock epicenter (creepmeter measurements indicate a similar distribution of slip). Surface slip along the San Andreas fault developed soon after the mainshock; field checks in the area near Parkfield and about 5 km to the southeast indicated that surface slip developed more than 1 hr but generally less than 1 day after the event. Slip along the Southwest Fracture Zone developed coseismically and extended about 8 km. Sense of slip was right lateral; locally there was a minor to moderate (1-29 mm) vertical component of slip. Right-lateral slip ranged from 1 to 41 mm. Surface slip along secondary faults was right lateral; the right-lateral component of slip ranged from 3 to 5 mm. Surface slip in the 1966 and 2004 events occurred along both the San Andreas fault and the Southwest Fracture Zone. In 1966 the length of ground breakage along the San Andreas fault extended 5 km longer than that mapped in 2004. In contrast, the length of ground breakage along the Southwest Fracture Zone was the same in both events, yet the surface fractures were more continuous in 2004. Surface slip on secondary faults in 2004 indicated previously unmapped structural connections between the San Andreas fault and the Southwest Fracture Zone, further revealing aspects of the structural setting and fault interactions in the Parkfield area.

  10. Coherency analysis of accelerograms recorded by the UPSAR array during the 2004 Parkfield earthquake

    DEFF Research Database (Denmark)

    Konakli, Katerina; Kiureghian, Armen Der; Dreger, Douglas


    Spatial variability of near-fault strong motions recorded by the US Geological Survey Parkfield Seismograph Array (UPSAR) during the 2004 Parkfield (California) earthquake is investigated. Behavior of the lagged coherency for two horizontal and the vertical components is analyzed by separately...

  11. GPS source solution of the 2004 Parkfield earthquake

    CERN Document Server

    Houlie, N; Kim, A


    We compute a series of finite-source parameter inversions of the fault rupture of the 2004 Parkfield earthquake based on 1 Hz GPS records only. We confirm that some of the co-seismic slip at shallow depth (<5 km) constrained by InSAR data processing results from early post-seismic deformation. We also show (1) that a GPS receiver located very close to the rupture can saturate while it remains possible to estimate the ground velocity (~1.2 m/s) near the fault, (2) that GPS waveform inversions constrain the slip distribution at depth even when GPS monuments are not located directly above the ruptured areas, and (3) that the slip distribution at depth from our best models agrees with that recovered from strong-motion data. The 95th percentile of the slip amplitudes for rupture velocities ranging from 2 to 5 km/s is 55 +/- 6 cm.

  12. Long-term afterslip of the M6.0, 2004 Parkfield, California, earthquake—Implications for forecasting amount and duration of afterslip on other major creeping faults (United States)

    Lienkaemper, James J.; McFarland, Forrest S.


    We present the longest record of surface afterslip on a continental strike-slip fault for the 2004 M 6.0 Parkfield, California, earthquake, from which we can derive critical information about the duration and predictability of afterslip relevant to urban displacement hazard applications. Surface slip associated with this event occurred entirely postseismically along the interseismically creeping (0.6-1.5 cm/yr) main trace of the San Andreas fault. Using the first year of afterslip data, the program AFTER correctly predicted the cumulative surface afterslip (maximum ~35 cm) eventually attained. By 1 yr postearthquake, observed afterslip had accumulated to only ~74% of its modeled final value uf (in units of length). The 6-yr data suggested final slip would be reached everywhere by ~6-12 yrs. Parkfield's afterslip lasted much longer (~6-12 yrs) than afterslip following a 2014 M 6.0 event in Napa, California, where no interseismic creep was known and afterslip neared completion (~97% of uf) by 1 yr. The uncertainty in uf for the Napa event fell to ≤2 cm in only three months, versus 2 yrs for the Parkfield event, mostly because the power-law stage of afterslip at Parkfield lasted much longer, ~1000 (493-1666) days versus ~100 (35-421) days for Napa. Because the urban Hayward fault near San Francisco, California, like the Parkfield section, exhibits interseismic creep in a similar geological regime, significant afterslip might last for up to a decade following an anticipated M≥6.7 earthquake, potentially delaying postearthquake recovery.
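As a rough illustration of the timing reported above (about 74% of the final afterslip within the first year of a roughly decade-long process), a logarithmic afterslip curve, a common empirical form for creeping faults, gives comparable fractions. The time constant below is an assumed illustration, not the AFTER program's parameterization:

```python
import math

def afterslip_fraction(t_days, total_days, tau_days):
    """Fraction of final afterslip reached by time t, for a logarithmic
    curve u(t) ~ ln(1 + t/tau), normalized so the fraction is 1.0 at
    the assumed completion time total_days."""
    return math.log(1.0 + t_days / tau_days) / math.log(1.0 + total_days / tau_days)

# Assumed numbers: ~10 yr total duration, 0.5-day time constant.
frac_at_1yr = afterslip_fraction(365.0, 3650.0, 0.5)
print(round(frac_at_1yr, 2))  # ~0.74, comparable to the observed fraction
```

The small time constant is what concentrates slip early; a fault with a longer power-law stage, as at Parkfield, spreads the remaining slip over years.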

  13. Surface slip associated with the 2004 Parkfield, California, earthquake measured on alinement arrays (United States)

    Lienkaemper, J.J.; Baker, B.; McFarland, F.S.


    Although still continuing, surface slip from the 2004 Parkfield earthquake as measured on alinement arrays appears to be approaching about 30-35 cm between Parkfield and Gold Hill. This includes slip along the main trace and the Southwest Fracture Zone (SWFZ). Slip here was higher in 1966, at about 40 cm. The distribution of 2004 slip appears to have a shape similar to that of the 1966 event, but final slip is expected to be lower in 2004 by about 3-15 cm, even when continuing slip is accounted for. Proportionately, this difference is most notable at the south end at Highway 46, where the 1966 event slip was 13 cm compared to the 2004 slip of 4 cm. Continuous Global Positioning System and creepmeter data suggest that significant surface coseismic slip apparently occurred mainly on the SWFZ and perhaps on Middle Mountain (the latter possibly caused by shaking) (Langbein et al., 2005). Creepmeters indicate only minor (<0.2 cm) surface coseismic slip occurred on the main trace between Parkfield and Gold Hill. We infer that 3-6 cm of slip accumulated across our arrays in the first 24 hr. At Highway 46, slip appears complete, whereas the remaining sites are expected to take 2-6 years to reach their background creep rates. Following the 1966 event, afterslip at one site persisted as much as 5-10 years. The much longer recurrence intervals between the past two Parkfield earthquakes and the decreasing slip per event may suggest that larger slip deficits are now growing along the Parkfield segment.

  14. Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions (United States)

    Kim, A.; Dreger, D.; Larsen, S.


    We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and the detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects finite fault inverse solutions. Various studies (e.g., Michael and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. Also, the fault zone at Parkfield is wide, as evidenced by mapped surface faults and by where surface slip and creep occurred in the 1966 and 2004 Parkfield earthquakes. For high-resolution images of the rupture process, it is necessary to include the accurate 3D velocity structure in the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions using both 1D and 3D Green's functions for the 1989 Loma Prieta earthquake, with the same source parameterization and data but different Green's functions, and found that the models were quite different. This indicates that the choice of velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al. (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions.
In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0

  15. The ShakeMaps of the Amatrice, M6, earthquake

    Directory of Open Access Journals (Sweden)

    Licia Faenza


    In this paper we describe the performance of the ShakeMap software package and the fully automatic procedure, based on manually revised location and magnitude, during the main event of the Amatrice sequence, with special emphasis on the M6 mainshock that struck central Italy on 24 August 2016 at 1:36:32 UTC. Our results show that the procedure we developed in recent years, with real-time data exchange among the institutions acquiring strong-motion data, provides a faithful description of the ground motion experienced throughout a large region in and around the epicentral area. The prompt availability of the rupture fault model, within three hours of the earthquake occurrence, provided a better description of the level of strong ground motion throughout the affected area. Progressive addition of station data and manual verification of the data ensure improvements in the description of the experienced ground motions. In particular, comparison between the MCS intensity shakemaps and preliminary field macroseismic reports shows favourable similarities. Finally, the overall spatial pattern of the ground motion of the mainshock is consistent with reported rupture directivity toward the NW and reduced levels of ground shaking toward the SW, probably linked to the peculiar source effects of the earthquake.

  16. Three-dimensional compressional wavespeed model, earthquake relocations, and focal mechanisms for the Parkfield, California, region (United States)

    Thurber, C.; Zhang, H.; Waldhauser, F.; Hardebeck, J.; Michael, A.; Eberhart-Phillips, D.


    We present a new three-dimensional (3D) compressional wavespeed (Vp) model for the Parkfield region, taking advantage of the recent seismicity associated with the 2003 San Simeon and 2004 Parkfield earthquake sequences to provide increased model resolution compared to the work of Eberhart-Phillips and Michael (1993) (EPM93). Taking the EPM93 3D model as our starting model, we invert the arrival-time data from about 2100 earthquakes and 250 shots recorded on both permanent network and temporary stations in a region 130 km northeast-southwest by 120 km northwest-southeast. We include catalog picks and cross-correlation and catalog differential times in the inversion, using the double-difference tomography method of Zhang and Thurber (2003). The principal Vp features reported by EPM93 and Michelini and McEvilly (1991) are recovered, but with locally improved resolution along the San Andreas Fault (SAF) and near the active-source profiles. We image the previously identified strong wavespeed contrast (faster on the southwest side) across most of the length of the SAF, and we also improve the image of a high-Vp body on the northeast side of the fault reported by EPM93. This narrow body is at about 5- to 12-km depth and extends approximately from the locked section of the SAF to the town of Parkfield. The footwall of the thrust fault responsible for the 1983 Coalinga earthquake is imaged as a northeast-dipping high-wavespeed body. In between, relatively low wavespeeds are imaged. We use the model to derive absolute locations for about 16,000 earthquakes from 1966 to 2005 and high-precision double-difference locations for 9,000 earthquakes from 1984 to 2005, and also to determine focal mechanisms for 446 earthquakes. These earthquake locations and mechanisms show that the seismogenic fault is a simple planar structure. The aftershock sequence of the 2004 mainshock concentrates into the same structures defined by the pre-2004 seismicity, confirming earlier observations (Waldhauser et al., 2004).

  17. The Characteristic Analysis and Seismic Triggering Study of the M6.2 and M6.1 Dayao Earthquake Sequences in 2003

    Institute of Scientific and Technical Information of China (English)

    Hua Wei; Liu Jie; Zheng Sihua; Chen Zhangli


    The high-resolution hypocenter locations of the mainshocks of July 21 (M6.2) and October 16, 2003 (M6.1), and their aftershock sequences in Dayao, Yunnan, are determined using a double-difference earthquake location algorithm. The results show that the epicenters of the two mainshocks are very close to each other and that the distribution of the aftershock sequences is strongly linear. The distribution of the earthquake sequence is very consistent with the focal mechanisms, and both mainshocks occurred on a nearly vertical right-lateral fault. Unlike most other double earthquakes in the Yunnan area, the aftershock distribution of the M6.2 and M6.1 Dayao earthquakes does not appear to be conjugate but lies along a line, and the two earthquake sequences partly overlap. It can be inferred that they are all controlled by the same fault. The distribution of aftershocks is asymmetrical with respect to the mainshock locations and appears to be unilateral: the aftershocks of the M6.2 mainshock are concentrated northwest of the M6.2 epicenter, the aftershocks of the M6.1 earthquake lie southeast of its mainshock, and the M6.1 earthquake appears to be another rupture on the southeastern extension of the same fault as the M6.2 earthquake. The results for the Coulomb failure static stress changes Δσf show that the earthquake of July 21 (M6.2) apparently triggered the earthquake of October 16 (M6.1), that the two mainshocks exerted stress triggering on their off-fault aftershocks to different extents, and that the M6.5 earthquake that occurred in Yao'an in 2000 also triggered the occurrence of the two Dayao earthquakes.

  18. The 28th September 2004 Parkfield earthquake revisited through high-rate GPS data inversion. (United States)

    Houlié, N.; Dreger, D.; Ahyi, K.; Romanowicz, B.


    Increasingly, Global Positioning System (GPS) data can be used in real time to complement seismic data in providing robust, continuous earthquake information and, potentially, early warning. The occurrence of the Parkfield earthquake on 28 September 2004 provides an opportunity to test the sensitivity and reactivity of the network, the data processing algorithms, and the implementation of GPS static and temporal solutions in finite-source inversions. Incorporation of GPS data in real-time processing algorithms is important for several reasons. First, static deformation with adequate station coverage can be used to independently determine the orientation and dimension of fault rupture, as well as the scalar seismic moment. This processing complements routine moment tensor (MT) processing, providing needed redundancy, but goes beyond the MT with the potential for causative fault plane identification and determination of fault rupture dimensions. The dimensions of the rupture plane derived from GPS data can then be used to improve ShakeMap by accounting for rupture finiteness. Second, the rapidly determined deformation may also be integrated into joint inversions with seismic waveform data for kinematic rupture models. This can be accomplished using static deformation estimates, as well as displacement time series derived from high-rate GPS data. Third, GPS provides a strong-motion displacement-meter capability for the largest earthquakes. Double integration of acceleration to displacement to recover the broadband time series with static offset can be problematic, whereas GPS potentially can measure it directly. We first present the calibration of a GPS time series by comparing it with records from seismic sensors for the Parkfield event, validating the use of GPS in the near field during a large event.
Finite-source inversion results based on static GPS, GPS time series, as well as combinations of these data with seismic records will be compared

  19. 3-D P- and S-wave velocity structure and low-frequency earthquake locations in the Parkfield, California region (United States)

    Zeng, Xiangfang; Thurber, Clifford H.; Shelly, David R.; Harrington, Rebecca M.; Cochran, Elizabeth S.; Bennington, Ninfa L.; Peterson, Dana; Guo, Bin; McClement, Kara


    To refine the 3-D seismic velocity model in the greater Parkfield, California region, a new data set including regular earthquakes, shots, quarry blasts and low-frequency earthquakes (LFEs) was assembled. Hundreds of traces for each LFE family at two temporary arrays were stacked with a time-frequency domain phase-weighted stacking method to improve the signal-to-noise ratio. The LFE data extend our model resolution to lower-crustal depths. Our model images not only previously identified features but also low-velocity zones (LVZs) in the area around the LFEs and in the lower crust beneath the southern Rinconada Fault. The former LVZ is consistent with the high fluid pressure that can account for several aspects of LFE behaviour. The latter LVZ is consistent with a high-conductivity zone seen in magnetotelluric studies. A new Vs model was developed from S picks obtained with a new autopicker. At shallow depth, the low-Vs areas underlie the areas of strongest shaking in the 2004 Parkfield earthquake. We relocate the LFE families and analyse the location uncertainties with the NonLinLoc and tomoDD codes; the two methods yield similar results.

  20. Finite-fault analysis of the 2004 Parkfield, California, earthquake using Pnl waveforms (United States)

    Mendoza, C.; Hartzell, S.


    We apply a kinematic finite-fault inversion scheme to Pnl displacement waveforms recorded at 14 regional stations, using both synthetic Green's functions (SGFs) computed for an assumed crustal structure and an empirical Green's function (EGF) derived from a single Mw 5.0 aftershock. Slip is modeled on a rectangular fault subdivided into 2×2 km subfaults assuming a constant rupture velocity and a 0.5 s rise time. A band-pass filter of 0.1–0.5 Hz is applied to both data and subfault responses prior to waveform inversion. The SGF inversions are performed such that the final seismic moment is consistent with the known magnitude (Mw 6.0) of the earthquake. For these runs, it is difficult to reproduce the entire Pnl waveform due to inaccuracies in the assumed crustal structure. Also, the misfit between observed and predicted vertical waveforms is similar in character for different rupture velocities, indicating that neither the rupture velocity nor the exact position of slip sources along the fault can be uniquely identified. The pattern of coseismic slip, however, compares well with independent source models derived using other data types, indicating that the SGF inversion procedure provides a general first-order estimate of the 2004 Parkfield rupture using the vertical Pnl records. The best-constrained slip model is obtained using the single-aftershock EGF approach. In this case, the waveforms are very well reproduced for both vertical and horizontal components, suggesting that the method provides a powerful tool for estimating the distribution of coseismic slip using regional Pnl waveforms. The inferred slip model shows a localized patch of high slip (55 cm peak) near the hypocenter and a larger slip area (~50 cm peak) extending between 6 and 20 km to the northwest.
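At its core, a kinematic inversion of this kind is a linear least-squares problem d = G·s relating subfault responses to the observed waveforms, usually solved with a positivity constraint on slip. A toy sketch (random numbers stand in for the real subfault Green's functions; all sizes and values are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_data, n_sub = 40, 6                  # 40 waveform samples, 6 subfaults (toy sizes)
G = rng.normal(size=(n_data, n_sub))   # stand-in for subfault Green's functions
true_slip = np.array([0.0, 0.1, 0.55, 0.5, 0.1, 0.0])   # m; peak ~55 cm
d = G @ true_slip + 0.01 * rng.normal(size=n_data)      # synthetic "Pnl" data

# non-negative least squares keeps every subfault slip >= 0
slip, resid = nnls(G, d)
```

Real inversions add smoothing, a moment constraint, and time-shifted subfault responses for the rupture-velocity search, but the positivity-constrained linear solve is the common core.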

  1. Study on S wave splitting in Dayao earthquake sequence with M=6.2 and M=6.1 in Yunnan in 2003

    Institute of Scientific and Technical Information of China (English)

    HUA Wei; LIU Jie; CHEN Zhang-li; ZHENG Si-hua


    The fast-wave polarization direction and the delay time between fast and slow waves were measured, using the cross-correlation coefficient method, for the two earthquake sequences that occurred in close succession on 21 July (M=6.2) and 16 October (M=6.1) 2003 in Dayao, Yunnan, after determining high-resolution hypocentral locations for the events. The phenomenon of S-wave splitting is obvious in both earthquake sequences, and the average fast-wave polarization directions at most stations are nearly consistent with the regional maximum horizontal compressive stress direction, except at station Santai. The fast directions at station Santai are bimodal, with a mean polarization direction of N80°E, inconsistent with the regional maximum horizontal compressive stress direction. Comparison of the S-wave splitting results for the two sequences shows that the polarization directions in the M=6.2 sequence are more scattered and that its average fast direction is 20° larger than that of the M=6.1 sequence; the variation in polarization direction may be due to the stress disturbance imposed by the M=6.2 and M=6.1 mainshocks on the regional background stress field.
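The delay-time measurement in the cross-correlation method amounts to finding the lag that best aligns the slow-wave seismogram with the fast one. A minimal sketch on synthetic pulses (the pulse shape and the 0.06 s delay are made up for illustration):

```python
import numpy as np

def splitting_delay(fast, slow, dt, max_lag):
    """Delay (s) that best aligns the slow component with the fast one,
    found by grid search over the normalized cross-correlation."""
    lags = range(0, int(max_lag / dt) + 1)
    def cc(lag):
        a = fast[: len(fast) - lag]
        b = slow[lag:]
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    best = max(lags, key=cc)
    return best * dt

dt = 0.01
t = np.arange(0, 2, dt)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)      # synthetic S pulse on the fast axis
delayed = np.exp(-((t - 0.56) / 0.05) ** 2)   # same pulse 0.06 s later (slow axis)
delay = splitting_delay(pulse, delayed, dt, max_lag=0.2)   # -> 0.06 s
```

The fast polarization direction itself is found by a companion search over rotation angles of the two horizontal components; only the delay-time half is sketched here.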

  2. Aftershock distribution as a constraint on the geodetic model of coseismic slip for the 2004 Parkfield earthquake (United States)

    Bennington, Ninfa; Thurber, Clifford; Feigl, Kurt


    Several studies of the 2004 Parkfield earthquake have linked the spatial distribution of the event’s aftershocks to the mainshock slip distribution on the fault. Using geodetic data, we find a model of coseismic slip for the 2004 Parkfield earthquake with the constraint that the edges of coseismic slip patches align with aftershocks. The constraint is applied by encouraging the curvature of coseismic slip in each model cell to be equal to the negative of the curvature of seismicity density. The large patch of peak slip about 15 km northwest of the 2004 hypocenter found in the curvature-constrained model is in good agreement in location and amplitude with previous geodetic studies and the majority of strong motion studies. The curvature-constrained solution shows slip primarily between aftershock “streaks” with the continuation of moderate levels of slip to the southeast. These observations are in good agreement with strong motion studies, but inconsistent with the majority of published geodetic slip models. Southeast of the 2004 hypocenter, a patch of peak slip observed in strong motion studies is absent from our curvature-constrained model, but the available GPS data do not resolve slip in this region. We conclude that the geodetic slip model constrained by the aftershock distribution fits the geodetic data quite well and that inconsistencies between models derived from seismic and geodetic data can be attributed largely to resolution issues.

  3. How to predict Italy L'Aquila M6.3 earthquake (United States)

    Guo, Guangmeng


    Based on a satellite cloud anomaly that appeared over eastern Italy on 21-23 April 2012, we successfully predicted the M6.0 earthquake that occurred in northern Italy. Here we checked satellite images of Italy from 2011-2013 and found 21 cloud anomalies. Their possible correlation with earthquakes larger than M4.7 located on Italy's main fault systems was statistically examined by assuming various lead times. The results show that when the lead-time interval is set to 23≤ΔT≤45 days, 8 of the 10 earthquakes were preceded by cloud anomalies. A Poisson random test shows that the AAR (anomaly appearance rate) and EOR (earthquake occurrence rate) are much higher than the values expected by chance. This study supports a relation between cloud anomalies and earthquakes in Italy. With this method, we found that the L'Aquila earthquake could also have been predicted from a cloud anomaly.

  4. Characteristics of crustal strain associated with M=6.4 Baotou earthquake in 1996

    Institute of Scientific and Technical Information of China (English)

    郭良迁; 薄万举; 胡新康; 王敏


    Based on the horizontal crustal strain derived from GPS data and the rate-accumulation intensity calculated from across-fault vertical deformation, the strain characteristics of the Baotou-Datong area in the periods 1992~1995, 1995~1996 and 1996~1999 are studied in this paper. From comparison of the crustal strains before and after the M=6.4 Baotou earthquake of May 3, 1996, it is considered that a high-magnitude strain area with predominant compressive strain might be the seismogenic zone of a coming strong earthquake. An area with simultaneously high surface strain, principal compressive strain, shear strain and tendency accumulation might be a place with a higher risk of strong earthquakes. Generally, an area with low strain and predominant tensile strain has a small possibility of strong-earthquake development and belongs to a stable area. The evolution of horizontal strain obtained from the GPS measurements carried out in the Baotou-Datong area in 1992~1999 reflects the complete development and conclusion of the seismic episode from 1996 to 1998. An area of high, predominantly compressive strain, together with the strain gradient zone, can be considered one of the indicators for delineating future strong-earthquake risk areas.

  5. Coseismic and initial postseismic deformation from the 2004 Parkfield, California, earthquake, observed by global positioning system, electronic distance meter, creepmeters, and borehole strainmeters (United States)

    Langbein, J.; Murray, J.R.; Snyder, H.A.


    Global Positioning System (GPS), electronic distance meter, creepmeter, and strainmeter measurements spanning the M 6.0 Parkfield, California, earthquake are examined. Using these data from 100 s through 9 months following the mainshock, Omori's law, with rate inversely related to time (1/t^p, with p ranging between 0.7 and 1.3), characterizes the time-dependent deformation during the postseismic period; these results are consistent with creep models for elastic solids. With an accurate function for the postseismic response, the coseismic displacements can be estimated from the high-rate, 1-min sampling GPS; these coseismic displacements are approximately 75% of those estimated from the daily solutions. Consequently, fault-slip models using daily solutions overestimate coseismic slip. In addition, at 2 months and at 8 months following the mainshock, postseismic displacements are modeled as slip on the San Andreas fault, with a lower bound on the moment exceeding the coseismic moment.
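The 1/t^p postseismic decay can be fit directly: taking logs of rate = K/t^p gives a straight line in log-log space whose slope is -p. A sketch on synthetic rates (K and p here are chosen arbitrarily, not taken from the study):

```python
import numpy as np

def fit_omori(t, rate):
    """Fit rate = K / t**p by least squares in log-log space."""
    slope, log_k = np.polyfit(np.log(t), np.log(rate), 1)
    return -slope, np.exp(log_k)

# synthetic postseismic rates from 100 s through ~9 months, with p = 1.0
t = np.logspace(2, np.log10(9 * 30 * 86400), 50)   # seconds
rate = 5e3 / t ** 1.0                              # hypothetical K = 5e3

p_est, k_est = fit_omori(t, rate)
```

With real deformation data the rate is noisy and p is resolved only within a range, which is how bounds like 0.7-1.3 arise.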

  6. Changes in repeating earthquake slip behavior following the 2004 Parkfield main shock from waveform empirical Green's functions finite-source inversion (United States)

    Kim, Ahyi; Dreger, Douglas S.; Taira, Taka'aki; Nadeau, Robert M.


    Finite-source inversions are performed using small-earthquake waveforms as empirical Green's functions (eGfs) to investigate the rupture process of repeating earthquakes along the San Andreas Fault in Parkfield, California. The eGf waveform inversion method is applied to a repeating Mw 2.1 Parkfield earthquake sequence using three-component velocity waveforms recorded by an array of borehole seismometers. The obtained models show a circular slip distribution with a ~20 m radius, an average slip of 3.0-4.2 cm on the main asperity, and a peak displacement of 10.6-13.5 cm. The static stress drop distribution shows that the main asperity has a peak stress drop of 69.5-94.7 MPa. The inversion results support the earlier finding of Dreger et al. (2007) that high-strength asperities exist in the rupture areas of the Mw 2.1 events at Parkfield. In addition, a notable temporal reduction in peak slip and stress drop was observed after the 2004 Parkfield event, while the average value remains constant (~12 MPa) over time. These events may represent mechanically strong sections of the fault surrounded by regions undergoing continuous deformation (creep). Given repeated loading of the strong asperities, these repeating earthquakes would be expected to have very similar slip distributions, since the surrounding regions deform aseismically. There are, however, small differences in the waveforms of these repeating earthquakes, possibly because the rupture nucleation points are not in exactly the same location within the region of the fault capable of stick-slip behavior. Our results indicate that waveform slip inversion is needed to reveal spatial and temporal variations of the stress drop within the rupture area and to improve understanding of fault healing and rupture mechanics.
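For a circular rupture, stress drops of this order follow from the Eshelby crack relation Δσ = 7·M0/(16·a³). Plugging in the ~20 m radius and the moment of an Mw 2.1 event (standard Hanks-Kanamori moment-magnitude scaling; the calculation below is a back-of-the-envelope check, not the paper's distributed-slip computation) lands in the tens-of-MPa range reported:

```python
def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.05)

def circular_stress_drop(m0, radius):
    """Static stress drop (Pa) of a circular crack: 7*M0 / (16*a^3)."""
    return 7.0 * m0 / (16.0 * radius ** 3)

m0 = moment_from_mw(2.1)                  # ~1.6e12 N*m
dsig = circular_stress_drop(m0, 20.0)     # a = 20 m, from the inversion
dsig_mpa = dsig / 1e6                     # ~87 MPa, comparable to the peak values
```

That a whole-event average over the same geometry gives a much smaller number (~12 MPa) reflects how concentrated the slip is on the central asperity.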

  7. Apply ETAS in Earthquake Early Warning - A case study of M6.0 South Napa Earthquake (United States)

    Yin, L.; Heaton, T. H.


    Earthquake Early Warning (EEW) involves a trade-off between time and accuracy. We aim to increase the alerting time without losing reliability. This can be achieved by using prior information to classify a pick as a true or false event, then issuing alerts immediately after the first trigger. Since earthquakes cluster in time and location, potential aftershock occurrences can be predicted using the Epidemic-Type Aftershock Sequence (ETAS) model. We show that by applying the prior information provided by ETAS in the Bayesian updating process of EEW, we can significantly improve the alerting time. As an example, epicenter estimation for aftershocks of the M6.0 South Napa earthquake is performed using ETAS to illustrate the accuracy of aftershock prediction. During an aftershock sequence, most triggers at the closest stations turn out to be real earthquakes; as a result, during the aftershock sequence of the South Napa earthquake, warnings can be issued after observations from only one or two stations.
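The ETAS prior used here models seismicity as a background rate plus an Omori-type contribution from every past event, scaled exponentially by magnitude. A minimal sketch of the conditional intensity (parameter values are illustrative, not fitted to Napa data):

```python
import numpy as np

def etas_rate(t, events, mu=0.1, k=0.05, c=0.01, p=1.1, alpha=1.0, m_c=3.0):
    """ETAS conditional intensity (events/day) at time t (days),
    given past (time, magnitude) pairs. Parameters are illustrative."""
    rate = mu                                  # background seismicity
    for t_i, m_i in events:
        if t_i < t:
            # each past event adds an Omori-decaying, magnitude-scaled term
            rate += k * np.exp(alpha * (m_i - m_c)) / (t - t_i + c) ** p
    return rate

events = [(0.0, 6.0)]                          # M6.0 mainshock at t = 0
early = etas_rate(0.1, events)                 # hours after the mainshock
late = etas_rate(10.0, events)                 # ten days later
```

In the EEW setting, a high ETAS rate near a recent mainshock raises the prior probability that a single-station trigger is a real event, which is what lets alerts go out after only one or two station observations.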

  8. Jet streams anomalies as possible short-term precursors of earthquakes with M>6.0

    Directory of Open Access Journals (Sweden)

    Hong-Chun Wu


    Satellite thermal imagery has revealed the existence of thermal fields connected with large linear structures and systems of crustal faults. The measurement height of outgoing longwave radiation lies within the range of the jet stream. This work describes a possible link between strong earthquakes and jet streams in two regions. The front or tail ends of jet groups maintained their position for 6 or more hours in the vicinity of the epicenters of strong (M>6.0) earthquakes in 2006-2010. The probability of observing such stationary jet-stream behavior is estimated at 93.6% of cases on one six-hour map and 26.7% of cases on two adjacent maps. The median of the distribution of distances between epicenters and the relevant jet-stream positions is 36.5 km. The estimated cumulative probability of a prediction being realized was 24.2% within 10 days, 48.4% within 20 days, 66.1% within 30 days, 87.1% within 40 days, 93.5% within 50 days and 100% within 70 days. The observed precursory effects are of considerable interest for possible use in real short-term earthquake prediction.

  9. E-DECIDER Rapid Response to the M 6.0 South Napa Earthquake (United States)

    Glasscoe, M. T.; Parker, J. W.; Pierce, M. E.; Wang, J.; Eguchi, R. T.; Huyck, C. K.; Hu, Z.; Chen, Z.; Yoder, M. R.; Rundle, J. B.; Rosinski, A.


    E-DECIDER initiated rapid response mode when the California Earthquake Clearinghouse was activated the morning following the M6 Napa earthquake. Data products, including: 1) rapid damage and loss estimates, 2) deformation magnitude and slope change maps, and 3) aftershock forecasts were provided to the Clearinghouse partners within 24 hours of the event via XchangeCore Web Service Data Orchestration sharing. NASA data products were provided to end-users via XchangeCore, EERI and Clearinghouse websites, and ArcGIS online for Napa response, reaching a wide response audience. The E-DECIDER team helped facilitate rapid delivery of NASA products to stakeholders and participated in Clearinghouse Napa earthquake briefings to update stakeholders on product information. Rapid response products from E-DECIDER can be used to help prioritize response efforts shortly after the event has occurred. InLET (Internet Loss Estimation Tool) post-event damage and casualty estimates were generated quickly after the Napa earthquake. InLET provides immediate post-event estimates of casualties and building damage by performing loss/impact simulations using USGS ground motion data and FEMA HAZUS damage estimation technology. These results were provided to E-DECIDER by their collaborators, ImageCat, Inc. and the Community Stakeholder Network (CSN). Strain magnitude and slope change maps were automatically generated when the Napa earthquake appeared on the USGS feed. These maps provide an early estimate of where the deformation has occurred and where damage may be localized. Using E-DECIDER critical infrastructure overlays with damage estimates, decision makers can direct response effort that can be verified later with field reconnaissance and remote sensing-based observations. Earthquake aftershock forecast maps were produced within hours of the event. 
These maps highlight areas where aftershocks are likely to occur and can also be coupled with infrastructure overlays to help direct response.

  10. California Earthquake Clearinghouse Activation for August 24, 2014, M6.0 South Napa Earthquake (United States)

    Rosinski, A.; Parrish, J.; Mccrink, T. P.; Tremayne, H.; Ortiz, M.; Greene, M.; Berger, J.; Blair, J. L.; Johnson, M.; Miller, K.; Seigel, J.; Long, K.; Turner, F.


    The Clearinghouse's principal functions are to 1) coordinate field investigations of earth scientists, engineers, and other participating researchers; 2) facilitate sharing of observations through regular meetings and through the Clearinghouse website; and 3) notify disaster responders of crucial observations or results. Shortly after 3:20 a.m. on August 24, 2014, the Clearinghouse management committee organizations, the California Geological Survey (CGS), the Earthquake Engineering Research Institute (EERI), the United States Geological Survey (USGS), the California Office of Emergency Services (CalOES), and the California Seismic Safety Commission (CSSC), authorized activation of a virtual Clearinghouse and a physical Clearinghouse location. The California Geological Survey, which serves as the permanent lead coordination organization for the Clearinghouse, provided all coordination with the state for the resources required for Clearinghouse activation. The Clearinghouse physical location, including a mobile satellite communications truck, was opened at a Caltrans maintenance facility located at 3161 Jefferson Street in Napa. This location remained active through August 26, 2014, during which time it drew the participation of over 100 experts from more than 40 different organizations, and over 1730 remote visitors via the Virtual Clearinghouse and online data compilation map. The Clearinghouse conducted three briefing calls each day with the State Operations Center (SOC) and Clearinghouse partners, and also conducted nightly briefings with field personnel, accessible to remote participants via webex. Data collected by field researchers were compiled into a map through the efforts of EERI and USGS volunteers in the Napa Clearinghouse. EERI personnel continued to update the compilation map for an extended period following deactivation of the Clearinghouse; in addition, EERI managed the Clearinghouse website. Two overflights were conducted.

  11. Dynamically Triggered Earthquakes in the Geysers Region following the 2014 M6.0 South Napa Earthquake (United States)

    Meng, X.; Peng, Z.; Aiken, C.; Kilb, D.


    The 08/24/2014 M6.0 South Napa earthquake is the largest seismic event to strike the San Francisco Bay Area since the 10/17/1989 M6.9 Loma Prieta earthquake, and it caused severe damage near the epicenter. Based on the Northern California Seismic Network (NCSN) catalog, we find a clear increase of seismicity near the Geysers Geothermal Field following the South Napa event; the field lies along the rupture directivity path, ~50 km NNW of the hypocenter. Visually inspecting 10 Hz high-pass-filtered waveforms at seismic stations near the Geysers, we can identify many local earthquakes during the surface waves of the mainshock that are missing from the NCSN catalog. To obtain a more complete catalog, we apply a recently developed matched-filter technique to detect new events within continuous seismic recordings from 74 seismic stations near the Geysers. We use 4000 local earthquakes listed in the NCSN catalog from 06/01/2014 to 09/10/2014 as templates and systematically scan continuous data within ±7 days of the South Napa mainshock. We detect ~10 times more earthquakes than in the NCSN catalog, and the magnitude of completeness decreases from 0.75 to -0.6. Of the 8091 newly detected events, 28 occurred within the mainshock wavetrain. Depending on the filter used, the first triggered event has an inferred magnitude in the range 3.6-4.0. The intense seismic activity near the Geysers gradually decays with a p-value of ~0.7 and returns to the pre-shock level in about one day. We fit the seismicity rate in the week prior to the South Napa event with the Epidemic-Type Aftershock Sequence (ETAS) model and extrapolate to obtain a post-mainshock rate. The observed post-mainshock seismicity rate clearly deviates from the ETAS prediction, which suggests that not all of the increased seismicity near the Geysers can be explained as aftershocks of the first triggered event; instead, these new events may be associated with stress transients (e.g., creep) or fluids.
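The matched-filter step slides each template through the continuous data and declares a detection where the normalized cross-correlation exceeds a threshold, commonly a multiple of the median absolute deviation (MAD) of the correlation trace. A self-contained sketch on synthetic data (the 9×MAD threshold mirrors common practice; the template, noise level, and sizes are made up):

```python
import numpy as np

def matched_filter(template, data, threshold_mad=9.0):
    """Slide a waveform template through continuous data; flag samples
    where the normalized cross-correlation exceeds threshold_mad * MAD."""
    n = len(template)
    tpl = (template - template.mean()) / (np.std(template) * n)
    cc = np.empty(len(data) - n)
    for i in range(len(data) - n):
        w = data[i:i + n]
        w = w - w.mean()
        cc[i] = np.dot(tpl, w) / (np.std(w) + 1e-12)
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(cc > threshold_mad * mad)[0], cc

rng = np.random.default_rng(1)
template = rng.normal(size=100)          # stand-in for an NCSN template event
data = 0.1 * rng.normal(size=5000)       # continuous noise record
data[3000:3100] += template              # bury one repeat of the template
detections, cc = matched_filter(template, data)
```

Because the correlation normalizes out amplitude, the same template catches much smaller repeats of itself, which is how the technique lowers the magnitude of completeness so dramatically.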

  12. Calculation of the Rate of M>6.5 Earthquakes for California and Adjacent Portions of Nevada and Mexico (United States)

    Frankel, Arthur; Mueller, Charles


    One of the key issues in the development of an earthquake recurrence model for California and adjacent portions of Nevada and Mexico is the comparison of the predicted rates of earthquakes with the observed rates. It is therefore important to make an accurate determination of the observed rate of M>6.5 earthquakes in California and the adjacent region. We have developed a procedure to calculate observed earthquake rates from an earthquake catalog, accounting for magnitude uncertainty and magnitude rounding, and we present a Bayesian method that corrects for the effect of magnitude uncertainty in calculating the observed rates. Our recommended estimate of the observed rate of M>6.5 earthquakes in this region is 0.246 ± 0.085 (two sigma) per year, although this rate is likely underestimated because of catalog incompleteness, and the uncertainty estimate does not include all sources of uncertainty.
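The effect of magnitude uncertainty can be sketched as follows: with Gaussian magnitude errors and a Gutenberg-Richter (exponential) prior on true magnitude, the posterior for the true magnitude is again Gaussian with its mean pulled down by b·ln(10)·σ², and the expected count of true M≥6.5 events is the sum of per-event exceedance probabilities. (The b-value, σ, and catalog magnitudes below are illustrative, not the paper's values.)

```python
import math

def prob_true_mag_exceeds(m_obs, m_cut=6.5, sigma=0.2, b=1.0):
    """Posterior probability that the true magnitude exceeds m_cut,
    given an observed magnitude with Gaussian error sigma and a
    Gutenberg-Richter prior with b-value b. The G-R prior shifts the
    posterior mean down by b * ln(10) * sigma**2."""
    m_post = m_obs - b * math.log(10) * sigma ** 2
    z = (m_cut - m_post) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))   # standard normal tail P(X >= z)

# expected count of true M>=6.5 events from a toy list of observed magnitudes
observed = [6.4, 6.5, 6.6, 6.8]                # hypothetical catalog entries
expected = sum(prob_true_mag_exceeds(m) for m in observed)
```

Note that an event observed at exactly M 6.5 has a posterior exceedance probability below one half: because small earthquakes are more numerous, the observed magnitude is more likely an overestimate than an underestimate.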

  13. Seismo-Ionospheric Precursor in the GIM TEC of the 24 August 2014 M6 Napa Earthquake (United States)

    Wu, T. Y.; Liu, T. J. Y.; Liu, J. Y.


    This study examines seismo-ionospheric precursors (SIPs) in the global ionosphere map (GIM) of total electron content (TEC) associated with the 24 August 2014 M6 South Napa earthquake, together with statistical evidence for SIPs in the GPS TEC of the western USA during 2000-2014. The temporal SIP appears as GIM TEC around the epicenter significantly decreasing (a negative anomaly) on 22 August. To discriminate between global effects, such as solar flares and magnetic storms, and local effects, such as earthquakes, 5183 lattice points on the GIM are employed to conduct a global search of the SIP distribution. Anomalies of both the GIM TEC and the associated gradients appearing specifically and continuously over the epicenter suggest that the SIP is related to the 2014 South Napa earthquake. A simulation is further carried out to reproduce the SIP in the GIM TEC before the earthquake. The results indicate that an eastward electric field generated over the epicenter area during the earthquake preparation period is essential.
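SIP searches of this kind typically flag a day whose TEC falls outside running median ± k·IQR bounds built from the preceding days. A minimal sketch with an injected negative anomaly (the window length, k, and synthetic series are arbitrary choices, not the paper's exact procedure):

```python
import numpy as np

def tec_anomalies(tec, window=15, k=1.5):
    """Flag days whose TEC falls outside median +/- k*IQR of the
    preceding `window` days (a common SIP-style bound; parameters
    are illustrative)."""
    flags = []
    for i in range(window, len(tec)):
        prev = tec[i - window:i]
        med = np.median(prev)
        iqr = np.percentile(prev, 75) - np.percentile(prev, 25)
        if tec[i] < med - k * iqr:
            flags.append((i, 'negative'))
        elif tec[i] > med + k * iqr:
            flags.append((i, 'positive'))
    return flags

rng = np.random.default_rng(2)
tec = 30 + rng.normal(0, 1, 60)     # quiet-time TEC in TECU (synthetic)
tec[50] -= 8                        # injected negative anomaly ("22 August")
flags = tec_anomalies(tec)
```

The global lattice search in the paper plays the role that the quiet-time window plays here: a TEC depression that is anomalous only over the epicenter, and not worldwide, is what distinguishes a candidate SIP from a geomagnetic storm.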

  14. Near-Field Deformation Associated with the M6.0 South Napa Earthquake Surface Rupture (United States)

    Brooks, B. A.; Hudnut, K. W.; Glennie, C. L.; Ericksen, T.


    We characterize near-field deformation associated with the surface rupture of the M6.0 South Napa earthquake from repeat mobile laser scanning (MLS) surveys. Starting the day after the mainshock, we operated, sometimes simultaneously, short-range (~75 m) and medium-range (~400 m) laser scanners on a truck or backpack. We scanned most of the length of the principal and secondary surface ruptures at speeds of less than 10 km/hr. Scanning occurred primarily in either suburban subdivisions or cultivated vineyards of varying varietals, with differing leaf patterns and stages of maturity. Spot spacing is dense enough (100s of points/m^2) to permit creation of 10-25 cm digital elevation models of much of the surface rupture. Scanned features of the right-lateral rupture include classic mole tracks through a variety of soil types, en echelon cracks, offset vine rows, and myriad types of pavement-related deformation. We estimate coseismic surface displacements ranging from 5 to 45 cm by examining offset cultural features and vine rows and by comparing the MLS data with preexisting airborne laser scans from 2003, using point-cloud and solid-modeling methodologies. Additionally, we conducted repeat MLS scans to measure the magnitude and spatial variation of fault afterslip, which exceeded 20 cm in some places, particularly in the southern portion of the rupture zone. We anticipate that these data sets, in conjunction with independently collected ground-based alinement arrays and space-based geodetic data, will contribute significant insight into topics of current debate, including the most appropriate material models for shallow fault zones and how shallow and deeper fault slip relate to one another.

  15. GPS station short-term dynamic characteristics of micro displacement before Menyuan M6.4 earthquake

    Directory of Open Access Journals (Sweden)

    Wei Feng


    Continuous observation data from 24 GPS stations in the area (33.0°N–41.0°N, 95.0°E–105.0°E) are selected for this study (the period is from Jan. 1, 2015 to Jan. 20, 2016). The three components of the daily solutions, NS, EW and UD, are filtered by the Hilbert–Huang transform (HHT) with a frequency band of 5.787 × 10−7 to 7.716 × 10−8 Hz (20–150 days in period), and the short-term dynamic characteristics of micro-displacement before the Menyuan M6.4 earthquake are studied using temporal dependencies and cross-spectrum analysis. The results show that before the earthquake the horizontal undulatory motions are higher than the average level of the series, indicating disturbance of the regional stress field before the earthquake. Three GPS stations on the Qinghai-Tibet Plateau, sited perpendicular to the seismogenic fault, show consistent movement. The increase in amplitude of horizontal micro-motion observed before the quake is conducive to earthquake occurrence; however, we cannot be sure whether the undulatory motion triggered the earthquake. It is necessary to build more continuous GPS observation stations and to optimize the monitoring network so as to improve understanding of short-term dynamic crustal variation before earthquakes.
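The 20-150 day band selection can be approximated, for illustration, with an ordinary zero-phase Butterworth band-pass applied to the daily solutions (a simple stand-in for the HHT actually used in the study; the synthetic trend-plus-wobble series below is made up):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_days(series, low_period=150.0, high_period=20.0, order=4):
    """Zero-phase Butterworth band-pass of a daily position series,
    passing 20-150 day periods. (A stand-in for the HHT band
    selection used in the study, not the HHT itself.)"""
    nyq = 0.5                                  # cycles/day at daily sampling
    b, a = butter(order,
                  [1.0 / low_period / nyq, 1.0 / high_period / nyq],
                  btype='band')
    return filtfilt(b, a, series)              # filtfilt -> zero phase shift

t = np.arange(385.0)                               # days, ~Jan 2015 - Jan 2016
series = 0.01 * t + np.sin(2 * np.pi * t / 60.0)   # mm: secular trend + 60-day wobble
micro = bandpass_days(series)                      # trend removed, wobble retained
```

Removing the secular tectonic trend while retaining the 20-150 day band is what isolates the "micro displacement" undulations the study compares across stations.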

  16. Geodetic constraints on the 2014 M 6.0 South Napa earthquake (United States)

    Barnhart, William D.; Murray, Jessica R.; Yun, S H; Svarc, Jerry L.; Samsonov, SV; Fielding, EJ; Brooks, Benjamin A.; Milillo, Pietro


    On 24 August 2014, the M 6.0 South Napa earthquake shook much of the San Francisco Bay area, leading to significant damage in the Napa Valley. The earthquake occurred in the vicinity of the West Napa fault (122.313° W, 38.22° N, 11.3 km depth), a mapped structure located between the Rodger's Creek and Green Valley faults, with nearly pure right-lateral strike-slip motion (strike 157°, dip 77°, rake –169°; source last accessed December 2014) (Fig. 1). The West Napa fault previously experienced an M 5 strike-slip event in 2000 but otherwise exhibited no definitive evidence of historic earthquake rupture (Rodgers et al., 2008; Wesling and Hanson, 2008). Evans et al. (2012) found slip rates of ∼9.5 mm/yr along the West Napa fault, with most slip-rate models for the Bay area placing higher slip rates and greater earthquake potential on the Rodger's Creek and Green Valley faults (e.g., Savage et al., 1999; d'Alessio et al., 2005; Funning et al., 2007).

  17. SELF and VLF electromagnetic emissions that preceded the M6.2 Central Italy earthquake occurred on August 24, 2016 (United States)

    Cataldi, Daniele; Cataldi, Gabriele; Straser, Valentino


    On August 24, 2016 at 01:36:32 UTC a destructive earthquake of magnitude M6.2 hit Central Italy. The authors of this study recorded electromagnetic signals that preceded this strong earthquake. The signals were recorded by two electromagnetic monitoring stations built by Gabriele Cataldi and Daniele Cataldi, located near the towns of Albano Laziale (Rome, Italy) and Lariano (Rome, Italy), which monitor the radio spectrum between 0.001 Hz and 96 kHz (SELF-LF band) 24/7. The monitoring identified two interesting types of electromagnetic anomaly: the first was recorded on August 18, 2016 between 02:47 UTC and 06:21 UTC, in the VLF band, prevalently between 18 kHz and 26 kHz; the second was recorded between 08:00 UTC on August 23, 2016 and 05:00 UTC on August 24, 2016, prevalently between 0.01 and 0.7 Hz, with the most intense signals recorded at 08:50 UTC on August 23, 2016 and approximately 1 hour before the strong earthquake. Monitoring of the Earth's electromagnetic background in the SELF-VLF band has led us to recognize two families of pre-seismic radio emissions: 1) radio emissions identified as disturbances of the Earth's geomagnetic field related to near-Earth increases in solar-wind proton density, which can therefore be seen from any point on Earth (these are "non-local" emissions); and 2) radio signals not connected directly to solar and geomagnetic activity, probably generated by piezoelectric phenomena occurring near the focal area of the earthquake and detectable near the epicenter (these are "local" emissions). Monitoring of solar activity and the Earth's geomagnetic activity is therefore of fundamental importance for a general understanding of the nature of pre-seismic radio signals.

  18. Near-Field Deformation Associated with the South Napa Earthquake (M 6.0) Using Differential Airborne LiDAR (United States)

    Hudnut, K. W.; Glennie, C. L.; Brooks, B. A.; Hauser, D. L.; Ericksen, T.; Boatwright, J.; Rosinski, A.; Dawson, T. E.; Mccrink, T. P.; Mardock, D. K.; Hoirup, D. F., Jr.; Bray, J.


    Pre-earthquake airborne LiDAR coverage exists for the area impacted by the M 6.0 South Napa earthquake. The Napa watershed data set was acquired in 2003, and data sets were acquired in other portions of the impacted area in 2007, 2010 and 2014. The pre-earthquake data are being assessed and are of variable quality and point density. Following the earthquake, a coalition was formed to enable rapid acquisition of post-earthquake LiDAR. Coordination of this coalition took place through the California Earthquake Clearinghouse; consequently, a commercial contract was organized by the Department of Water Resources that allowed the main fault rupture and damaged Browns Valley area to be covered 16 days after the earthquake at a density of 20 points per square meter over a 20 square kilometer area. Along with the airborne LiDAR, aerial imagery was acquired and will be processed to form an orthomosaic using the LiDAR-derived DEM. The 'Phase I' airborne data were acquired by Towill using an Optech Orion M300 scanner, an Applanix 200 GPS-IMU, and a DiMac ultralight medium-format camera. These new data, once delivered, will be differenced against the pre-earthquake data sets using a newly developed point-cloud matching algorithm, which improves on prior methods by accounting for scan-geometry error sources. Proposed additional 'Phase II' coverage would allow repeat-pass, post-earthquake coverage of the same area of interest as in Phase I, as well as an addition of up to 4,150 square kilometers that would potentially allow for differential LiDAR assessment of levee and bridge impacts at a greater distance from the earthquake source. Levee damage was reported up to 30 km away from the epicenter, and the proposed LiDAR coverage would extend up to 50 km away and cover important critical lifeline infrastructure in the western Sacramento River delta, as well as providing full post-earthquake repeat-pass coverage of the Napa watershed to study transient deformation.

  19. Validation of a ground motion synthesis and prediction methodology for the 1988, M=6.0, Saguenay Earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Jarpe, S.; Kasameyer, P.; Foxall, W.


    We model the 1988, M=6.0, Saguenay earthquake. We utilize an approach that has been developed to predict strong ground motion. This approach involves developing a set of rupture scenarios based upon bounds on rupture parameters. Rupture parameters include rupture geometry, hypocenter, rupture roughness, rupture velocity, healing velocity (rise times), slip distribution, asperity size and location, and slip vector. "Scenario" here refers to specific values of these parameters for a hypothesized earthquake. Synthetic strong ground motions are then generated for each rupture scenario, and a sufficient number of scenarios are run to span the variability in strong ground motion due to the source uncertainties. By having a suite of rupture scenarios of hazardous earthquakes for a fixed magnitude and identifying the hazard to the site from the one-standard-deviation value of engineering parameters, we have introduced a probabilistic component into the deterministic hazard calculation. For this study we developed bounds on rupture scenarios from previous research on this earthquake. The time history closest to the observed ground motion was selected as a model for the Saguenay earthquake.
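    The scenario-suite idea can be sketched as follows. Here `synthetic_pga` is a hypothetical stand-in for the full synthetic-seismogram calculation, and the parameter bounds are illustrative, not the values used for Saguenay.

```python
import math
import random
import statistics

random.seed(0)

def synthetic_pga(rupture_velocity_kms, rise_time_s, asperity_depth_km):
    """Hypothetical stand-in for a full synthetic-seismogram calculation:
    maps one rupture scenario to a peak ground acceleration (g). The
    functional form is illustrative only, not a physical model."""
    return 0.1 * (rupture_velocity_kms / 3.0) * (1.0 / rise_time_s) * \
        math.exp(-asperity_depth_km / 20.0)

# Span the assumed bounds on rupture parameters with random scenarios.
pgas = []
for _ in range(1000):
    vr = random.uniform(2.4, 3.0)    # rupture velocity, km/s
    tr = random.uniform(0.5, 2.0)    # rise time, s
    zd = random.uniform(5.0, 25.0)   # asperity depth, km
    pgas.append(synthetic_pga(vr, tr, zd))

# One-standard-deviation hazard value over the scenario suite.
hazard_pga = statistics.mean(pgas) + statistics.stdev(pgas)
```

    Taking the one-standard-deviation value over the suite, rather than a single deterministic scenario, is what folds source-parameter uncertainty into the hazard estimate.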

  20. Stress-based aftershock forecasting: the 2008 M=7.9 Wenchuan, and 2013 M=6.6 Lushan earthquakes (United States)

    Parsons, T.


    Immediately after the 12 May 2008 M=7.9 Wenchuan earthquake, static stress change calculations were made on the major faults surrounding the rupture zone. The purpose was two-fold: (1) to identify the most likely locations (stress increases) of dangerous aftershocks, and (2) to conduct a prospective test of stress mapping as a rapid-response forecast tool. The occurrence of the 20 April M=6.6 Lushan earthquake in the Longmen fault zone near Ya'an was consistent with the static stress forecast, but a formal evaluation of the post-Wenchuan forecast performance was not favorable because the anticipated aftershock distribution was violated, with clear seismicity rate increases in stress shadow zones. Here I look at reconciling these results and ask the question: are static stress change calculations more applicable to larger aftershocks? A single case such as the Wenchuan-Lushan pairing could readily be a coincidence, so I examine additional large continental earthquakes and their aftershock magnitude relations. Results show that (1) the most probable place for high-magnitude aftershocks to occur is in areas with the highest aftershock activity, (2) high-magnitude aftershocks are most likely to happen where stress change calculations are greatest, and (3) high-magnitude aftershocks are most likely to happen on well-developed fault zones. All three of these points are fairly obvious, but a conclusion that can be drawn from the 2008 M=7.9 Wenchuan and 2013 M=6.6 Lushan pair is that all three are necessary considerations. The location of the 2013 M=6.6 Lushan earthquake was consistent with stress change calculations, although there was virtually no precursory activity in its immediate vicinity; a forecast based only on elevated activity rates would therefore not have anticipated its location.
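    A static Coulomb stress change of the kind mapped here combines the shear and normal stress changes resolved on a receiver fault. A minimal sketch, with an assumed effective friction coefficient:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault (Pa).
    d_shear:  shear stress change resolved in the slip direction
              (positive promotes slip)
    d_normal: normal stress change (positive = unclamping)
    mu_eff:   effective friction coefficient (assumed; pore pressure lowers it)
    """
    return d_shear + mu_eff * d_normal

# Example: 0.05 MPa of shear loading partly offset by 0.02 MPa of clamping.
dcfs = coulomb_stress_change(5.0e4, -2.0e4)   # 4.2e4 Pa = 0.042 MPa
```

    A positive result marks a fault brought closer to failure (a candidate aftershock site); a negative result marks a stress shadow, which is where the post-Wenchuan forecast unexpectedly saw rate increases.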

  1. The Parkfield Stress Drop Controversy (United States)

    Abercrombie, R. E.; Nadeau, R. M.


    Nadeau et al. (1995) found that the seismicity on the San Andreas fault at Parkfield is highly clustered. Individual clusters consist of a sequence of near-periodically repeating small earthquakes of similar seismic moment. Nadeau and Johnston (1998) compared the moments and timing of these repeating earthquakes with the fault creep rate, and inferred extremely high stress drops (Δσ of order 1000 MPa) for the small earthquakes (Mw < 1). Such patches of high Δσ would be resolvable by standard seismic methods. However, to date nobody has used seismic methods to determine source parameters for these controversial small earthquakes at Parkfield. We use closely located earthquakes of different sizes (for example, the sub-clusters of cluster CL14, Nadeau et al., 1995, Mw -0.2 to 1), recorded on the HRSN borehole network, to analyse the source parameters. The smaller earthquakes are used as empirical Green's functions to resolve the source processes of the larger events. Preliminary results from the earthquakes in cluster CL14 yield a source dimension of about 25 m and Δσ of about 1 MPa for the Mw ~1 earthquakes, assuming that the rupture velocity is the same as that for large earthquakes. We also resolve source-time functions for these earthquakes at most stations, so we can investigate the directivity and velocity of the rupture. Finally, we compare the source parameter estimates from the seismic modeling with those from recurrence and creep rate, and assess the validity of the various proposed models.
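    The stress drops at issue follow from the circular-crack relation Δσ = 7·M0/(16·r³). Plugging in a Mw 1 event and the ~25 m dimension reported above (treated here as the source radius, an assumption) reproduces the ~1 MPa figure:

```python
import math

def moment_from_mw(mw):
    """Seismic moment (N·m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.1)

def stress_drop(m0, radius_m):
    """Circular-crack (Eshelby) stress drop in Pa: 7*M0 / (16*r^3)."""
    return 7.0 * m0 / (16.0 * radius_m ** 3)

# A Mw 1 event with a 25 m source radius gives roughly 1 MPa.
dsig = stress_drop(moment_from_mw(1.0), 25.0)   # ~1.1e6 Pa
```

    Because Δσ scales as 1/r³, shrinking the inferred radius by a factor of 10 raises the stress drop by a factor of 1000, which is how the tiny patches implied by recurrence arguments lead to the controversial GPa-level estimates.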

  2. Slip-rate increase at Parkfield in 1993 detected by high-precision EDM and borehole tensor strainmeters (United States)

    Langbein, J.; Gwyther, R.L.; Hart, R.H.G.; Gladwin, M.T.


    On two of the instrument networks at Parkfield, California, the two-color Electronic Distance Meter (EDM) network and Borehole Tensor Strainmeter (BTSM) network, we have detected a rate change starting in 1993 that has persisted at least 5 years. These and other instruments capable of measuring crustal deformation were installed at Parkfield in anticipation of a moderate, M6, earthquake on the San Andreas fault. Many of these instruments have been in operation since the mid 1980s and have established an excellent baseline to judge changes in rate of deformation and the coherence of such changes between instruments. The onset of the observed rate change corresponds in time to two other changes at Parkfield. From late 1992 through late 1994, the Parkfield region had an increase in number of M4 to M5 earthquakes relative to the preceding 6 years. The deformation-rate change also coincides with the end of a 7-year period of sub-normal rainfall. Both the spatial coherence of the rate change and hydrological modeling suggest a tectonic explanation for the rate change. From these observations, we infer that the rate of slip increased over the period 1993-1998.
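    Detecting a persistent rate change like the 1993 onset amounts to comparing least-squares deformation rates before and after a candidate time. A minimal sketch on synthetic line-length data; the rates below are illustrative, not the measured Parkfield values:

```python
def fit_slope(t, y):
    """Least-squares slope of y versus t."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    return sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) / \
        sum((ti - tm) ** 2 for ti in t)

def rate_change(t, y, t_break):
    """Deformation rate after minus before a candidate onset time."""
    before = [(ti, yi) for ti, yi in zip(t, y) if ti < t_break]
    after = [(ti, yi) for ti, yi in zip(t, y) if ti >= t_break]
    return (fit_slope([p[0] for p in after], [p[1] for p in after])
            - fit_slope([p[0] for p in before], [p[1] for p in before]))

# Synthetic line-length series: 1.0 mm/yr before 1993, 1.5 mm/yr after.
t = [1986 + 0.1 * i for i in range(120)]
y = [(ti - 1986) * 1.0 if ti < 1993 else 7.0 + (ti - 1993) * 1.5 for ti in t]
dr = rate_change(t, y, 1993.0)   # ~0.5 mm/yr
```

    The coherence argument in the abstract corresponds to seeing a consistent `dr` across many baselines and both instrument types, which a single-site hydrological effect would not produce.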

  3. Soil radon and electromagnetic anomalies before the Ileia (Greece) M6.8 earthquake (United States)

    Nikolopoulos, D.; Vogiannis, E.; Louizi, A.; Zisos, A.


    Radon (222Rn) is a radioactive gas generated by the decay of the naturally occurring 238U series. It is considered very important from a radiological point of view, since it accounts for more than half of the natural radiation exposure of the general public. Radon has been used as a trace gas in several studies of the Earth, hydrogeology and the atmosphere, due to its 3.82-day half-life (which allows migration over long distances) and its alpha decay (which enables low levels of detection). It has been employed in the search for earthquake precursors, and in studies of volcanic processes, fluid circulation in karstic sources and the natural ventilation of underground cavities. Radon anomalies preceding great earthquakes have been observed in groundwater, thermal waters, soil gas and underground tunnels. Ileia is a very active tectonic site located in SW Greece, dominated by extensional active seismicity structures (e.g. Alfeios, Neda, Melpeia, Kiparissia-Aetos). Its instrumental and felt seismicity is very high, with more than 600 earthquakes of magnitude greater than 4.0 in the last 100 years, two of which occurred during the last 15 years and were very destructive (M 5.8 on 26/3/93 and M 6.8 on 8/6/08, respectively). Hence, it is an area that benefits from the installation of a geophysical monitoring station, where radon exhalation associated with the accumulation or release of tectonic strain can be studied. In this context, a station for the surveillance of soil radon has been installed in Kardamas, Ileia, 3 km south of Amaliada, the second most populated city of the area. The station consists of a high-precision (calibration-certified) active instrument (AlphaGuard-AG, Genitron Ltd.), equipped with an appropriate unit designed for pumping and measurement of radon in soil gas (Soil Gas Unit, Genitron Ltd.). Soil radon is driven into the AG via a 1-m probe (to minimize meteorological influences) and a 25-m radon-proof 25-mm tube (to avoid simultaneous measurement of soil 220Rn
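    The 3.82-day half-life quoted above is what lets soil gas carry a radon signal over long migration distances before it decays; the surviving fraction after time t is exp(−ln2 · t / T½):

```python
import math

RADON_HALF_LIFE_DAYS = 3.82

def radon_fraction(days):
    """Fraction of an initial 222Rn concentration surviving after `days`."""
    return math.exp(-math.log(2.0) * days / RADON_HALF_LIFE_DAYS)

remaining_1d = radon_fraction(1.0)    # ~0.83: most survives a day of transport
remaining_2w = radon_fraction(14.0)   # ~0.08: signal largely gone in two weeks
```

    By contrast, 220Rn (thoron, half-life under a minute) decays before it can travel far, which is why the long intake tube excludes it from the measurement.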

  4. Surface Fault Rupture from the M6.0 South Napa Earthquake of Aug. 24, 2014 (United States)

    Ponti, D. J.; Dawson, T. E.; Schwartz, D. P.; Brooks, B. A.; DeLong, S. B.; Hecker, S.; Hudnut, K. W.; Kelson, K. I.; Lienkaemper, J. J.; Prentice, C. S.; Rosa, C. M.; Rubin, R. S.; Seitz, G. G.; Sickler, R. R.; Wesling, J. R.


    The South Napa earthquake produced the largest and most extensive coseismic surface rupture of any documented California earthquake of similar magnitude. More than 14 km of complex surface faulting, extending from the Napa River at Cuttings Wharf northward beyond the north boundary of Alston Park in the city of Napa, occurred on two principal sub-parallel N-NW trending fault strands. Other minor sub-parallel rupture zones (≤1.5 km in length with ~1-3 cm displacements) were identified near the principal strands. The surface rupture lies primarily NW of the epicenter and W of most of the mapped traces of the West Napa fault zone, but rupture was locally coincident with portions of some mapped late Quaternary and older fault traces. Geomorphic expressions of prior faulting are observed intermittently along the main traces. Surface displacements are predominantly right lateral and typically expressed as discontinuous, en echelon, left-stepping fractures within zones of variable width; rupture through infrastructure, including pipelines and residential structures, produced significant damage. The ~7 km-long eastern strand had coseismic dextral offsets of 2-8 cm. Its southern end lies 7.5 km NW of the epicenter and 1.1 km E of the western strand, while its northern end approaches the western strand, where the two appear to merge a few hundred meters south of Alston Park. Afterslip has been documented along the western strand but was not observed on the eastern strand. It was most rapid in the middle third of the western strand, increasing initial slip by ≥20 cm one day after the mainshock. Repeated measurements suggest total slip may reach ~40 cm along half of the western strand. The complex character and locations of surface rupture produced by this event have significant implications for current approaches to fault hazard mapping in California. Additional contributors: USGS: N. Avdievitch, M. Bennett, B. Collins, T. Holzer, A. Pickering, J. Tinsley. CGS: D. Branum, B. Bryant, C. Davenport, M. Delattre, W

  5. Ground motion observations of the South Napa earthquake (M6.0 August 24, 2014) (United States)

    Baltay, A.


    The South Napa earthquake generated peak ground motions in excess of 50%g and 50 cm/s in Napa Valley and also along strike to the south, and was recorded at 17 stations within 20 km rupture distance (Rrup) of the finite fault plane, 115 stations within 50 km, and 246 within 100 km. We compare the densely recorded ground motions to existing ground motion prediction equations (GMPEs) to understand both the spatial distribution of ground-motion amplitudes and the relative excitation and attenuation terms of the earthquake. Using the ground-motion data as reported by ShakeMap, we examine the peak ground acceleration (PGA) and velocity, as well as the pseudo-spectral acceleration (PSA) at 0.3, 1.0 and 3.0 seconds, adjusted empirically to a single site condition of 760 m/s. Overall, the ground motions on the north-south components are larger than those on the east-west, consistent with both the generally north-south strike of the fault and the rupture directivity. At the higher frequencies (PGA and PSA at 0.3 s), the data at close distances are very consistent with the GMPEs, implying a median stress drop near 5 MPa. For the longer-period data, the GMPEs underpredict the data at close stations. At all frequencies, the distance attenuation appears to be stronger than the GMPEs would predict, which could either be a station coverage bias, given that most of the stations are to the south of the epicenter, or may indicate that the attenuation structure in the Napa and delta region is stronger than the average attenuation in California, on which the GMPEs were built. The spatial pattern of the ground motion residuals is positive to the north, in both Napa and Sonoma Valley, consistent with both the directivity and the basin effect. More interesting, perhaps, is the strong ground motion to the south as well, in the along-strike direction, particularly for PSA at 1.0 s.
These strongly positive residuals align along an older, Quaternary fault structure associated with the Franklin
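    Ground-motion residuals of the kind mapped here are computed in natural-log space against a GMPE prediction. The sketch below uses a deliberately simplified attenuation form with made-up coefficients, not one of the published GMPEs used in the study:

```python
import math

def toy_gmpe_pga(magnitude, rrup_km):
    """Deliberately simplified attenuation form (not a published GMPE):
    ln PGA(g) = a + b*M - ln(R) - c*R, with illustrative coefficients."""
    return math.exp(-4.0 + 1.0 * magnitude - math.log(rrup_km)
                    - 0.003 * rrup_km)

def ln_residual(observed, predicted):
    """Natural-log ground-motion residual: positive means the model
    underpredicts the observation."""
    return math.log(observed) - math.log(predicted)

# A hypothetical station at 20 km rupture distance recording 0.5 g.
res = ln_residual(0.5, toy_gmpe_pga(6.0, 20.0))   # positive residual
```

    Mapping such residuals station by station, after removing the site term, is what reveals the positive patches to the north (directivity and basin response) and to the south along strike.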

  6. Offline Performance of the Filter Bank EEW Algorithm in the 2014 M6.0 South Napa Earthquake (United States)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.


    Medium-size events like the M6.0 South Napa earthquake are very challenging for EEW: the damage such events produce can be severe, but it is generally confined to relatively small zones around the epicenter, and the shaking duration is short. This leaves a very short window for timely EEW alerts. Algorithms that wait for several stations to trigger before sending out EEW alerts are typically not fast enough for this kind of event, because their blind zone (the zone where strong ground motions start before the warnings arrive) typically covers all or most of the area that experiences strong ground motions. At the same time, single-station algorithms are often too unreliable to provide useful alerts. The filter bank EEW algorithm is a new algorithm designed to provide maximally accurate and precise earthquake parameter estimates with minimum data input, with the goal of producing reliable EEW alerts when only a very small number of stations have been reached by the p-wave. It combines the strengths of single-station and network-based algorithms in that it starts parameter estimation as soon as 0.5 seconds of data are available from the first station, but then perpetually incorporates additional data from the same or from any number of other stations. The algorithm analyzes the time-dependent frequency content of real-time waveforms with a filter bank. It then uses an extensive training data set to find earthquake records from the past that have had similar frequency content at a given time since the p-wave onset. The source parameters of the most similar events are used to parameterize a likelihood function for the source parameters of the ongoing event, which can then be maximized to find the most likely parameter estimates. Our preliminary results show that the filter bank EEW algorithm correctly estimated the magnitude of the South Napa earthquake to be ~M6 with only 1 second of data at the station nearest to the epicenter. This estimate is then
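    The algorithm's core feature, the time-dependent frequency content of the first seconds of the waveform, can be approximated by band energies. The sketch below extracts them with a naive DFT; a real-time implementation would use recursive bandpass filters, and the training-set matching step is not shown:

```python
import math

def band_energies(signal, fs, bands):
    """Energy of `signal` (sampled at `fs` Hz) in each (f_lo, f_hi) band,
    computed with a naive DFT over the frequency bins in each band."""
    n = len(signal)
    energies = []
    for f_lo, f_hi in bands:
        e = 0.0
        for k in range(1, n // 2):
            f = k * fs / n
            if f_lo <= f < f_hi:
                re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                         for t in range(n))
                im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                         for t in range(n))
                e += re * re + im * im
        energies.append(e)
    return energies

# One second of a 5 Hz sinusoid: energy concentrates in the 3-6 Hz band.
fs = 100
sig = [math.sin(2 * math.pi * 5 * t / fs) for t in range(fs)]
bands = [(0.5, 1.5), (1.5, 3.0), (3.0, 6.0), (6.0, 12.0)]
e = band_energies(sig, fs, bands)
```

    In the real algorithm, a vector of such band features, updated every fraction of a second, is compared against the training data to retrieve the most similar past records and their source parameters.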

  7. Brief Communication: An exclusive example of surface latent heat flux variation before Russia M6.1 earthquake

    Directory of Open Access Journals (Sweden)

    Y. Jie


    Surface latent heat flux (SLHF) data have recently been widely used to study anomalies before earthquakes. Most studies use daily SLHF data; here we use both daily data and higher-temporal-resolution (four-times-daily) SLHF data, and compare the SLHF changes with satellite imagery for the first time. We examined the data from 1 September to 30 October 2011, and the results show a very high SLHF anomaly (greater than 2σ) just 5 days before the M6.1 Russia earthquake of 14 October 2011. Judged by previously published methods, it would be considered a preseismic precursor. However, our comparison between the SLHF change and satellite imagery shows that the SLHF anomaly was simply caused by a thick cloud. This example shows that the meaning of a data set must be understood before it is used; otherwise, a wrong conclusion may be drawn. On this basis, we suggest that previously published SLHF anomalies before earthquakes be reanalyzed with our method to exclude false anomalies.
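    The 2σ criterion itself is straightforward; a minimal sketch on synthetic flux values (the paper's point is precisely that a flagged value must still be cross-checked against imagery before being called a precursor):

```python
import statistics

def anomalies_2sigma(series):
    """Indices where a value exceeds the series mean by more than 2 sigma,
    the kind of threshold used to flag SLHF precursor candidates."""
    mu = statistics.mean(series)
    sigma = statistics.stdev(series)
    return [i for i, v in enumerate(series) if v > mu + 2 * sigma]

# Background flux with one cloud-driven spike at day 40 (synthetic values).
slhf = [100.0 + (-1) ** i * 2.0 for i in range(60)]
slhf[40] = 160.0
flagged = anomalies_2sigma(slhf)   # [40]
```

    The detector cannot distinguish a tectonic source from a meteorological one, which is why the flagged sample here (a stand-in for the thick cloud) would be a false precursor.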

  8. S-wave triggering of tremor beneath the Parkfield, California, section of the San Andreas fault by the 2011 Tohoku, Japan earthquake: observations and theory (United States)

    Hill, David P.; Peng, Zhigang; Shelly, David R.; Aiken, Chastity


    The dynamic stresses that are associated with the energetic seismic waves generated by the Mw 9.0 Tohoku earthquake off the northeast coast of Japan triggered bursts of tectonic tremor beneath the Parkfield section of the San Andreas fault (SAF) at an epicentral distance of ∼8200  km. The onset of tremor begins midway through the ∼100‐s‐period S‐wave arrival, with a minor burst coinciding with the SHSH arrival, as recorded on the nearby broadband seismic station PKD. A more pronounced burst coincides with the Love arrival, followed by a series of impulsive tremor bursts apparently modulated by the 20‐ to 30‐s‐period Rayleigh wave. The triggered tremor was located at depths between 20 and 30 km beneath the surface trace of the fault, with the burst coincident with the S wave centered beneath the fault 30 km northwest of Parkfield. Most of the subsequent activity, including the tremor coincident with the SHSH arrival, was concentrated beneath a stretch of the fault extending from 10 to 40 km southeast of Parkfield. The seismic waves from the Tohoku epicenter form a horizontal incidence angle of ∼14°, with respect to the local strike of the SAF. Computed peak dynamic Coulomb stresses on the fault at tremor depths are in the 0.7–10 kPa range. The apparent modulation of tremor bursts by the small, strike‐parallel Rayleigh‐wave stresses (∼0.7  kPa) is likely enabled by pore pressure variations driven by the Rayleigh‐wave dilatational stress. These results are consistent with the strike‐parallel dynamic stresses (δτs) associated with the S, SHSH, and surface‐wave phases triggering small increments of dextral slip on the fault with a low friction (μ∼0.2). The vertical dynamic stresses δτd do not trigger tremor with vertical or oblique slip under this simple Coulomb failure model.

  9. Long Term Monitoring of EM Signals Near Parkfield CA (United States)

    Kappler, K.; Morrison, H.; Egbert, G.


    Fluctuations of resistivity and anomalous electromagnetic (EM) signals have often been reported as precursors to earthquakes. Most of these reports are based on anecdotal observations of unusual phenomena associated with distant earthquakes, with anomalous signals that are often orders of magnitude larger than expected based on laboratory measurements. In an attempt to assess the validity of these reports, and to understand how such signals might be generated, anomalous EM signals and resistivity have been monitored since 1995 using magnetotelluric (MT) instruments at the site of the focused earthquake prediction experiment at Parkfield, CA. This EM monitoring array was fully operational and producing high-quality data when the long-awaited 28 September Mw=6.0 Parkfield earthquake occurred. The Parkfield MT site, consisting of three magnetic induction coils and replicated orthogonal electric dipoles, was situated within a few kilometers of the northwestern end of the surface rupture, providing unprecedented observations of EM signals at a well-calibrated site in very close proximity to a moderate (M~6) earthquake. A second MT site near Hollister, CA was also functioning well before, during, and after the earthquake, providing a calibrated reference for cancellation of normal EM variations due to ionospheric and magnetospheric sources. Analysis of these data using a variety of techniques has not revealed any anomalous signals that were unambiguous precursors to the 28 September earthquake. In particular, residual E and B fields computed in both the time and frequency domains over the year 2004, using the remote Hollister site for prediction, have been analyzed statistically, revealing no long-term trends or changes in anomalous EM signal or noise levels in the months to days preceding the earthquake. Similarly, no anomalous bursts of EM activity are seen immediately preceding the earthquake. 
Based on our analysis of these residuals we conclude that any anomalous magnetic

  10. Investigation of the M6.6 Niigata-Chuetsu Oki, Japan, earthquake of July 16, 2007 (United States)

    Kayen, Robert; Collins, Brian D.; Abrahamson, Norm; Ashford, Scott; Brandenberg, Scott J.; Cluff, Lloyd; Dickenson, Stephen; Johnson, Laurie; Tanaka, Yasuo; Tokimatsu, Kohji; Kabeyasawa, Toshimi; Kawamata, Yohsuke; Koumoto, Hidetaka; Marubashi, Nanako; Pujol, Santiago; Steele, Clint; Sun, Joseph I.; Tsai, Ben; Yanev, Peter; Yashinsky, Mark; Yousok, Kim


    The M6.6 mainshock of the Niigata Chuetsu Oki (offshore) earthquake occurred at 10:13 a.m. local time on July 16, 2007, and was followed by a sequence of aftershocks that were felt during the entire reconnaissance effort. The mainshock had an estimated focal depth of 10 km and struck in the Japan Sea offshore of Kariwa. Analysis of waveforms from source inversion studies indicates that the event occurred along a thrust fault with a NE trend. The fault plane has either a strike of 34 degrees with a dip of 51 degrees or a strike of 238 degrees with a dip of 41 degrees. Which of these two planes is associated with the mainshock rupture is unresolved, although attenuation-relationship analysis indicates that the northwest-dipping plane is favored. The quake affected an approximately 100-km-wide area along the coast of southwestern Niigata Prefecture. The event triggered ground failures as far as the Uonuma Hills, located in central Niigata approximately 50 km from the shore and the source area of the 2004 Niigata Chuetsu earthquake. The primary event produced tsunami run-up that reached maximum heights of about 20 centimeters along the shoreline of southern Niigata Prefecture.

  11. Solar wind triggering of geomagnetic disturbances and strong (M>6.8) earthquakes during the November - December 2004 period

    CERN Document Server

    Anagnostopoulos, G; Antoniou, P


    This paper brings space weather prediction close to earthquake (EQ) prediction research. The results support conclusions of previously presented statistical studies: solar activity influences seismic activity, this influence is mediated through rapid geomagnetic disturbances, and the geomagnetic disturbances are related to increases in solar wind speed. Our study concerns an example of 40 days with a direct response of a series of 7 strong-to-giant (M=6.8-9.3) EQs (including the Andaman-Sumatra EQ) to solar wind speed increases and subsequent fast geomagnetic disturbances. Our analysis of 10 M>6 EQs from November 23 to December 28, 2004 suggests a mean time delay of the EQ response to fast geomagnetic disturbances of ~1.5 days. The two giant EQs during this period occurred after the two fastest geomagnetic variations, as revealed by the ratio of the daily Kp index variation over a day, ΔKp/Δt (12 and 15, respectively). It suggests that the fast disturbance of the magnetosphere, ...

  12. Long-term monitoring of ULF electromagnetic fields at Parkfield, CA

    Energy Technology Data Exchange (ETDEWEB)

    Kappler, K.N.; Morrison, H.F.; Egbert, G.D.


    Electric and magnetic fields in the 10^-4 to 1.0 Hz band were monitored at two sites adjacent to the San Andreas Fault near Parkfield and Hollister, California, from 1995 to the present. A data window [2002-2005] enclosing the September 28, 2004 M6 Parkfield earthquake was analyzed to determine whether anomalous electric or magnetic fields, or changes in ground conductivity, occurred before the earthquake. The data were edited to remove intervals of instrument malfunction, leaving 875 days in the four-year period. Frequent spike-like disturbances were common but were not more frequent around the time of the earthquake; these were removed before subsequent processing. Signal-to-noise amplitude spectra estimated via magnetotelluric processing showed the behavior of the ULF fields to be remarkably constant over the period of analysis. These first-order plots make clear that most of the recorded energy is coherent over the spatial extent of the array. Three main statistical techniques were employed to separate local anomalous electric or magnetic fields from the dominant coherent natural fields: transfer function estimates between components at each site were employed to subtract the dominant field and look deeper at the 'residual' fields; the data were decomposed into principal components to identify the dominant coherent array modes; and the technique of canonical coherences was employed to distinguish anomalous fields that are spatially broad from anomalies that occur at a single site only, and furthermore to distinguish anomalies present in both the electric and magnetic fields from those present in only one field type. Standard remote-reference apparent resistivity estimates were generated daily at Parkfield. A significant seasonal component of variability was observed, suggesting local distortion due to variations in near-surface resistance. In all cases, high levels of sensitivity to subtle electromagnetic effects were
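    The transfer-function residual technique described above can be sketched for a single scalar channel pair. Actual MT processing estimates full tensor transfer functions with robust methods, so this only conveys the idea:

```python
def transfer_coefficient(reference, local):
    """Least-squares scalar transfer function T with local ≈ T * reference."""
    return (sum(r * s for r, s in zip(reference, local))
            / sum(r * r for r in reference))

def residual_field(reference, local):
    """Local field minus the part predicted from the remote reference;
    any locally generated anomaly remains in this residual."""
    t = transfer_coefficient(reference, local)
    return [s - t * r for r, s in zip(reference, local)]

# Hollister-like reference and a Parkfield-like channel that is 1.3x the
# reference everywhere except for a local anomaly in one sample.
ref = [float(i % 7 - 3) for i in range(100)]
loc = [1.3 * r for r in ref]
loc[50] += 5.0
res = residual_field(ref, loc)   # the anomaly dominates the residual
```

    Because ionospheric and magnetospheric source fields are coherent across both sites, the prediction removes them, and anything tectonic and local to Parkfield should stand out in the residual, which is what the statistical monitoring looks for.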

  13. Transient stresses at Parkfield, California, produced by the M 7.4 Landers earthquake of June 28, 1992: implications for the time-dependence of fault friction

    Directory of Open Access Journals (Sweden)

    J. B. Fletcher


    The M 7.4 Landers earthquake triggered widespread seismicity in the Western U.S. Because the transient dynamic stresses induced at regional distances by the Landers surface waves are much larger than the expected static stresses, the magnitude and the characteristics of the dynamic stresses may bear upon the earthquake triggering mechanism. The Landers earthquake was recorded on the UPSAR array, a group of 14 triaxial accelerometers located within a 1-square-km region 10 km southwest of the town of Parkfield, California, 412 km northwest of the Landers epicenter. We used a standard geodetic inversion procedure to determine the surface strain and stress tensors as functions of time from the observed dynamic displacements. Peak dynamic strains and stresses at the Earth's surface are about 7 microstrain and 0.035 MPa, respectively, and they have a flat amplitude spectrum between 2 s and 15 s period. These stresses agree well with stresses predicted from a simple rule of thumb based upon the ground velocity spectrum observed at a single station. Peak stresses ranged from about 0.035 MPa at the surface to about 0.12 MPa between 2 and 14 km depth, with the sharp increase of stress away from the surface resulting from the rapid increase of rigidity with depth and from the influence of surface-wave mode shapes. Comparison of Landers-induced static and dynamic stresses at the hypocenter of the Big Bear aftershock provides a clear example that faults are stronger on time scales of tens of seconds than on time scales of hours or longer.
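    A rule of thumb of the kind referred to above relates peak dynamic shear stress to ground velocity through the plane-wave relation σ ≈ μv/Vs = ρ·Vs·v. With assumed (illustrative) near-surface and mid-crustal velocities, it reproduces the order of magnitude of the stresses quoted in the abstract:

```python
def dynamic_shear_stress(pgv_m_s, vs_m_s, rho=2700.0):
    """Rule-of-thumb peak dynamic shear stress (Pa) carried by a plane
    shear wave: sigma ≈ mu * v / Vs = rho * Vs * v."""
    return rho * vs_m_s * pgv_m_s

# Assumed velocities: slow near-surface material vs. mid-crustal rock.
sigma_surface = dynamic_shear_stress(0.01, 1400.0)   # ~0.04 MPa
sigma_depth = dynamic_shear_stress(0.014, 3200.0)    # ~0.12 MPa
```

    The increase of the estimate with depth mirrors the rigidity increase noted in the abstract: for a comparable particle velocity, stiffer rock carries proportionally larger dynamic stress.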

  14. Heterogeneous stress field in the source area of the 2003 M6.4 Northern Miyagi Prefecture, NE Japan, earthquake (United States)

    Yoshida, Keisuke; Hasegawa, Akira; Okada, Tomomi


    We investigated the detailed spatial distribution of principal stress axis orientations in the source area of the 2003 M6.4 Northern Miyagi Prefecture earthquake, which occurred in the forearc of northeastern Japan. Aftershock hypocentres were precisely relocated by applying the double-difference method to arrival time data obtained at temporary stations as well as at surrounding routine stations. We picked many P-wave polarity data from seismograms at these stations, which enabled us to obtain 312 well-determined focal mechanism solutions. Stress tensor inversions were performed using these focal mechanism data. The results show that many of the focal mechanisms are difficult to explain with a uniform stress field, especially near the large-slip area of the main-shock rupture. Stress tensor inversions at the locations of individual earthquakes show that σ1 axes are oriented mainly WSW-ENE in the northern part of the source area, while they are oriented NW-SE in the southern part. This spatial pattern is roughly similar to that of the static stress change caused by the main shock, which suggests that the observed spatially heterogeneous stress field was formed by the static stress change. If this is the case, the deviatoric stress magnitude before the main shock was very small. Another possibility is that the heterogeneous stress field observed after the main shock had existed even before the main shock, although we do not know why it was formed. The unfavourable orientation of the main-shock fault with respect to this stress field suggests that the fault is not strong in this case either.

  15. Source parameters of the M 6.5 Skyros Island (North Aegean Sea) earthquake of July 26, 2001

    Directory of Open Access Journals (Sweden)

    A. Kiratzi


    Teleseismic body wave modelling, time domain moment tensor inversion of regional waveforms and spectral analysis of the far-field P-wave pulses are used to derive the source parameters of the July 26, 2001 Skyros earthquake (M 6.5). Its epicentre is located south of the Sporades Islands in the North Aegean Sea (Greece). Previous focal mechanism solutions indicate motion on strike-slip faults. The time domain moment tensor inversion is applied for the first time to the regional waveforms of the recently established broadband network in Greece; its application gave results that are highly consistent with the teleseismic waveform modelling. The results of this study, in combination with the distribution of aftershocks, indicate left-lateral strike-slip motion on a NW-SE striking fault with parameters: fault plane (strike = 151°, dip = 83°, rake = 7°), auxiliary plane (strike = 60°, dip = 84°, rake = 173°), depth 12 km and M0 = 5.98e18 N m. Moreover, the time domain moment tensor inversion technique yielded a pure double-couple source with negligible CLVD. The spectral analysis of the far-field P-wave pulses resulted in a fault length L ~ 32 km, stress drop ~ 9 bars and average displacement u ~ 30 cm. These values are in very good agreement with those estimated from empirical scaling relations applicable to the Aegean area.
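    The reported seismic moment and magnitude can be cross-checked with the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m:

```python
import math

def mw_from_moment(m0_newton_meters):
    """Moment magnitude from seismic moment (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

mw = mw_from_moment(5.98e18)   # ~6.45, consistent with the reported M 6.5
```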

  16. Preliminary simulation of a M6.5 earthquake on the Seattle Fault using 3D finite-difference modeling (United States)

    Stephenson, William J.; Frankel, Arthur D.


    A three-dimensional finite-difference simulation of a moderate-sized (M 6.5) thrust-faulting earthquake on the Seattle fault demonstrates the effects of the Seattle Basin on strong ground motion in the Puget lowland. The model area includes the cities of Seattle, Bremerton and Bellevue. We use a recently developed, detailed 3D velocity model of the Seattle Basin in these simulations. The model extends to 20-km depth and assumes rupture on a finite fault with a random slip distribution. Preliminary results from simulations at frequencies of 0.5 Hz and lower suggest amplification can occur at the surface of the Seattle Basin through the trapping of energy in the Quaternary sediments. Surface waves generated within the basin appear to contribute to amplification throughout the modeled region. Several factors apparently contribute to large ground motions in downtown Seattle: (1) radiation pattern and directivity from the rupture; (2) amplification and energy trapping within the Quaternary sediments; and (3) basin geometry and variation in depth of both Quaternary and Tertiary sediments.
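    The 3-D finite-difference machinery rests on the same explicit update rule as the 1-D wave equation. A toy 1-D sketch (uniform velocity, fixed ends, illustrative grid parameters), not the production 3-D code:

```python
import math

def propagate_1d(nx=201, nt=120, c=1.0, dx=1.0, dt=0.5, sigma=5.0):
    """Second-order finite-difference solution of the 1-D wave equation
    u_tt = c^2 u_xx with fixed ends: a toy analogue of the 3-D scheme."""
    r2 = (c * dt / dx) ** 2           # squared Courant number (<= 1: stable)
    src = nx // 2
    u = [math.exp(-((i - src) / sigma) ** 2) for i in range(nx)]
    u_prev = u[:]                     # zero initial velocity: pulse splits
    for _ in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            u_next[i] = (2 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

# Two half-amplitude pulses travel c*dt*nt = 60 cells from the center.
wave = propagate_1d()
```

    In the 3-D basin simulations the same stencil runs over a heterogeneous velocity grid, which is how low-velocity Quaternary sediments trap and amplify the wavefield.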

  17. Numerical Shake Prediction for Earthquake Early Warning: More Precise and Rapid Prediction even for Deviated Distribution of Ground Shaking of M6-class Earthquakes (United States)

    Hoshiba, M.; Ogiso, M.


    In many present EEW systems, the hypocenter and magnitude are determined quickly, and the strengths of ground motion are then predicted from hypocentral distance and magnitude using a ground motion prediction equation (GMPE), which usually leads to a prediction with a concentric distribution. However, actual ground shaking is not always concentric, even when site amplification is corrected for. At a given site, the strength of shaking may differ greatly among earthquakes even when their hypocentral distances and magnitudes are almost the same; in some cases PGA differs by more than a factor of 10, which leads to imprecise EEW predictions. Recently, the Numerical Shake Prediction method was proposed (Hoshiba and Aoki, 2015), in which the present, ongoing wavefield of ground shaking is estimated using a data assimilation technique, and the future wavefield is then predicted from the physics of wave propagation. Information on hypocentral location and magnitude is not required in this method. Because the future is predicted from the present condition, the method can address the issue of non-concentric distributions: once a deviated distribution is actually observed in the ongoing wavefield, the future distribution is predicted accordingly. We will show examples of M6-class earthquakes that occurred in central Japan, in which the strength of shaking was observed to be distributed non-concentrically, together with their predictions using the Numerical Shake Prediction method. The deviated distribution may be explained by an inhomogeneous distribution of attenuation. Even without an attenuation structure, the issue of non-concentric distribution can be addressed to some extent once the deviated distribution is observed in the ongoing wavefield; if an attenuation structure is introduced, the deviation can be predicted before it is observed. Information on attenuation structure therefore leads to more precise and rapid prediction in the Numerical Shake Prediction method for EEW.
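
The wave-propagation step of such a forecast can be caricatured in one dimension: given the amplitude currently observed at distance r0 from the source region, the amplitude expected later at a farther site follows from geometric spreading and anelastic attenuation, and its arrival is delayed by the travel time. A toy sketch (the 1/r decay law, S-wave speed and attenuation coefficient are illustrative assumptions, not the actual assimilation scheme of Hoshiba and Aoki):

```python
import math

def forecast_amplitude(a_obs, r_obs, r_target, k=2e-5):
    """Project an observed amplitude outward: 1/r spreading times exp(-k*dr)."""
    dr = r_target - r_obs
    return a_obs * (r_obs / r_target) * math.exp(-k * dr)

def arrival_delay(r_obs, r_target, v_s=3.5):
    """Extra S-wave travel time (s) to reach the target site; v_s in km/s."""
    return (r_target - r_obs) / v_s

# Amplitude currently seen at 20 km, projected to a site at 60 km
a1 = forecast_amplitude(10.0, 20.0, 60.0)
t1 = arrival_delay(20.0, 60.0)
print(round(a1, 2), round(t1, 1))  # reduced amplitude arriving ~11 s later
```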

  18. Application of scaling-rule theory in crustal rock fracture to studying characteristics of seismological precursors associated with M=6.1 Shandan-Minle earthquake

    Institute of Scientific and Technical Information of China (English)

    RONG Dai-lu; LI Ya-rong; HAN Xiao-ming


    In this paper, we introduce Allegre's scaling-rule theory of rock fracture and probability, and on this basis develop a method for predicting earthquake occurrence time. As an example, we study the characteristics of seismological precursors (seismic spatial correlation length and coda Qc) associated with the M=6.1 earthquake that occurred in Shandan-Minle, Gansu Province. The results show an increasing trend in both the seismic spatial correlation length and coda Qc before the earthquake, and a power-exponent relation is used to fit the increasing variation of these two parameters. The study provides a basis for creating a method, and indexes, to predict earthquake occurrence time from the monitored seismic spatial correlation length and coda Qc.
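
A power-exponent fit of a growing precursor, e.g. x(t) = A (tf − t)^(−m) as the failure time tf is approached, can be obtained by linear least squares in log-log space. A minimal sketch on synthetic data (the functional form and parameter values are illustrative; the paper's actual fitting procedure is not reproduced here):

```python
import math

# Synthetic precursor: correlation length growing as (tf - t)^(-0.3)
tf, A, m = 100.0, 5.0, 0.3
times = [10.0 * i for i in range(1, 10)]          # observation times before tf
xs = [A * (tf - t) ** (-m) for t in times]

# Regress log(x) on log(tf - t): the slope equals -m, the intercept log(A)
u = [math.log(tf - t) for t in times]
v = [math.log(x) for x in xs]
n = len(u)
mu_u, mu_v = sum(u) / n, sum(v) / n
slope = sum((ui - mu_u) * (vi - mu_v) for ui, vi in zip(u, v)) / \
        sum((ui - mu_u) ** 2 for ui in u)
print(round(-slope, 3))  # recovers the exponent m = 0.3
```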

  19. Predicted liquefaction in the greater Oakland area and northern Santa Clara Valley during a repeat of the 1868 Hayward Fault (M6.7-7.0) earthquake (United States)

    Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.


    Probabilities of surface manifestations of liquefaction due to a repeat of the 1868 (M6.7-7.0) earthquake on the southern segment of the Hayward Fault were calculated for two areas along the margin of San Francisco Bay, California: greater Oakland and the northern Santa Clara Valley. Liquefaction is predicted to be more common in the greater Oakland area than in the northern Santa Clara Valley owing to the presence of 57 km2 of susceptible sandy artificial fill. Most of the fills were placed into San Francisco Bay during the first half of the 20th century to build military bases, port facilities, and shoreline communities like Alameda and Bay Farm Island. Probabilities of liquefaction in the area underlain by this sandy artificial fill range from 0.2 to ~0.5 for a M7.0 earthquake, and decrease to 0.1 to ~0.4 for a M6.7 earthquake. In the greater Oakland area, liquefaction probabilities generally are less than 0.05 for Holocene alluvial fan deposits, which underlie most of the remaining flat-lying urban area. In the northern Santa Clara Valley for a M7.0 earthquake on the Hayward Fault and an assumed water-table depth of 1.5 m (the historically shallowest water level), liquefaction probabilities range from 0.1 to 0.2 along Coyote and Guadalupe Creeks, but are less than 0.05 elsewhere. For a M6.7 earthquake, probabilities are greater than 0.1 along Coyote Creek but decrease along Guadalupe Creek to less than 0.1. Areas with high probabilities in the Santa Clara Valley are underlain by young Holocene levee deposits along major drainages where liquefaction and lateral spreading occurred during large earthquakes in 1868 and 1906.

  20. On the recent M=6.1 earthquake occurred at Kefalonia island (South-West Greece) on 26 January 2014: Manifestations of an Earth system in critical state

    CERN Document Server

    Contoyiannis, Y; Kopanas, J; Antonopoulos, G; Koulouras, G; Eftaxias, K; Nomicos, C


    In this paper we show, in terms of fracture-induced electromagnetic emissions (EME) recorded two days prior to the earthquake of Kefalonia (Cephalonia), Greece [(38.22°N, 20.53°E), 26 January 2014, M=6.1], that the Earth system around the focal area reached a critical condition two days before the earthquake occurrence. Specifically, the MHz EME recorded by the remote telemetric stations on the island of Kefalonia and the neighboring island of Zante came simultaneously to critical conditions. The analysis was performed by means of the method of critical fluctuations (MCF), revealing critical features.

  1. Quantifying slip balance in the earthquake cycle: Coseismic slip model constrained by interseismic coupling

    KAUST Repository

    Wang, Lifeng


    The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter time scales (e.g., between the large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit, and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements, Lin et al., 2013), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment has been almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to the postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes in Parkfield. For the Tohoku-Oki earthquake, we estimate the recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be generally applied to any earthquake for which dense geodetic measurements are available.
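
The required strain-accumulation time in this framework is, to first order, the coseismic slip divided by the long-term slip-deficit accumulation rate, T ≈ s / (χ v), where χ is the interseismic coupling and v the fault loading rate. A back-of-the-envelope sketch (the slip, coupling and loading-rate values are hypothetical, chosen to be in the range typical of Parkfield, not taken from the paper's inversion):

```python
def accumulation_time_years(coseismic_slip_m, coupling, loading_rate_mm_yr):
    """Years needed to accumulate a slip deficit equal to the coseismic slip."""
    deficit_rate_m_yr = coupling * loading_rate_mm_yr / 1000.0
    return coseismic_slip_m / deficit_rate_m_yr

# Hypothetical Parkfield-like numbers: ~0.5 m of slip, 60% coupling, 33 mm/yr
t = accumulation_time_years(0.5, 0.6, 33.0)
print(round(t, 1))  # ~25 years, the order of the observed M6 recurrence time
```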

  2. Afterslip behavior following the M6.0, 2014 South Napa earthquake with implications for afterslip forecasting on other seismogenic faults (United States)

    Lienkaemper, James J.; DeLong, Stephen B.; Domrose, Carolyn J; Rosa, Carla M.


    The M6.0, 24 Aug. 2014 South Napa, California, earthquake exhibited unusually large slip for a California strike-slip event of its size with a maximum coseismic surface slip of 40-50 cm in the north section of the 15 km-long rupture. Although only minor (Napa afterslip suggests how we might approach the scientific and engineering challenges of afterslip from a much larger M~7 earthquake anticipated on the nearby, urban Hayward Fault. However, we expect its afterslip to last much longer than one year.

  3. Total electron content variations over southern Europe before and during the M 6.3 Abruzzo earthquake of April 6, 2009

    Directory of Open Access Journals (Sweden)

    Spyrous D. Spatalas


    Total electron content (TEC) data from 14 global positioning system (GPS) stations of the EUREF network were provided by IONOLAB. These were analyzed using wavelet analysis and discrete Fourier analysis to investigate the TEC variations over southern Europe in the month before the catastrophic M 6.3 Abruzzo earthquake of April 6, 2009. The main conclusions of this analysis are: (a) TEC oscillations in a broad range of frequencies occurred randomly over a broad area several hundred kilometers from the earthquake; (b) morning and evening extensions of the day-time TEC values were seen for all of the EUREF stations of this program shortly before, during and shortly after the main earthquake period; (c) high-frequency oscillations (f ≥ 0.0003 Hz, period T ≤ 60 min) appear to indicate the location of the earthquake, although with questionable accuracy, while the fractal characteristics of the frequency distribution indicate the locus of the earthquake with relatively greater accuracy. We conclude that a lithosphere-atmosphere-ionosphere coupling mechanism through acoustic or gravity waves might explain this phenomenology.
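
Picking an oscillation band out of a TEC time series is, at heart, a discrete Fourier analysis exercise. A minimal sketch on synthetic data (the sampling interval and the injected 60-minute period are illustrative values, not the EUREF/IONOLAB configuration):

```python
import cmath
import math

dt = 120.0                       # sampling interval in seconds (illustrative)
n = 512
period = 3600.0                  # injected 60-minute TEC oscillation
series = [10.0 + 0.5 * math.sin(2 * math.pi * t * dt / period)
          for t in range(n)]     # constant background + oscillation (TECU)

mean = sum(series) / n
x = [s - mean for s in series]   # remove the slowly varying background level

# Plain DFT; keep only positive frequencies (k = 0 is the DC term)
power = []
for k in range(1, n // 2):
    coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    power.append((abs(coeff), k))

k_peak = max(power)[1]
dominant_period_min = n * dt / k_peak / 60.0
print(round(dominant_period_min, 1))  # recovers a period of about 60 minutes
```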

  4. Non-Volcanic Tremor Near Parkfield, CA Systematically Excited by Teleseismic Waves (United States)

    Peng, Z.; Vidale, J. E.; Rubinstein, J. L.; Gomberg, J.


    Non-volcanic tremor triggered by teleseismic waves was discovered recently along the subduction zones in Japan and Cascadia, and along the transform plate boundary in California. Here we summarize non-volcanic tremor along the San Andreas fault (SAF) near Parkfield, CA, triggered by the surface waves of regional and teleseismic events. We analyze 10 M ≥ 8.0 earthquakes since 2001, the M6.7 Nenana Mountain and M7.9 Denali, Alaska, earthquakes in 2002, and the 2005 M7.2 Mendocino, California, earthquake. We identify triggered tremor as bursts of high-frequency (~3-15 Hz), non-impulsive seismic energy that is coherent among many stations and has a significant component in phase with the passage of the surface waves. We qualitatively judge the clarity of the tremor observations and find the strongest, most coherent examples for the M7.9 Denali, M8.3 Hokkaido, M9.1 Sumatra, and M8.1 Kuril Islands earthquakes. The M6.7 Nenana Mountain earthquake did not trigger visible tremor, and the evidence for triggered tremor for the remaining 8 events is equivocal. The identification of tremor does not correlate strongly with peak ground velocity, but may correlate with cumulative energy density for long-period (≥ 30 s) surface waves. These observations suggest that longer-period waves may be a more effective trigger, most likely due to better penetration to the depths where tremor occurs. Our observations, in concert with those of Gomberg et al., Vidale et al., and Rubinstein et al. [this meeting], suggest that non-volcanic tremor triggered by teleseismic waves is much more widespread than previously thought, and that the effective stress, or the frictional coefficient, is very low at depth along the SAF near Parkfield.
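
Triggered tremor is typically identified by band-passing seismograms to roughly 3-15 Hz, forming a smoothed envelope, and flagging coherent bursts during the surface-wave passage. A toy sketch of the envelope-and-threshold step on a synthetic trace (the burst times, threshold factor and window length are illustrative assumptions; no real filtering of station data is performed):

```python
import math
import random

random.seed(0)
fs = 50                                   # samples per second
trace = [random.gauss(0.0, 1.0) for _ in range(60 * fs)]   # 60 s of noise
for start in (20, 40):                    # two synthetic "tremor bursts" of 3 s
    for i in range(start * fs, (start + 3) * fs):
        trace[i] += 4.0 * math.sin(2 * math.pi * 5.0 * i / fs)  # 5 Hz energy

# RMS envelope in 1-s windows, then flag windows above twice the median level
win = fs
env = [math.sqrt(sum(v * v for v in trace[i:i + win]) / win)
       for i in range(0, len(trace) - win, win)]
median = sorted(env)[len(env) // 2]
bursts = [i for i, e in enumerate(env) if e > 2.0 * median]
print(bursts)  # the windows covering the two injected bursts
```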

  5. Rapid Response Products of The ARIA Project for the M6.0 August 24, 2014 South Napa Earthquake (United States)

    Yun, S. H.; Owen, S. E.; Hua, H.; Milillo, P.; Fielding, E. J.; Hudnut, K. W.; Dawson, T. E.; Mccrink, T. P.; Jo, M. J.; Barnhart, W. D.; Manipon, G. J. M.; Agram, P. S.; Moore, A. W.; Jung, H. S.; Webb, F.; Milillo, G.; Rosinski, A.


    A magnitude 6.0 earthquake struck southern Napa county northeast of San Francisco, California, on Aug. 24, 2014, causing significant damage in the city of Napa and nearby areas. One day after the earthquake, the Advanced Rapid Imaging and Analysis (ARIA) team produced and released observations of coseismic ground displacement measured with continuous GPS stations of the Plate Boundary Observatory (operated by UNAVCO for the National Science Foundation) and the Bay Area Rapid Deformation network (operated by Berkeley Seismological Laboratory). Three days after the earthquake (Aug. 27), the Italian Space Agency's (ASI) COSMO-SkyMed (CSK) satellite acquired their first post-event data. On the same day, the ARIA team, in collaboration with ASI and University of Basilicata, produced and released a coseismic interferogram that revealed ground deformation and surface rupture. The depiction of the surface rupture - discontinuities of color fringes in the CSK interferogram - helped guide field geologists from the US Geological Survey and the California Geological Survey (CGS) to features that may have otherwise gone undetected. Small-scale cracks were found on a runway of the Napa County Airport, as well as bridge damage and damaged roads. ARIA's response to this event highlighted the importance of timeliness for mapping surface deformation features. ARIA's rapid response products were shared through Southern California Earthquake Center's response website and the California Earthquake Clearinghouse. A damage proxy map derived from InSAR coherence of CSK data was produced and distributed on Aug. 27. Field crews from the CGS identified true and false positives, including mobile home damage, newly planted grape vines, and a cripple wall failure of a house. Finite fault slip models constrained from CSK interferograms and continuous GPS observations reveal a north-propagating rupture with well-resolved slip from 0-10.5 km depth. We also measured along-track coseismic

  6. Coseismic deformation, field observations and seismic fault of the 17 November 2015 M = 6.5, Lefkada Island, Greece earthquake (United States)

    Ganas, Athanassios; Elias, Panagiotis; Bozionelos, George; Papathanassiou, George; Avallone, Antonio; Papastergios, Asterios; Valkaniotis, Sotirios; Parcharidis, Issaak; Briole, Pierre


    On November 17, 2015, 07:10:07 UTC, a strong, shallow Mw 6.5 earthquake occurred on the island of Lefkada along a strike-slip fault with a right-lateral sense of slip. The event triggered widespread environmental effects in the southern and western parts of the island, while the intensity and severity of these earthquake-induced deformations decrease substantially towards the eastern part of the island. Relocation of seismicity and inversion of geophysical (GPS, InSAR) data indicate that the seismic fault runs parallel to the west coast of Lefkada, along the Aegean-Apulia plate boundary. The fault plane strikes N20 ± 5°E and dips to the east at an angle of about 70 ± 5°. Coseismic deformation of the order of tens of centimeters of horizontal motion was measured by continuous GPS stations of NOANET (the NOA GPS network) and by InSAR (Sentinel-1A image pairs). A coseismic uniform-slip model was produced from inversion of the InSAR data and permanent GPS stations. The earthquake measures Mw = 6.5 using both the geodetic moment of the slip model and the PGD relation of Melgar et al. (2015, GRL). In the field we observed no significant vertical motion of the shoreline and no surface expression of faulting; this is consistent with the predictions of the model. The interferograms show a large decorrelation area that extends along almost all of the western coast of Lefkada. This area correlates well with the mapped landslides. The 2003-2015 pattern of seismicity in the Ionian Sea region indicates the existence of a 15-km seismic gap offshore NW Cephalonia.

  7. Rupture complexity of the M6.0 Amatrice Earthquake probed by 1D and 3D velocity models (United States)

    Tinti, E.; Scognamiglio, L.; Casarotti, E.; Magnoni, F.; Quintiliani, M.; Michelini, A.; Cocco, M.


    On 24 August 2016 an ML 6.0 earthquake occurred in the Central Apennines (Italy) between Amatrice and Norcia, causing heavy damage and nearly 300 fatalities. The main shock and most of the aftershocks show NNW-SSE-striking focal mechanisms, in agreement with the current NE-SW extensional tectonic setting of the Central Apennines. To image the rupture history of the Amatrice earthquake, we invert the ground-velocity time histories obtained from 26 three-component strong-motion accelerometers located within 45 km of the fault, filtered between 0.02 and 0.5 Hz. The inferred slip distribution is heterogeneous and characterized by two shallow slip patches located up-dip and NW of the hypocenter. The rupture history shows bilateral propagation and a relatively high rupture velocity (3.1 km/s), producing evident directivity effects both N-NW and SE of the hypocenter and shaping the recorded near-source peak ground motions. The retrieved rupture model provides a good fit to observed ground velocities up to 1 Hz, corroborating the contribution of rupture directivity and slip heterogeneity to the ground shaking and damage pattern. We highlight that the fault dimensions and peak slip values are relatively large for a moderate-magnitude earthquake. Finally, we have performed forward modeling of seismic wave propagation in a 3D crustal model, using the imaged rupture history as the source model, to verify the effects of topography and the velocity model on the calculated ground motions, and to interpret the inferred source heterogeneity.
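
The directivity effect mentioned above comes from the classic Doppler-like factor for a unilaterally propagating rupture, 1 / (1 − (vr/vs) cos θ). A sketch using the 3.1 km/s rupture velocity from the abstract and an assumed 3.5 km/s shear-wave speed (this is the standard kinematic factor, not the paper's full waveform computation):

```python
import math

def directivity_factor(theta_deg, v_rupture=3.1, v_shear=3.5):
    """Kinematic directivity factor 1 / (1 - (vr/vs) cos(theta)) for a
    unilaterally propagating rupture; theta measured from rupture direction."""
    ratio = v_rupture / v_shear
    return 1.0 / (1.0 - ratio * math.cos(math.radians(theta_deg)))

# Forward direction is strongly amplified; backward direction de-amplified
print(directivity_factor(0.0) > directivity_factor(180.0))  # prints True
```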

  8. Google Earth mapping of damage from the Niigata-Ken-Chuetsu M6.6 earthquake of 16 July 2007 (United States)

    Kayen, Robert E.; Steele, WM. Clint; Collins, Brian; Walker, Kevin


    We describe the use of Google Earth during and after a large damaging earthquake that struck the central Japan coast on 16 July 2007 to collect and organize damage information and guide the reconnaissance activities. This software enabled greater real-time collaboration among scientists and engineers. After the field investigation, the Google Earth map was used as a final reporting product directly linked to the more traditional research report document. Finally, we analyze the use of the software within the context of a post-disaster reconnaissance investigation, and link it to student use of Google Earth in field situations.

  9. Coseismic deformation and slip model of the 17 November 2015 M=6.5 earthquake, Lefkada Island, Greece (United States)

    Ganas, Athanassios; Melgar, Diego; Briole, Pierre; Geng, Jianghui; Papathanassiou, George; Bozionelos, George; Avallone, Antonio; Valkaniotis, Sotirios; Mendonidis, Evangelos; Argyrakis, Panagiotis; Moshou, Alexandra; Elias, Panagiotis


    On November 17, 2015 a strong, shallow earthquake, Mw 6.5, occurred on the island of Lefkada along a strike-slip fault with a right-lateral sense of slip. The event triggered widespread environmental effects, reported mainly in the southern and western parts of the island; moving towards the eastern part, the intensity and severity of these earthquake-induced deformations decreased. Coseismic deformation of the order of tens of centimeters of horizontal motion was measured by continuous GPS stations of NOANET (the NOA GPS network) and by InSAR (Sentinel-1A image pairs). Interferograms released by various groups show a large decorrelation area that extends along almost all of the western coast of Lefkada, an observation that provides strong support for landsliding. We also found extensive landslides during field work, and no surface ruptures. A coseismic slip model produced from the ascending InSAR is cleaner than the GPS-only model, and both data sets achieve ~90% variance reduction. The fault dips to the east-southeast at an angle of 65-70 degrees.

  10. Fluid‐driven seismicity response of the Rinconada fault near Paso Robles, California, to the 2003 M 6.5 San Simeon earthquake (United States)

    Hardebeck, Jeanne L.


    The 2003 M 6.5 San Simeon, California, earthquake caused significant damage in the city of Paso Robles and a persistent cluster of aftershocks close to Paso Robles near the Rinconada fault. Given the importance of secondary aftershock triggering in sequences of large events, a concern is whether this cluster of events could trigger another damaging earthquake near Paso Robles. An epidemic‐type aftershock sequence (ETAS) model is fit to the Rinconada seismicity, and multiple realizations indicate a 0.36% probability of at least one M≥6.0 earthquake during the next 30 years. However, this probability estimate is only as good as the projection into the future of the ETAS model. There is evidence that the seismicity may be influenced by fluid pressure changes, which cannot be forecasted using ETAS. The strongest evidence for fluids is the delay between the San Simeon mainshock and a high rate of seismicity in mid to late 2004. This delay can be explained as having been caused by a pore pressure decrease due to an undrained response to the coseismic dilatation, followed by increased pore pressure during the return to equilibrium. Seismicity migration along the fault also suggests fluid involvement, although the migration is too slow to be consistent with pore pressure diffusion. All other evidence, including focal mechanisms and b‐value, is consistent with tectonic earthquakes. This suggests a model where the role of fluid pressure changes is limited to the first seven months, while the fluid pressure equilibrates. The ETAS modeling adequately fits the events after July 2004 when the pore pressure stabilizes. The ETAS models imply that while the probability of a damaging earthquake on the Rinconada fault has approximately doubled due to the San Simeon earthquake, the absolute probability remains low.
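
An ETAS forecast of the kind described reduces, at its core, to summing Omori-law aftershock rates over past events and converting the expected number of triggered events into a probability of at least one large one. A heavily simplified sketch (the parameter values μ, K, c, p, α and b are generic textbook choices, not the values fit to the Rinconada seismicity):

```python
import math

# Generic ETAS parameters (illustrative, not the paper's fit); m0 = catalog cutoff
mu, K, c, p_omori, alpha, b, m0 = 0.05, 0.02, 0.01, 1.1, 1.0, 1.0, 3.0

def etas_rate(t, past_events):
    """Rate of M >= m0 events at time t (days); past_events = [(t_i, m_i)]."""
    rate = mu                                   # background rate
    for t_i, m_i in past_events:
        if t_i < t:                             # Omori decay, productivity ~ e^(alpha*dm)
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p_omori
    return rate

def prob_large_event(past_events, t_start, t_end, m_target=6.0, steps=2000):
    """P(at least one M >= m_target in [t_start, t_end]), Poisson approximation."""
    dt = (t_end - t_start) / steps
    n_small = sum(etas_rate(t_start + (i + 0.5) * dt, past_events) * dt
                  for i in range(steps))        # expected count of M >= m0
    n_large = n_small * 10.0 ** (-b * (m_target - m0))  # Gutenberg-Richter thinning
    return 1.0 - math.exp(-n_large)

events = [(0.0, 6.5)]                           # a San Simeon-like mainshock at t = 0
p = prob_large_event(events, 1.0, 30 * 365.0)   # next 30 years
print(round(p, 2))
```

This omits the secondary triggering (aftershocks of aftershocks) that a full ETAS simulation includes, which is why the abstract's multiple-realization approach is needed in practice.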

  11. A Teachable Moment in Earth Deformation: An Undergraduate Strain Module Incorporating GPS Measurement of the August 24, 2014 M6.0 South Napa Earthquake (United States)

    Resor, P. G.; Cronin, V. S.; Hammond, W. C.; Pratt-Sitaula, B.; Olds, S. E.


    The August 24, 2014 M 6.0 South Napa Earthquake was the largest earthquake to occur in the San Francisco Bay Area, home to more than 7 million people, in almost 25 years. The event occurred within an area of dense GPS instrumentation, including continuous stations from the EarthScope Plate Boundary Observatory, the Bay Area Regional Deformation Network, and other networks. Coseismic displacements of up to 3 cm were rapidly estimated within one day after the event, providing a map of Earth shape change at over one hundred stations around the epicenter. The earthquake thus presents an excellent "teachable moment" to introduce students to basic geoscience concepts, modern geophysical methods, and the state of knowledge in earthquake science. We have developed an example exercise that uses GPS-derived interseismic velocities and coseismic offsets to explore deformation in the vicinity of the earthquake rupture. This exercise builds on the UNAVCO education resource "Infinitesimal Strain Analysis Using GPS Data", a module designed to introduce undergraduate geoscience majors to concepts of crustal deformation using GPS velocity data. In the module students build their intuition about infinitesimal strain through manipulation of physical models, apply this intuition to interpret maps of GPS velocity vectors, and ultimately calculate the instantaneous deformation rate of triangles on the Earth's surface defined by three GPS sites. The South Napa data sets provide an example with clear societal relevance that can be used to explore the basic concepts of deformation, but may also be extended to explore topics such as strain accumulation, release, and transfer associated with the earthquake cycle. The UNAVCO module could be similarly extended to create additional exercises in response to future events with clear geodetic signals.
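
The triangle exercise described above amounts to solving a small linear system: each site's horizontal velocity is modeled as a uniform translation plus a constant velocity gradient, v = t + L x, and the symmetric part of L is the strain rate. A self-contained sketch with a synthetic uniform strain field (the station geometry and imposed strain values are made up for illustration; with velocities in mm/yr and coordinates in km the gradients come out in microstrain/yr):

```python
def gauss_solve(a, b):
    """Solve a small dense linear system by Gaussian elimination with pivoting."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def strain_from_gps(sites, vels):
    """Fit v = t + L x to three sites; return (exx, eyy, exy) = sym(L)."""
    a, b = [], []
    for (x, y), (vx, vy) in zip(sites, vels):
        a.append([1.0, x, y, 0.0, 0.0, 0.0]); b.append(vx)
        a.append([0.0, 0.0, 0.0, 1.0, x, y]); b.append(vy)
    _tx, lxx, lxy, _ty, lyx, lyy = gauss_solve(a, b)
    return lxx, lyy, 0.5 * (lxy + lyx)

# Synthetic uniform strain field: exx = 0.1, eyy = -0.05, exy = 0.02
sites = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
vels = [(0.0, 0.0), (1.0, 0.2), (0.2, -0.5)]
exx, eyy, exy = strain_from_gps(sites, vels)
print(round(exx, 3), round(eyy, 3), round(exy, 3))  # recovers 0.1 -0.05 0.02
```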

  12. Investigation of post-seismic deformation resulting from the 24th August 2016 Central Italy Earthquake (M 6.2) (United States)

    McCaffrey, K. J. W.; Gregory, L. C.; Walters, R. J.; Roberts, G.; Wilkinson, M. W.; Wedmore, L. N. J.; Michetti, A.; Vittori, E.; Livio, F.; Faure Walker, J.; Mildon, Z. K.


    Our initial satellite (InSAR) observations and field studies (Walters et al., in prep) following the August 24th, 2016, Mw 6.2 Central Italy earthquake revealed that co-seismic rupture occurred on two normal fault structures, the Vettore and Laga faults, which were previously thought to be separate structures. The Laga and Vettore faults cross a major inactive thrust fault, which emplaced Lower Jurassic limestone over the Miocene flysch to the south. Our preliminary InSAR analysis indicated co-seismic slip at the surface along the Vettore fault. In the field we confirmed a c. 5 km long, semi-continuous surface rupture, with a remarkably consistent 15-20 cm offset, downthrown to the SW, on an average azimuth of 235°, located along the Vettore fault bedrock scarp. The rupture becomes a series of discontinuous cracks where the fault crosses from limestone bedrock along the Vettore section into the underlying flysch. A continuous surface rupture was not found elsewhere in the region, and is either not present or obscured along the Laga-Amatrice fault. We present results from our field campaign and models derived from Sentinel-1 InSAR data during the post-seismic period (images collected on average every 1.5 days) to quantify the spatial distribution and temporal evolution of the post-seismic deformation on and around the causative faults. In the field, we conducted an initial survey of two sites along the surface rupture using the Structure from Motion (SfM) photogrammetric technique, and installed 6 GNSS instruments mounted in short-baseline pairs across the Vettore fault. We will show preliminary measurements of ongoing near-field post-seismic deformation and shallow afterslip from the GNSS and time-lapse TLS (Lidar) and SfM datasets, complementing the wide-area InSAR results. Our results will provide insights into the formation of bedrock scarps from repeated earthquakes and the interpretation of Holocene slip-rate data from these scarps. The coseismic and ongoing post

  13. Plate Boundary Observatory Strainmeter Recordings of The M6.0 August 24, 2014 South Napa Earthquake (United States)

    Hodgkinson, Kathleen; Mencin, David; Phillips, David; Mattioli, Glen; Meertens, Charles


    The 2014 Mw 6.0 South Napa earthquake nucleated at 11 km depth near the West Napa fault, one of a complex system of sub-parallel major right-lateral faults north of San Francisco that together accommodate much of the relative motion between the Pacific and North American tectonic plates. The South Napa event was the largest to have shaken the San Francisco Bay Area (SFBA) in almost 25 years. A major goal of the NSF-funded EarthScope Plate Boundary Observatory (PBO), installed and maintained by UNAVCO, was to enable researchers to study the interaction between the faults that form a plate boundary zone and, in particular, to investigate the role that aseismic transients play in strain accumulation and release. To realize this goal, PBO includes borehole tensor strainmeters (BSMs) installed in several targeted regions, including one to the north and east of San Francisco. Two PBO BSMs have been operating in the SFBA since 2008: B057, north of San Francisco and 30 km from the epicenter, and B054, 3 km from the Hayward Fault and 40 km from the epicenter. We find that the coseismic strains recorded by B057 are close to those predicted using elastic half-space dislocation theory and the seismically determined focal mechanism, while a more complicated variable-slip model may be required for the observations from B054. Months after the event, B057 continued to record a significant postseismic signal. In this presentation we document the coseismic signals recorded by the PBO BSMs and characterize the temporal behavior of the postseismic signal at B057. The PBO network includes over 1100 GPS stations, 75 BSMs, 79 seismometers, and arrays of tiltmeters, pore-pressure sensors and meteorological instrumentation. UNAVCO generates an EarthScope Level 2 processed strain time-series combined into areal and shear strains for the PBO BSM network; the raw data are available from the IRIS DMC in mSEED format. For events of interest, such as the South Napa earthquake, UNAVCO generates a 1-sps

  14. Criticality features in ultra-low frequency magnetic fields prior to the 2013 M6.3 Kobe earthquake

    Directory of Open Access Journals (Sweden)

    Stelios M. Potirakis


    The nonlinear criticality of ultra-low frequency (ULF) magnetic variations is investigated before a particular earthquake (EQ) that occurred near Kobe on April 12, 2013, by applying "natural time" analysis to a few ULF parameters: Fh, Fz and Dh. The first two refer to radiation from the lithosphere, and the last corresponds to the depression of the horizontal component as a signature of ionospheric perturbation. A recent paper by our team, using the same data as in this paper but conventional statistical analysis, indicated a clear depression of the horizontal component as an ionospheric signature, but no convincing signature of lithospheric ULF radiation. This paper therefore extends our study of the electromagnetic data recorded prior to this EQ by searching for significant phenomena in the ULF effects (both the lithospheric radiation and the depression of the horizontal component) using critical, natural time analysis. The natural time analysis yielded that criticality at Shigaraki (SGA), the station closest to the EQ epicenter, is reached on March 27-29 for Fh and March 27 to April 1 for Fz (about two weeks before the EQ). Criticality for Dh was not observed at SGA, probably due to high noise; on the other hand, such criticality was observed at Kanoya (KNY) because of its known property of a wider detection range for ULF depression.
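
In natural time analysis, the k-th of N events is assigned the "natural time" χk = k/N and weighted by its normalized energy pk; the variance κ1 = ⟨χ²⟩ − ⟨χ⟩² is the criticality parameter, with κ1 ≈ 0.070 conventionally taken as the signature of the critical state. A minimal sketch (the event energies are synthetic; only the κ1 computation follows the standard definition):

```python
def kappa1(energies):
    """Natural-time variance kappa_1 = <chi^2> - <chi>^2 for ordered events."""
    n = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]            # normalized event energies
    chi = [(k + 1) / n for k in range(n)]        # natural time of each event
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    mean_sq = sum(pk * ck * ck for pk, ck in zip(p, chi))
    return mean_sq - mean * mean

# Synthetic sequence of event "energies"; criticality is declared when
# kappa_1 approaches ~0.070
events = [1.0, 0.4, 2.2, 0.9, 3.1, 0.6, 1.8, 2.5, 0.7, 1.2]
k1 = kappa1(events)
print(round(k1, 3))
```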

  15. Space conditions during a month of a sequence of six M > 6.8 earthquakes ending with the tsunami of 26 December 2004

    Directory of Open Access Journals (Sweden)

    A. Papandreou


    This paper examines space and seismological data for the period of about one month before the giant (M 9.3) Sumatra-Andaman earthquake (EQ). The combination of seismological and space data reveals some interesting features for this period: (1) six successive high-speed solar wind streams, evidently triggering a sudden increase of geomagnetic activity, were all followed by strong to giant (M > 6.8) EQs; (2) the six strong EQs present certain spatial-temporal constraints, with the epicentres of the EQs occurring at the edges of the Pacific Plate (the Sumatra-Andaman EQ occurred at the end of this series of EQs, eastward of the first one, in a clockwise direction); (3) the EQs occurred after a sudden increase of geomagnetic activity, as inferred from the 3-h Kp index, following a quiet geomagnetic period; and (4) the time delay of the M > 6.2 earthquakes (in the broad area examined) from the last sudden Kp maximum was on average ~1.5 days. These findings from the study of the Earth's space environment during the month preceding the giant (M 9.3) Sumatra-Andaman EQ provide new information for a possibly better understanding of the Sun-magnetosphere-lithosphere coupling.

  16. Numerical simulation of dynamic Coulomb stress changes induced by M6.5 earthquake in Wuding, Yunnan and its relationship with aftershocks

    Institute of Scientific and Technical Information of China (English)

    HU Xiong-lin; WU Xiao-ping; YANG Run-hai; FU Hong; HU Jia-fu; HUANG Yong


    Based on the discrete wavenumber method, we calculate the fields of dynamic Coulomb rupture stress changes and static stress changes caused by the M6.5 Wuding earthquake, and study their relationship with the subsequent aftershocks. The results show that the spatial distribution patterns of the positive regions of the dynamic stress peak value and the static stress value are similarly asymmetric, and are basically consistent with the distribution of aftershocks. The dynamic stress peak value and the static stress in the positive region exceed the triggering thresholds of 0.1 MPa and 0.01 MPa, respectively, which indicates that both the dynamic and static stresses favor the occurrence of aftershocks. This suggests that the influences of both dynamic and static stresses should be considered, rather than only one of them, when studying aftershock triggering in the near field.
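The static triggering criterion used in studies like this one reduces to a Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn, compared against a threshold. A minimal sketch, with an assumed effective friction coefficient and illustrative stress values (the abstract's thresholds of 0.01 MPa static and 0.1 MPa peak dynamic are used):

```python
# Coulomb failure stress change: dCFS = d_tau + mu_eff * d_sigma_n, with
# shear stress change resolved in the slip direction and normal stress
# positive in extension (unclamping). mu_eff = 0.4 is an assumed effective
# friction coefficient; the input stress values are illustrative only.

def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

def promotes_failure(dcfs_mpa, threshold_mpa):
    return dcfs_mpa >= threshold_mpa

static = coulomb_stress_change(0.008, 0.01)      # 0.012 MPa
print(promotes_failure(static, 0.01))            # True: above static threshold
print(promotes_failure(static, 0.1))             # False: below dynamic threshold
```

The same ΔCFS expression applies to both the static field and the transient dynamic peak; only the threshold compared against differs.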

  17. Estimating the probability of occurrence of earthquakes (M>6) in the Western part of the Corinth rift using fault-based and classical seismotectonic approaches. (United States)

    Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène


    -SISCOR Working Group. On the basis of this consensual logic tree, median probabilities of occurrence of M>=6 events were computed for the region of study. Time-dependent models (Brownian Passage Time and Weibull probability distributions) were also explored. The probability of a M>=6.0 event is found to be greater in the western region than in the eastern part of the Corinth rift, whether a fault-based or a classical seismotectonic approach is used. Percentile probability estimates are also provided to represent the range of uncertainties in the results. The percentile results show that, in general, probability estimates following the classical approach (based on the definition of seismotectonic source zones) cover the median values estimated following the fault-based approach. The fault-based approach in this region, however, is still affected by a high degree of uncertainty, because of the poor constraints on the 3D geometries of the faults and the high uncertainties in their slip rates.
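A time-dependent estimate of the kind explored here comes from a renewal model: given the Brownian Passage Time density and an elapsed time t0 since the last event, the conditional probability of an event in the next Δt years is the integral of the density over [t0, t0+Δt] divided by the survival probability at t0. A sketch with placeholder recurrence parameters (not the SISCOR logic-tree values):

```python
import math

# Brownian Passage Time renewal sketch: the density is
# f(t) = sqrt(mu / (2*pi*alpha^2*t^3)) * exp(-(t - mu)^2 / (2*mu*alpha^2*t)),
# with mu the mean recurrence time and alpha the aperiodicity. mu = 200 yr
# and alpha = 0.5 below are placeholders, not values from the study.

def bpt_pdf(t, mu, alpha):
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
           math.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

def cond_prob(t0, dt, mu, alpha, n=20000):
    """P(event in (t0, t0+dt] | no event up to t0), by trapezoid integration."""
    def integral(a, b):
        h = (b - a) / n
        s = 0.5 * (bpt_pdf(a, mu, alpha) + bpt_pdf(b, mu, alpha))
        s += sum(bpt_pdf(a + i * h, mu, alpha) for i in range(1, n))
        return s * h
    survive = 1.0 - integral(1e-6, t0)
    return integral(t0, t0 + dt) / survive

p = cond_prob(t0=150.0, dt=30.0, mu=200.0, alpha=0.5)
print(0.0 < p < 1.0)  # True
```

The conditional probability grows with elapsed time t0, which is what makes the renewal formulation "time-dependent" in contrast to a Poisson model.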

  18. Building Damage Assessment Using Multisensor Dual-Polarized Synthetic Aperture Radar Data for the 2016 M 6.2 Amatrice Earthquake, Italy

    Directory of Open Access Journals (Sweden)

    Sadra Karimzadeh


    Full Text Available On 24 August 2016, the M 6.2 Amatrice earthquake struck central Italy, a well-known seismically active region, causing considerable damage to buildings in the town of Amatrice and the surrounding area. Damage from this earthquake was assessed quantitatively by means of multitemporal synthetic aperture radar (SAR) coherence and SAR intensity methods using dual-polarized SAR data obtained from the Sentinel-1 (VV, VH) and ALOS-2 (HH, HV) satellites. We developed linear discriminant functions based on three items: (1) the differential coherence values; (2) the differential backscattering intensity values of pre- and post-event images; and (3) a binary damage map from pre- and post-event optical imagery. The accuracy of the proposed model was 84% for the Sentinel-1 data and 76% for the ALOS-2 data. The damage proxy maps deduced from the linear discriminant functions can be useful in the parcel-by-parcel assessment of building damage and in the development of spatial models for the allocation of urban search and rescue operations.

  19. Surface deformation due to the M6.5 Lefkada earthquake (17 November 2015) exploiting SENTINEL-1 and GNSS observations. Implications for seismic hazard. (United States)

    Elias, Panagiotis; Ganas, Athanassios; Briole, Pierre; Parcharidis, Isaak; Avallone, Antonio; Roukounakis, Nikos; Argyrakis, Panagiotis; Roger, Marine; Cheloni, Daniele; Tolomei, Cristiano; Mendonidis, Evangelos; Moraitini, Evelyn; Papanikolaou, Marios; Papastergios, Asterios


    The 17 November 2015 M=6.5 Lefkada earthquake in the Ionian Sea, Greece, produced tens of centimetres of co-seismic motion in both the Lefkada and Cephalonia islands. We present the full picture of the co-seismic displacements as mapped by space geodetic techniques, Sentinel-1A InSAR and permanent GNSS stations. We use these data, together with constraints from seismology, to invert for fault localisation, size and slip distribution. We observed post-seismic displacements throughout most of southern Lefkada and northern Cephalonia, recorded at the two NOA GNSS stations PONT and SPAN and at four additional permanent and six campaign GNSS stations established after the earthquake. Those displacements range from a few centimetres near the epicentre to a few millimetres far from the fault. We model the post-seismic displacements as due to uniform slip on the same fault plane that ruptured during the main event. The model shows right-lateral afterslip along the fault, but with slightly larger extension than the co-seismic slip, and located deeper. This transient strain followed the main event over a short period of 80 days, as modelled with an exponential law. Currently, the post-seismic deformation is being investigated by exploiting multi-temporal Sentinel-1A/B InSAR, processed among others with ESA's Geohazards Exploitation Platform and the SNAP software. The first challenging issue is the coherence, which is not high in the area due to vegetation cover. The second is the correction of the tropospheric component, which we estimate using the tropospheric delay at the permanent GNSS stations and a meteorological model based on WRF refined to a spatial resolution of 1 km. The earthquakes that have occurred in the central Ionian area since 1983, studied both by seismology and space geodesy, imply a seismic gap offshore NW Cephalonia that needs to be monitored.

  20. A stochastic estimate of ground motion at Oceano, California, for the M 6.5 22 December 2003 San Simeon earthquake, derived from aftershock recordings (United States)

    Di, Alessandro C.; Boatwright, J.


    The U.S. Geological Survey deployed a digital seismic station in Oceano, California, in February 2004, to investigate the cause of damage and liquefaction from the 22 December 2003 M 6.5 San Simeon earthquake. This station recorded 11 M > 2.8 aftershocks in almost 8 weeks. We analyze these recordings, together with recordings of the mainshock and the same aftershocks obtained from nearby stations in Park Hill and San Luis Obispo, to estimate the mainshock ground motion in Oceano. We estimate the Fourier amplitude spectrum using generalized spectral ratio analysis. We test a set of aftershocks as Green's functions by comparing simulated and recorded acceleration amplitude spectra for the mainshock at San Luis Obispo and Park Hill. We convolve the aftershock accelerograms with a stochastic operator to simulate the duration and phase of the mainshock accelerograms. This approximation allows us to extend the range of aftershocks that can be used as Green's functions to events nearly three magnitude units smaller than the mainshock. Our realizations for the mainshock accelerogram at Oceano yield peak ground accelerations distributed as 28% ± 4% g. We interpret these realizations as upper bounds for the actual ground motion, because our analysis assumes a linear response, whereas the presence of liquefaction indicates that the ground behaved nonlinearly in Oceano.

  1. Afterslip-dominated surface rupture in the M6.0 South Napa Earthquake as constrained by structure-from-motion analysis and terrestrial laser scanning (United States)

    DeLong, S. B.; Pickering, A.; Scharer, K. M.; Hudnut, K. W.; Lienkaemper, J. J.


    Near-fault surface deformation associated with the August 24, 2014 M6.0 South Napa earthquake included both coseismic and post-seismic slip. Initial synthesis of field observations and initial measurement and modeling of afterslip from traditional survey methods indicate that coseismic slip was minimal (Road, on discontinuous left-stepping en echelon ruptures. By August 26, the surface rupture became nearly continuous, and cultural features extracted from the TLS point clouds indicate horizontal slip magnitudes between 15 and 27 cm, increasing northward. By September 22, slip magnitudes had increased to between 26 and 46 cm. The lower slip magnitudes are to the south at Withers Road, and the general trend is increased slip to the north, but there is more slip variability along the fault trace in the September 15 data. From August 26 to September 15, the west side of the fault trace uplifted between 0.5 and 5 cm relative to east side. Increased relief on the surface rupture itself indicated a slight compressional component of the deformation. These results confirm that post-event air photos can be useful for rapid 3D mapping, and that the unparalleled accuracy of TLS data can be used to quantify even very subtle deformation patterns in three dimensions and document changes through time.

  2. Tectonic Seasonal Loading Inferred from cGPS Measurements as a Potential Trigger for the M6.0 South Napa Earthquake (United States)

    Kraner, M.; Holt, W. E.; Borsa, A. A.


    Measurements from continuous global positioning system (cGPS) networks continue to unfold details about transient strain signals [Mavrommatis et al., 2014; Heki, 2003]. Linking these transient strain signals to seismic events remains elusive, as it requires detailed information about the steady-state tectonic loading sources, faulting geometries, and strain distribution with depth. Here we use cGPS measurements to uncover a regional strain transient peaking just prior to the M6.0 August 24, 2014 South Napa earthquake. This signal appears to have produced a Coulomb stress increase, favoring slip on the West Napa faulting system. Analysis of cGPS time series during the interseismic period from 2006 to 2014 shows a stacked summer dilatational lobe of +142 ± 64 × 10^-9 in the 100 km² earthquake region. The Napa region is part of a broad, long-wavelength zone of positive dilatational strain and Coulomb stress increase peaking each summer season. Summer transients are associated with horizontal displacements of 3-5 mm directed eastward toward the Sacramento Basin and of 1-3 mm directed southwest toward the San Francisco Bay and Pacific Ocean. Winter transients involve the opposite of these motions, causing negative dilatational strains and negative Coulomb stress changes in the Napa region. We observe a significant increase in summer seismicity rates (greater than 95% confidence for a chi-square test) within regions of positive Coulomb stress change in Northern California. Large-scale models of vertical hydrologic loading predict some components of the long-wavelength horizontal signal in Northern California, but this loading accounts for only 20-30% of the total anomalous signal. We hypothesize that the remaining signal is associated with smaller-scale seasonal groundwater fluctuations in local basins (e.g., the Sonoma and Napa sub-basins) along with thermoelastic effects.
We provide details regarding the amount of thermoelastic strain from the elastic portion of the

  3. A Stochastic Estimate of Ground Motion at Oceano, California, for the M6.5 December 22, 2003, San Simeon Earthquake, Derived from Aftershock Recordings (United States)

    di Alessandro, C.; Boatwright, J.


    The U.S. Geological Survey deployed a digital seismic station in Oceano, California, in February 2004, to investigate the cause of damage and liquefaction from the 22 December 2003 M6.5 San Simeon earthquake. This station recorded 11 M > 2.8 aftershocks in almost eight weeks. We use these recordings, together with recordings of the main shock and the same aftershocks obtained from nearby stations in Park Hill and San Luis Obispo, to estimate the mainshock ground motion in Oceano. We estimate the Fourier amplitude spectrum using a generalized spectral ratio analysis that averages the spectral ratios from both stations for all the co-recorded aftershocks. We test three aftershocks as Green's functions by comparing simulated and recorded acceleration amplitude spectra for the main shock at Park Hill and San Luis Obispo. Instead of deconvolving the aftershock recordings from the mainshock recordings to estimate a source-time function, we convolve the aftershock accelerograms with a stochastic operator to simulate the duration and phase of the mainshock accelerograms. These stochastic operators are determined as sets of delta functions whose delays are randomly generated from a gamma distribution with a shape parameter of 1. We choose the scale parameter by fitting Husid plots of the Park Hill and San Luis Obispo mainshock accelerograms. This stochastic approach allows us to extend the range of aftershocks that can be used as Green's functions to events nearly three magnitude units smaller than the main shock. Our realizations for the mainshock accelerogram at Oceano yield PGAs distributed as 28 ± 4% g. We interpret these realizations as upper bounds for the actual ground motion because our analysis assumes that the ground behaved linearly, while the liquefaction and lateral spreading indicate that the ground behaved non-linearly. Geotechnical analysis of the site indicates that a PGA of 25% g would have initiated the liquefaction.
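The stochastic operator described above can be sketched directly: a gamma distribution with shape parameter 1 is an exponential distribution, so the operator is a train of delta functions with exponentially distributed delays, which is then convolved with the aftershock record. The scale parameter and the toy series below are illustrative; in the study the scale is fit to Husid plots of the mainshock records.

```python
import random

# Sketch of the stochastic Green's function operator: delta functions whose
# delays are drawn from a gamma distribution with shape parameter 1 (i.e. an
# exponential). scale_s = 2.0 s and the toy "accelerogram" are illustrative.

def stochastic_operator(n_deltas, scale_s, dt, rng):
    delays = [rng.gammavariate(1.0, scale_s) for _ in range(n_deltas)]
    length = int(max(delays) / dt) + 1
    op = [0.0] * length
    for d in delays:
        op[int(d / dt)] += 1.0 / n_deltas   # unit-area sum of deltas
    return op

def convolve(signal, op):
    out = [0.0] * (len(signal) + len(op) - 1)
    for i, s in enumerate(signal):
        for j, o in enumerate(op):
            out[i + j] += s * o
    return out

rng = random.Random(0)
op = stochastic_operator(50, 2.0, 0.01, rng)
simulated = convolve([1.0, 0.5, -0.3], op)  # toy aftershock record
print(len(simulated) == 3 + len(op) - 1)    # True
```

Because the operator has unit area, the convolution spreads the aftershock energy over the mainshock duration without changing its total spectral level at low frequency.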

  4. Stress-based aftershock forecasts made within 24h post mainshock: Expected north San Francisco Bay area seismicity changes after the 2014M=6.0 West Napa earthquake (United States)

    Parsons, Thomas E.; Segou, Margaret; Sevilgen, Volkan; Milner, Kevin; Field, Ned; Toda, Shinji; Stein, Ross S.


    We calculate stress changes resulting from the M= 6.0 West Napa earthquake on north San Francisco Bay area faults. The earthquake ruptured within a series of long faults that pose significant hazard to the Bay area, and we are thus concerned with potential increases in the probability of a large earthquake through stress transfer. We conduct this exercise as a prospective test because the skill of stress-based aftershock forecasting methodology is inconclusive. We apply three methods: (1) generalized mapping of regional Coulomb stress change, (2) stress changes resolved on Uniform California Earthquake Rupture Forecast faults, and (3) a mapped rate/state aftershock forecast. All calculations were completed within 24 h after the main shock and were made without benefit of known aftershocks, which will be used to evaluate the prospective forecast. All methods suggest that we should expect heightened seismicity on parts of the southern Rodgers Creek, northern Hayward, and Green Valley faults.

  5. Stress-based aftershock forecasts made within 24 h postmainshock: Expected north San Francisco Bay area seismicity changes after the 2014 M = 6.0 West Napa earthquake (United States)

    Parsons, Tom; Segou, Margaret; Sevilgen, Volkan; Milner, Kevin; Field, Edward; Toda, Shinji; Stein, Ross S.


    We calculate stress changes resulting from the M = 6.0 West Napa earthquake on north San Francisco Bay area faults. The earthquake ruptured within a series of long faults that pose significant hazard to the Bay area, and we are thus concerned with potential increases in the probability of a large earthquake through stress transfer. We conduct this exercise as a prospective test because the skill of stress-based aftershock forecasting methodology is inconclusive. We apply three methods: (1) generalized mapping of regional Coulomb stress change, (2) stress changes resolved on Uniform California Earthquake Rupture Forecast faults, and (3) a mapped rate/state aftershock forecast. All calculations were completed within 24 h after the main shock and were made without benefit of known aftershocks, which will be used to evaluate the prospective forecast. All methods suggest that we should expect heightened seismicity on parts of the southern Rodgers Creek, northern Hayward, and Green Valley faults.

  6. Shallow reflection imaging by PSDM of dense, wide-aperture data: application to the causative fault of the 1980, M6.9, southern Italy earthquake (United States)

    Castiello, Antonio; Bruno, Pier Paolo; Improta, Luigi


    Shallow reflection imaging of active faults in unconsolidated deposits is a challenging task. The main factors hindering seismic imaging are the presence of steeply dipping reflectors and strong lateral velocity changes across the fault zone, which often make standard CDP processing inappropriate. This drawback can in principle be overcome by prestack depth migration (PSDM). However, the performance of PSDM relies strongly on the availability of an accurate background velocity model, which is critical for properly accounting for seismic wave propagation and ray-path bending in the depth domain. Such a velocity model cannot be obtained with standard seismic reflection acquisition geometries, due to the small aperture of the receiver/shot array and to the difficulty of collecting good-quality near-vertical reflection data in the very near-surface. Consequently, PSDM of shallow reflection data is very rare in the scientific literature. Recent applications of PSDM to very complex crustal structures have revealed that the use of non-conventional, dense wide-aperture acquisition geometries makes it possible to address the problem of background velocity estimation. In this study, we investigate whether PSDM of dense, wide-aperture data can be an effective strategy for shallow imaging of complex structures such as fault zones. We target the Irpinia Fault (IF), source of the 1980, M6.9, southern Italy normal-faulting earthquake. A 256-m-long, ultrahigh-resolution wide-aperture profile was collected across the 1980 fault scarp in a small intermountain basin in the Southern Apennines range (Pantano di San Gregorio Magno). The source and receiver spacings are 3 m and 1.5 m, respectively, and the source is provided by a Buffalo gun. The survey aims at imaging the first 100 m of the subsurface and at providing valuable information on the fault zone architecture below a collocated paleoseismic trench. 
The presence of unconsolidated deposits above a limestone basin substratum translates into

  7. The 11/25/88, M=6 Saguenay Earthquake near Chicoutimi, Quebec: Evidence for anisotropic wave propagation in northeastern North America (United States)

    Hough, S. E.; Jacob, K. H.; Friberg, P. A.


    On November 25, 1988, a magnitude 6 earthquake occurred in the province of Quebec, Canada. This earthquake triggered nine digital strong motion instruments in New York and Maine at epicentral distances of 200 to 820 km, which were installed as part of an effort by the National Center for Earthquake Engineering Research (NCEER) to study ground motions and wave propagation in eastern North America. We calculate Q(f) at discrete frequencies from 0.6 to 26 Hz, assuming that geometrical spreading causes a 1/r^0.5 decay in spectral amplitudes. Of the nine stations, four are in the Adirondack Mountains in New York and three are in eastern Maine. If we calculate Q(f) for these two clusters of stations separately, we obtain higher values for the Adirondack stations. The Quebec-Adirondack path is along the strike of the predominant structural trends in northeastern North America, in the Grenville Province crust, while the Quebec-Maine path is at a high angle to the structural grain and crosses the boundary between the Grenville and Appalachian provinces. We thus have instrumental data in support of earlier observations based on contours of intensity from historic earthquakes: seismic wave propagation in northeastern North America is more efficient along the predominantly NE-SW striking geological trends. We address possible biases due to site effects.
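The Q(f) estimate described above amounts to a linear regression: assuming A(r) = A0 · r^-0.5 · exp(−πfr/(Qv)), removing the geometrical spreading and regressing ln(A·r^0.5) against distance r gives a slope of −πf/(Qv). A sketch with an assumed shear velocity and synthetic amplitudes (not the NCEER data):

```python
import math

# Single-frequency Q estimation from amplitude decay with distance, assuming
# A(r) = A0 * r^-0.5 * exp(-pi * f * r / (Q * v)). After removing the 1/r^0.5
# geometrical spreading, ln(A * r^0.5) is linear in r with slope -pi*f/(Q*v).
# v = 3.5 km/s and the synthetic amplitudes below are assumptions.

def estimate_q(freq_hz, dists_km, amps, v_km_s=3.5):
    y = [math.log(a * math.sqrt(r)) for a, r in zip(amps, dists_km)]
    n = len(dists_km)
    mx = sum(dists_km) / n
    my = sum(y) / n
    slope = sum((r - mx) * (yi - my) for r, yi in zip(dists_km, y)) / \
            sum((r - mx) ** 2 for r in dists_km)
    return -math.pi * freq_hz / (v_km_s * slope)

# Noise-free synthetic data generated with Q = 800 at f = 10 Hz is recovered:
q_true, f = 800.0, 10.0
dists = [200.0, 400.0, 600.0, 820.0]
amps = [r ** -0.5 * math.exp(-math.pi * f * r / (q_true * 3.5)) for r in dists]
print(round(estimate_q(f, dists, amps)))  # 800
```

Repeating the regression separately for the Adirondack and Maine station clusters, as in the study, yields the path-dependent Q values.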

  8. Time-dependent neo-deterministic seismic hazard scenarios: Preliminary report on the M6.2 Central Italy earthquake, 24th August 2016

    CERN Document Server

    Peresan, Antonella; Romashkova, Leontina; Magrin, Andrea; Soloviev, Alexander; Panza, Giuliano F


    A scenario-based Neo-Deterministic approach to Seismic Hazard Assessment (NDSHA) is available nowadays, which permits considering a wide range of possible seismic sources as the starting point for deriving scenarios by means of full waveforms modeling. The method does not make use of attenuation relations and naturally supplies realistic time series of ground shaking, including reliable estimates of ground displacement, readily applicable to complete engineering analysis. Based on the neo-deterministic approach, an operational integrated procedure for seismic hazard assessment has been developed that allows for the definition of time dependent scenarios of ground shaking, through the routine updating of earthquake predictions, performed by means of the algorithms CN and M8S. The integrated NDSHA procedure for seismic input definition, which is currently applied to the Italian territory, combines different pattern recognition techniques, designed for the space-time identification of strong earthquakes, with al...

  9. Damage Proxy Map from InSAR Coherence Applied to February 2011 M6.3 Christchurch Earthquake, 2011 M9.0 Tohoku-oki Earthquake, and 2011 Kirishima Volcano Eruption (United States)

    Yun, S.; Agram, P. S.; Fielding, E. J.; Simons, M.; Webb, F.; Tanaka, A.; Lundgren, P.; Owen, S. E.; Rosen, P. A.; Hensley, S.


    Under the ARIA (Advanced Rapid Imaging and Analysis) project at JPL and Caltech, we developed a prototype algorithm to detect surface property change caused by natural or man-made damage using InSAR coherence change. The algorithm was tested on building demolition and construction sites in downtown Pasadena, California. The developed algorithm performed significantly better, producing a 150% higher signal-to-noise ratio, than a standard coherence change detection method. We applied the algorithm to the February 2011 M6.3 Christchurch earthquake in New Zealand, the 2011 M9.0 Tohoku-oki earthquake in Japan, and the 2011 Kirishima volcano eruption in Kyushu, Japan, using ALOS PALSAR data. In the Christchurch area we detected three different types of damage: liquefaction, building collapse, and landslide. The detected liquefaction damage is extensive in the eastern suburbs of Christchurch, showing Bexley as one of the most significantly affected areas, as was reported in the media. Some places show sharp boundaries of liquefaction damage, indicating different types of ground materials that might have been formed by the meandering Avon River in the past. Well-reported damaged buildings such as Christchurch Cathedral, the Canterbury TV building, the Pyne Gould building, and the Cathedral of the Blessed Sacrament were detected by the algorithm. A landslide in Redcliffs was also clearly detected. These detected damage sites were confirmed with Google Earth images provided by GeoEye. The larger-scale damage pattern also agrees well with the ground truth damage assessment map, indicated with polygonal zones of 3 different damage levels, compiled by the government of New Zealand. The damage proxy map of the Sendai area in Japan shows man-made structure damage due to the tsunami caused by the M9.0 Tohoku-oki earthquake. Long temporal baseline (~2.7 years) and volume scattering caused significant decorrelation in the farmlands and bush forest along the coastline. The 2011 Kirishima volcano eruption caused a lot of ash

  10. Global Positioning System constraints on crustal deformation before and during the 21 February 2008 Wells, Nevada M6.0 earthquake (United States)

    Hammond, William C.; Blewitt, Geoffrey; Kreemer, Corné; Murray-Moraleda, Jessica R.; Svarc, Jerry L.; dePolo, Craig M.; LaPointe, Daphne D.


    Using Global Positioning System (GPS) data from permanent sites and U.S. Geological Survey (USGS) campaign data, we have estimated co-seismic displacements and secular background crustal deformation patterns associated with the 21 February 2008 Wells, Nevada earthquake. Estimated displacements at the nearby permanent GPS sites ELKO (84 km distant) and GOSH (81 km distant) are 1.0±0.2 mm and 1.1±0.3 mm, respectively. The magnitude and direction are in agreement with those predicted from a rupture model based on InSAR measurements of the near-field co-seismic surface displacement. Analysis of long GPS time series (>10 years) from the permanent sites within 250 km of the epicenter indicates that the eastern Nevada Basin and Range undergoes steady tectonic transtension, with rates on the order of 1 mm/year over approximately 250 km. The azimuth of maximum horizontal crustal extension is consistent with the azimuth of the Wells earthquake co-seismic slip vector. The orientation of crustal shear is consistent with deformation associated with Pacific/North America plate boundary relative motion seen elsewhere in the Basin and Range. In response to the event, we deployed a new GPS site with the capability to telemeter high-rate, low-latency data that will in the future allow rapid estimation of surface displacement should aftershocks or postseismic deformation occur. We estimated co-seismic displacements using campaign GPS data collected before and after the event; however, in most cases their uncertainties were larger than the offsets. Better precision in co-seismic displacement could have been achieved for the campaign sites if they had been surveyed more times or over a longer interval, to better estimate their pre-event velocities.

  11. Tearing the terroir: Details and implications of surface rupture and deformation from the 24 August 2014 M6.0 South Napa earthquake, California (United States)

    DeLong, Stephen B.; Donnellan, Andrea; Ponti, Daniel J.; Rubin, Ron S.; Lienkaemper, James J.; Prentice, Carol S.; Dawson, Timothy E.; Seitz, Gordon G.; Schwartz, David P.; Hudnut, Kenneth W.; Rosa, Carla M.; Pickering, Alexandra J; Parker, Jay W.


    The Mw 6.0 South Napa earthquake of 24 August 2014 caused slip on several active fault strands within the West Napa Fault Zone (WNFZ). Field mapping identified 12.5 km of surface rupture. These field observations, near-field geodesy and space geodesy, together provide evidence for more than ~30 km of surface deformation with a relatively complex distribution across a number of subparallel lineaments. Along a ~7 km section north of the epicenter, the surface rupture is confined to a single trace that cuts alluvial deposits, reoccupying a low-slope scarp. The rupture continued northward onto at least four other traces through subparallel ridges and valleys. Postseismic slip exceeded coseismic slip along much of the southern part of the main rupture trace with total slip 1 year postevent approaching 0.5 m at locations where only a few centimeters were measured the day of the earthquake. Analysis of airborne interferometric synthetic aperture radar data provides slip distributions along fault traces, indicates connectivity and extent of secondary traces, and confirms that postseismic slip only occurred on the main trace of the fault, perhaps indicating secondary structures ruptured as coseismic triggered slip. Previous mapping identified the WNFZ as a zone of distributed faulting, and this was generally borne out by the complex 2014 rupture pattern. Implications for hazard analysis in similar settings include the need to consider the possibility of complex surface rupture in areas of complex topography, especially where multiple potentially Quaternary-active fault strands can be mapped.

  12. The (Un)Productivity of the 2014 M6.0 South Napa Aftershock Sequence (United States)

    Llenos, A. L.


    The M6.0 South Napa mainshock produced fewer aftershocks than expected for a California earthquake of its magnitude, which became apparent a few days into the sequence. In the first 4.5 days, only 59 M≥1.8 aftershocks had occurred, the largest of which was a M3.9 that happened a little over two days after the mainshock. In contrast, during the same time period the 2004 M6.0 Parkfield earthquake had over 220 M≥1.8 aftershocks, 6 of which were M≥4. Here I investigate the aftershock productivity and other sequence statistics of the South Napa sequence and compare it with other M~6 California mainshock-aftershock sequences. By focusing on similar size events, they have similar finite extents within the seismotectonic environment. While the productivities of these sequences vary quite a bit, the b-values of the magnitude-frequency distributions all fall in the 0.6-0.8 range for the northern California sequences, slightly lower than the b-value of ~1 typical of southern California seismicity. Despite the relatively low productivity of the South Napa sequence, I show that the Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) describes the sequence well and investigate whether the ETAS model parameters suggest that low-productivity sequences are typical for the region. I also explore how quickly after a mainshock these types of models can capture the low productivity of the sequence. The productivity of a sequence is a critical parameter in determining the aftershock probabilities reported in the days following the mainshock. Therefore, the sooner an accurate representation of the aftershock productivity can be obtained, the sooner more accurate aftershock probability reports can be produced.
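The b-values of 0.6-0.8 quoted above are typically computed with the Aki maximum-likelihood estimator, b = log10(e) / (mean(M) − (Mc − ΔM/2)), where Mc is the completeness magnitude and ΔM the magnitude binning. A sketch on a synthetic magnitude list (not the South Napa catalog):

```python
import math

# Maximum-likelihood b-value (Aki, 1965, with the Utsu binning correction):
# b = log10(e) / (mean(M) - (Mc - dM/2)) for magnitudes binned at dM.
# The magnitude list below is synthetic and purely illustrative.

def b_value(mags, mc, dm=0.1):
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

mags = [1.8, 1.9, 2.0, 2.1, 2.3, 2.6, 3.0, 3.9]
print(round(b_value(mags, mc=1.8), 2))  # 0.62
```

A lower b-value (more large events relative to small ones) and a lower ETAS productivity parameter both feed directly into the aftershock probability reports the abstract discusses, which is why estimating them quickly matters.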

  13. The Parkfield tremors reveal slow and fast ruptures on the same asperity (United States)

    Veedu, Deepa Mele; Barbot, Sylvain


    The deep extension of the San Andreas Fault is believed to be creeping, but the recent observations of tectonic tremors from these depths indicate a complex deformation style. In particular, an isolated tremor source near Parkfield has been producing a sequence of low-frequency earthquakes that indicates an uncommon mechanism of stress accumulation and release. The tremor pattern regularly oscillated between three and six days from mid-2003 until it was disrupted by the 2004 magnitude 6.0 Parkfield earthquake. After that event, the tremor source ruptured only about every three days, but over the next two years it gradually returned to its initial alternating recurrence pattern. The mechanism that drives this recurrence pattern is unknown. Here we use physics-based models to show that the same tremor asperity—the region from which the low-frequency earthquakes radiate—can regularly slip in slow and fast ruptures, naturally resulting in recurrence intervals alternating between three and six days. This unusual slip behaviour occurs when the tremor asperity size is close to the critical nucleation size of earthquakes. We also show that changes in pore pressure following the Parkfield earthquake can explain the sudden change and gradual recovery of the recurrence intervals. Our findings suggest a framework for fault deformation in which the same asperity can release tectonic stress through both slow and fast ruptures.

  14. Olivetti M6 640

    CERN Multimedia


    The M6-640 is the highest performance personal computer workstation in the Suprema range with multimedia, document imaging and communications capabilities. It has a 90MHz Pentium processor with 256Kb of secondary cache. It can accommodate up to 128Mb RAM and supports hard disks of up to 1Gb through an IDE interface.

  15. Local Postseismic Relaxation Observed After the 1992 Landers (M=7.3), 1999 Hector Mine (M=7.1), 2002 Denali (M=7.9), and 2003 San Simeon (M=6.5) Earthquakes (United States)

    Svarc, J. L.; Savage, J. C.


    The U. S. Geological Survey has observed the local postseismic deformation following the 1992 Landers (M=7.3), 1999 Hector Mine (M=7.1), 2002 Denali (M=7.9), and 2003 San Simeon (M=6.5) earthquakes. The observations consist of repeated campaign-style GPS surveys of geodetic arrays (aperture ~50 km) in the epicentral area of each earthquake. The data span the intervals from 0.037 to 5.6, 0.0025 to 4.5, 0.022 to 1.6, and 0.005 to 0.55 yr postearthquake for the Landers, Hector Mine, Denali, and San Simeon earthquakes, respectively. We have reduced the observations to positions of the monuments measured relative to another monument within the array. The temporal dependence of the relative displacements for each monument can be approximated by a+bt+c(1-exp[-t/d]), where a, b, c, and d are constants particular to that monument and t is the time after the earthquake. The relaxation times d were found to be 0.367±0.062, 0.274±0.024, 0.145±0.017, and 0.032±0.002 yr for the Landers, Hector Mine, Denali, and San Simeon earthquakes, respectively. The observed increase in d with the duration of the time series fit suggests that the relaxation process involves more than a single relaxation time. An alternative function a'+b't+c'log(1+t/d'), where a', b', c', and d' are constants particular to each monument, furnishes a better fit to the data. This logarithmic form of the relaxation (Lomnitz creep function), identical to the calculated response of a simple spring-slider system subject to rate-state friction [Marone et al., 1991], contains a continuous spectrum of relaxation times. In fitting data the time constant d' is determined by observations within the first few days postseismic and consequently is poorly defined. Adequate fits to the data are found by simply setting d'=0.001 yr and determining a', b', and c' by linear least squares. That the temporal dependence is so readily fit by both exponential and logarithmic functions suggests that the temporal dependence by itself
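The final fitting step described above is linear once d' is fixed: with d' = 0.001 yr, the model u(t) = a' + b't + c'·log(1 + t/d') is linear in (a', b', c') and can be solved by ordinary least squares. A sketch with a tiny normal-equation solver and a synthetic displacement series (not the USGS survey data):

```python
import math

# Linear least-squares fit of the Lomnitz-type relaxation
# u(t) = a' + b'*t + c'*log(1 + t/d') with d' fixed at 0.001 yr, as in the
# abstract. The 3x3 normal-equation solver and the synthetic series are
# illustrative only.

def fit_log_relaxation(times, disps, d=0.001):
    rows = [[1.0, t, math.log(1.0 + t / d)] for t in times]
    # Normal equations: (X^T X) p = X^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, disps)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 augmented system
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            fac = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= fac * m[col][c]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        p[i] = (m[i][3] - sum(m[i][j] * p[j] for j in range(i + 1, 3))) / m[i][i]
    return p

# Noise-free synthetic data with (a', b', c') = (2.0, 1.5, 0.8) is recovered:
times = [0.01, 0.05, 0.1, 0.5, 1.0, 2.0]
disps = [2.0 + 1.5 * t + 0.8 * math.log(1 + t / 0.001) for t in times]
a, b, c = fit_log_relaxation(times, disps)
print(round(a, 3), round(b, 3), round(c, 3))  # 2.0 1.5 0.8
```

Because the logarithmic term varies fastest at small t, the abstract's point that d' is constrained only by the first few days of data shows up here as near-degeneracy between a' and c' when early observations are missing.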

  16. Seismological evidence of an active footwall shortcut thrust in the Northern Itoigawa-Shizuoka Tectonic Line derived by the aftershock sequence of the 2014 M 6.7 Northern Nagano earthquake (United States)

    Panayotopoulos, Yannis; Hirata, Naoshi; Hashima, Akinori; Iwasaki, Takaya; Sakai, Shin'ichi; Sato, Hiroshi


    A destructive M 6.7 earthquake struck Northern Nagano prefecture on November 22, 2014. The main shock occurred on the Kamishiro fault segment of the northern Itoigawa-Shizuoka Tectonic Line (ISTL). We used data recorded at 41 stations of the local seismographic network in order to locate 2118 earthquakes that occurred between November 18 and November 30, 2014. To estimate hypocenters, we assigned low Vp models to stations within the Northern Fossa Magna (NFM) basin thus accounting for large lateral crustal heterogeneities across the Kamishiro fault. In order to further improve accuracy, the final hypocenter locations were recalculated inside a 3D velocity model using the double-difference method. We used the aftershock activity distribution and focal mechanism solutions of major events in order to estimate the source fault area of the main shock. Our analysis suggests that the shallow part of the source fault corresponds to the surface trace of the Kamishiro fault and dips 30°-45° SE, while the deeper part of the source fault corresponds to the downdip portion of the Otari-Nakayama fault, a high angle fault dipping 50°-65° SE that formed during the opening of the NFM basin in the Miocene. Along its surface trace the Otari-Nakayama fault has been inactive during the late Quaternary. We verified the validity of our model by calculating surface deformation using a simple homogeneous elastic half-space model and comparing it to observed surface deformation from satellite interferometry, assuming large coseismic slip in the areas of low seismicity and small coseismic slip in the areas of high seismicity. Shallowing of the source fault from 50°-65° to 30°-45° in the upper 4 km, in the areas where both surface fault traces are visible, is a result of footwall shortcut thrusting by the Kamishiro fault off the Otari-Nakayama fault.

  17. Ultrashallow seismic imaging of the causative fault of the 1980, M6.9, southern Italy earthquake by pre-stack depth migration of dense wide-aperture data (United States)

    Bruno, Pier Paolo; Castiello, Antonio; Improta, Luigi


    A two-step imaging procedure, including pre-stack depth migration (PSDM) and non-linear multiscale refraction tomography, was applied to dense wide-aperture data with the aim of imaging the causative fault of the 1980, M6.9, Irpinia normal faulting earthquake in a very complex geologic environment. PSDM is often ineffective for ultrashallow imaging (100 m of depth and less) of laterally heterogeneous media because of the difficulty in estimating a correct velocity model for migration. Dense wide-aperture profiling allowed us to build accurate velocity models across the fault zone by multiscale tomography and to record wide-angle reflections from steep reflectors. PSDM provided better imaging with respect to conventional post-stack depth migration, and improved definition of fault geometry and apparent cumulative displacement. Results indicate that this imaging strategy can be very effective for near-surface fault detection and characterization. Fault location and geometry are in agreement with paleoseismic data from two nearby trenches. The estimated vertical fault throw is only 29-38 m. This value, combined with the vertical slip rate determined by trench data, suggests a young age (97-127 kyr) of fault inception.

  18. A Look Inside the San Andreas fault at Parkfield Through Vertical Seismic Profiling (United States)

    Chavarria, J.A.; Malin, P.; Catchings, R.D.; Shalev, E.


    The San Andreas Fault Observatory at Depth pilot hole is located on the southwestern side of the Parkfield San Andreas fault. This observatory includes a vertical seismic profiling (VSP) array. VSP seismograms from nearby micro-earthquakes contain signals between the P and S waves. These signals may be P and S waves scattered by the local geologic structure. The collected scattering points form planar surfaces that we interpret as the San Andreas fault and four other secondary faults. The scattering process includes conversions between P and S waves, the strengths of which suggest large contrasts in material properties, possibly indicating the presence of cracks or fluids.

  19. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities (United States)

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.


    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
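The quoted hazard behavior can be reproduced numerically: BPT(μ, α) is an inverse-Gaussian distribution with mean μ and shape μ/α². A small sketch with scipy.stats.invgauss (illustrative only, time normalized so μ = 1):

```python
import numpy as np
from scipy.stats import invgauss

def bpt_hazard(t, mu=1.0, alpha=0.5):
    """Instantaneous failure rate of the Brownian passage time model.

    BPT(mu, alpha) is an inverse-Gaussian distribution with mean mu and
    shape lam = mu / alpha**2; scipy's invgauss takes mu_param = mean/shape
    and scale = shape.
    """
    lam = mu / alpha**2
    dist = invgauss(mu / lam, scale=lam)
    return dist.pdf(t) / dist.sf(t)   # hazard = pdf / survival function

# Hazard at mu/2, mu, and 5*mu (mu = 1, so the mean rate is 1)
h = bpt_hazard(np.array([0.5, 1.0, 5.0]))
```

For α = 0.5 the hazard is still just below the mean rate at t = μ/2, exceeds it by t = μ, and settles near the asymptote 1/(2α²μ) = 2/μ, matching the abstract's description.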

  20. Ionospheric anomalies related to the (M = 7.3), August 27, 2012, Puerto earthquake, (M = 6.8), August 30, 2012 Jan Mayen Island earthquake, and (M = 7.6), August 31, 2012, Philippines earthquake: two-dimensional principal component analysis. (United States)

    Lin, Jyh-Woei


    Two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA) are used to examine ionospheric total electron content (TEC) data during the time period from 00:00 on August 21 to 12:45 on August 31 (UT), the 10 days before the M = 7.6 Philippines earthquake at 12:47:34 on August 31, 2012 (UT), which had a depth of 34.9 km. From the results using 2DPCA, a TEC precursor of the Philippines earthquake is found during the time period from 4:25 to 4:40 on August 28, 2012 (UT), with a duration of at least 15 minutes. Another earthquake-related TEC anomaly is detectable for the time period from 04:35 to 04:40 on August 27, 2012 (UT), with a duration of at least 5 minutes, during the Puerto earthquake at 04:37:20 on August 27, 2012 (UT) (Mw = 7.3), which had a depth of 20.3 km. A precursor of the Puerto earthquake is not detectable. No TEC anomaly is found related to the Jan Mayen Island earthquake (Mw = 6.8) at 13:43:24 on August 30, 2012 (UT). These earthquake-related TEC anomalies are detectable using 2DPCA but not PCA. They are localized near the epicenters of the Philippines and Puerto earthquakes.
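The 2DPCA referred to here is, in its usual formulation (Yang et al., 2004), a PCA on an image covariance built without flattening the 2-D maps. A hypothetical numpy sketch on synthetic "TEC maps" (random background plus a coherent stripe standing in for an anomaly; not the author's data or code) illustrates why a spatially coherent disturbance stands out in the leading 2DPCA eigenvalue:

```python
import numpy as np

def tdpca_principal_share(maps):
    """2DPCA image covariance G = mean_i (A_i - Abar)^T (A_i - Abar),
    computed without flattening the maps; returns the fraction of total
    variance carried by the leading eigenvector."""
    A = np.asarray(maps, dtype=float)              # shape (M, rows, cols)
    dev = A - A.mean(axis=0)
    G = np.einsum('mij,mik->jk', dev, dev) / A.shape[0]
    w = np.linalg.eigvalsh(G)                      # ascending eigenvalues
    return w[-1] / w.sum()

rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 1.0, (24, 16, 16))         # background "TEC" noise
disturbed = quiet.copy()
disturbed[:, :, 5] += np.linspace(0.0, 6.0, 24)[:, None]  # coherent anomaly

s_quiet = tdpca_principal_share(quiet)
s_disturbed = tdpca_principal_share(disturbed)
```

With pure noise the variance is spread across all eigenvectors; the coherent column concentrates it in the leading one, which is the kind of signal this family of studies flags as an anomaly.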

  1. The physical nature of thermal anomalies observed before strong earthquakes (United States)

    Pulinets, S. A.; Ouzounov, D.; Karelin, A. V.; Boyarchuk, K. A.; Pokhmelnykh, L. A.

    The paper examines the effect of air ionization on the thermal balance of the atmospheric boundary layer. In seismically active areas the increased radon emanation from active faults and cracks before earthquakes is the primary source of air ionization. The problem is analyzed on both microscopic and macroscopic levels, and in both cases significant changes of the air relative humidity and air temperature are obtained. This happens due to the attachment of water molecules to the newly formed ions (in other words, condensation), which leads to the release of latent heat. The obtained results permit us to explain the changes of the surface temperature and the increase of the surface latent heat flux before earthquakes observed by remote sensing satellites, as well as ground-based measurements of the air temperature and relative humidity variations before the Colima earthquake (M7.6) of 2003 in Mexico, the Hector Mine earthquake (M7.1) of 1999 in the USA, and the Parkfield earthquake (M6) of 2004 in the USA. These findings are also supported by the results of active experiments in which installations for artificial ionization of the atmosphere are used.

  2. Geotechnical Extreme Events Reconnaissance Report on the Performance of Structures in Densely Urbanized Areas Affected by Surface Fault Rupture During the August 24, 2014 M6 South Napa Earthquake, California, USA. (United States)

    Cohen-Waeber, J.; Lanzafame, R.; Bray, J.; Sitar, N.


    The August 24, 2014, Mw 6.0 South Napa earthquake is the largest seismic event to have occurred in the San Francisco Bay Region, California, USA, since the Mw 6.9 1989 Loma Prieta earthquake. The epicenter was at the south end of the Napa Valley, California, and the event principally ruptured northwest along parts of the active West Napa fault zone. Bounded by two major fault zones to the east and west (the Calaveras and Rodgers Creek, respectively), the Napa Valley is filled with up to 170 m of alluvial deposits, is considered moderately to very highly susceptible to liquefaction, and has the potential for violent shaking. While damage due to strong ground shaking was significant, remarkably little damage due to liquefaction- or landslide-induced ground deformation was observed, possibly because of the recent drought in the region. Instead, the South Napa earthquake is the first to produce significant surface rupture in this area since the Mw 7.9 1906 San Andreas event, and the first in Northern California to rupture through a densely urbanized environment. Clear expressions of surface fault rupture extended approximately 12-15 km northward from the epicenter and approximately 1-2 km southeast, with a significant impact on infrastructure, including roads, lifelines and residential structures. The National Science Foundation funded Geotechnical Extreme Events Reconnaissance (GEER) Association presents here its observations on the performance of structures affected by surface fault rupture in a densely populated residential neighborhood located approximately 10 km north of the epicenter. Based on the detailed mapping of 27 residential structures, a preliminary assessment of the quantitative descriptions of damage shows certain characteristic interactions between surface fault rupture and the overlying infrastructure: 48% of concrete slabs cracked up to 8 cm wide, 19% of structures shifted up to 11 cm off of their foundations and 44% of foundations cracked up to 3 cm

  3. Testing for Changes in Crustal Velocity at the Tocopilla Earthquake, Northern Chile (United States)

    Richter, T.; Asch, G.; Kind, R.


    We use two different techniques to investigate the region between Antofagasta and Arica in northern Chile for crustal velocity changes. Data are taken from the 19 broadband stations of the IPOC project (Integrated Plate Boundary Observatory Chile), operated partly since 2006 by GFZ and the Institut de Physique du Globe de Paris (IPGP). In the neighborhood of the seismic stations an M7.0 earthquake occurred near Tocopilla on 14 November 2007. Other studies have shown that seismic velocities may change in the course of such earthquakes (e.g. Brenguier et al. 2008). The first method tests for phase shifts in receiver functions. To avoid varying travel paths of different events we compare events located in small source regions. Although temporal variations have been found in receiver functions for the Parkfield M6.0 and San Simeon M6.5 earthquakes (Audet 2006), we cannot find any variations exceeding the noise level of our dataset at the time of the M7.0 earthquake near Tocopilla. Therefore the data are also analyzed with the cross-correlation technique applied to ambient seismic noise (Bensen et al. 2007). Compared to the first method it has the advantage of regularly available correlation functions (e.g. 1 per day). We report on first results.
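The ambient-noise approach mentioned last is commonly turned into a velocity-change estimate with the stretching method: find the stretch of the daily correlation function that best matches a reference. The sketch below is a generic illustration on synthetic waveforms (not the authors' processing):

```python
import numpy as np

def stretching_dvv(ref, cur, dt, eps_grid=None):
    """Grid-search the stretch factor eps that best maps the current
    correlation function onto the reference; dv/v = -eps at the best fit."""
    if eps_grid is None:
        eps_grid = np.linspace(-0.02, 0.02, 401)
    t = np.arange(ref.size) * dt
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t * (1.0 + eps), t, cur)  # cur at stretched times
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc

# Synthetic check: a 0.5% velocity drop delays every arrival by 0.5%,
# i.e. cur(t) = ref(t / 1.005) ~ ref(0.995 t)
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ref = np.sin(2.0 * np.pi * t) * np.exp(-0.1 * t)
cur = np.interp(0.995 * t, t, ref)
dvv, cc = stretching_dvv(ref, cur, dt)
```

The recovered dv/v is -0.5%, with a correlation coefficient near 1; on real data the same search is run day by day against a long-term reference stack.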

  4. Earthquake

    Institute of Scientific and Technical Information of China (English)


    A serious earthquake happened in Wenchuan, Sichuan. Over 60,000 people died in the earthquake, and millions of people lost their homes. After the earthquake, people showed their love in different ways. Some gave food, medicine and everything necessary, some gave money,

  5. 7th U.S. / Japan Natural Resources (UJNR) Panel on Earthquake Research: Abstract Volume and Technical Program (United States)

    Detweiler, Shane T.; Ellsworth, William L.


    The U.S. / Japan Natural Resources (UJNR) Panel on Earthquake Research promotes advanced study toward a more fundamental understanding of the earthquake process and hazard estimation. The Panel promotes basic and applied research to improve our understanding of the causes and effects of earthquakes and to facilitate the transmission of research results to those who implement hazard reduction measures on both sides of the Pacific and around the world. Meetings are held every other year and alternate between the two countries, with short presentations on current research and local field trips as the highlights. The 5th Joint Panel meeting was held at Asilomar, California in October, 2004. The technical sessions featured reports on the September 28, 2004 Parkfield, California earthquake, progress on earthquake early warning and rapid post-event assessment technology, probabilistic earthquake forecasting and the newly discovered phenomenon of nonvolcanic tremor. The Panel visited the epicentral region of the M 6.0 Parkfield earthquake and viewed the surface ruptures along the San Andreas Fault. They also visited the San Andreas Fault Observatory at Depth (SAFOD), which had just completed the first phase of drilling into the fault. The 6th Joint Panel meeting was held in Tokushima, Japan in November, 2006. The meeting included very productive exchanges of information on approaches to systematic observation of earthquake processes. Sixty-eight technical papers were presented during the meeting on a wide range of subjects, including interplate earthquakes in subduction zones, slow slip and nonvolcanic tremor, crustal deformation, recent earthquake activity and hazard mapping. Through our discussions, we reaffirmed the benefits of working together to achieve our common goal of reducing earthquake hazard, and continued cooperation on issues involving densification of observation networks and the open exchange of data among scientific communities.
We also reaffirmed the importance of

  6. San Andreas Fault, California, M 5.5 or greater Earthquakes 1800-2000 (United States)

    Toppozada, T.; Branum, D.; Reichle, M.; Hallstrom, C.


    The San Andreas fault has been the most significant source of major California earthquakes since 1800. From 1812 to 1906 it generated four major earthquakes of M 7.2 or greater in two pairs on two major regions of the fault. A pair of major earthquakes occurred on the Central to Southern region, where the 1857 faulting overlapped the 1812 earthquake faulting. And a pair of major earthquakes occurred on the Northern region, where the 1906 faulting overlapped the 1838 earthquake faulting. The 1812 earthquake resulted from a rupture of up to about 200 km, from the region of Cajon Pass to as far as about 50 km west of Fort Tejon (Sieh and others, 1989). This rupture is the probable source of both the destructive 1812.12.8 "San Juan Capistrano" and the 1812.12.21 "Santa Barbara Channel" earthquakes. The 1838 earthquake's damage effects throughout the Bay area, from San Francisco to Santa Clara Valley and Monterey, were unequalled by any Bay area earthquake other than the 1906 event. The mainshock's effects, and numerous strong probable aftershocks in the San Juan Bautista vicinity in the following three years, suggest 1838 faulting from San Francisco to San Juan Bautista, and M about 7.4. The 630 km length of the San Andreas fault between San Francisco and Cajon Pass ruptured in the 1838 and 1857 earthquakes, except for about 75 km between Bitterwater and San Juan Bautista. The 1840-1841 probable aftershocks of the 1838 event occurred near San Juan Bautista, and the foreshocks and aftershocks of the 1857 event occurred near Bitterwater. In the Bitterwater area, strong earthquakes continued to occur until the 1885 earthquake of M 6.5. Near Parkfield, 40 to 70 km southeast of Bitterwater, M 5.5 or greater earthquakes have occurred from the 1870s to the 1960s. In the total Bitterwater to Parkfield zone bracketing the northern end of the 1857 rupture, the seismicity and moment release has decreased steadily since 1857, and has tended to migrate southeastward with time. 
The

  7. Research on the Disaster Reduction Benefit of the "Anju Fumin" Earthquake-Resistant Housing Project---Taking the M6.6 Earthquake at the Xinyuan-Hejing Junction in 2012 as an Example

    Institute of Scientific and Technical Information of China (English)

    刘军; 宋立军; 胡伟华; 李志强; 谭明


    The earthquake damage characteristics of the M6.6 earthquake at the Xinyuan-Hejing junction on June 30, 2012 are introduced, with the damage to the "Anju" welfare housing examined in particular. Using a seismic damage model, the damage that the building stock would have suffered at the same shaking intensity before the earthquake-resistant housing project was implemented is derived. Parameters such as damaged building area, numbers of casualties and of people left homeless, and direct economic losses are calculated for each housing type and compared with the field survey data from this earthquake. The results show that the earthquake-resistant housing project plays an important role in safeguarding people's lives and property and in reducing government relief expenditures in a destructive earthquake; relative to the initial construction investment, the project yields substantial disaster mitigation benefits.

  8. The ShakeMaps of the Amatrice, M6, earthquake

    National Research Council Canada - National Science Library

    Licia Faenza; Valentino Lauciani; Alberto Michelini


    In this paper we describe the performance of the ShakeMap software package and the fully automatic procedure, based on manually revised location and magnitude, during the main event of the Amatrice...

  9. Astronomical alignments as the cause of ~M6+ seismicity

    CERN Document Server

    Omerbashich, Mensur


    I here demonstrate empirically my georesonator concept in which tidally induced magnification of Earth masses' resonance causes seismicity. To that end, I show that all strong (~M6+) earthquakes of 2010 occurred during the Earth's long (t>3 day) astronomical alignments within our solar system. I then show that the same holds true for all very strong (~M8+) earthquakes of the decade of 2000s. Finally, the strongest (M8.6+) earthquakes of the past century are shown to have occurred during the Earth's multiple long alignments, whereas half of the high-strongest (M9+) ones occurred during the Full Moon. I used the comet C/2010 X1 (Elenin), as it has been adding to robustness in terms of very strong seismicity since 2007 (in terms of strongest seismicity: since 1965). The Elenin will continue intensifying the Earth's very strong seismicity until August-October, 2011. Approximate forecast of earthquakes based on my discoveries is feasible. This demonstration proves my hyperresonator concept, arrived at earlier as a...

  10. The ethics of earthquake prediction. (United States)

    Sol, Ayhan; Turan, Halil


    Scientists' responsibility to inform the public about their results may conflict with their responsibility not to cause social disturbance by the communication of these results. A study of the well-known Brady-Spence and Iben Browning earthquake predictions illustrates this conflict in the publication of scientifically unwarranted predictions. Furthermore, a public policy that considers public sensitivity caused by such publications as an opportunity to promote public awareness is ethically problematic from (i) a refined consequentialist point of view that any means cannot be justified by any ends, and (ii) a rights view according to which individuals should never be treated as a mere means to ends. The Parkfield experiment, the so-called paradigm case of cooperation between natural and social scientists and the political authorities in hazard management and risk communication, is also open to similar ethical criticism. For the people in the Parkfield area were not informed that the whole experiment was based on a contested seismological paradigm.

  11. Mapping the rupture process of moderate earthquakes by inverting accelerograms (United States)

    Hellweg, M.; Boatwright, J.


    We present a waveform inversion method that uses recordings of small events as Green's functions to map the rupture growth of moderate earthquakes. The method fits P and S waveforms from many stations simultaneously in an iterative procedure to estimate the subevent rupture time and amplitude relative to the Green's function event. We invert the accelerograms written by two moderate Parkfield earthquakes using smaller events as Green's functions. The first earthquake (M = 4.6) occurred on November 14, 1993, at a depth of 11 km under Middle Mountain, in the assumed preparation zone for the next Parkfield main shock. The second earthquake (M = 4.7) occurred on December 20, 1994, some 6 km to the southeast, at a depth of 9 km on a section of the San Andreas fault with no previous microseismicity and little inferred coseismic slip in the 1966 Parkfield earthquake. The inversion results are strikingly different for the two events. The average stress release in the 1993 event was 50 bars, distributed over a geometrically complex area of 0.9 km2. The average stress release in the 1994 event was only 6 bars, distributed over a roughly elliptical area of 20 km2. The ruptures of both events appear to grow spasmodically into relatively complex shapes: the inversion only constrains the ruptures to grow more slowly than the S wave velocity but does not use smoothness constraints. Copyright 1999 by the American Geophysical Union.
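The core of such an inversion, representing the mainshock as a sum of delayed, scaled copies of a small-event Green's function, can be sketched as a nonnegative least-squares problem. This is a single-station, noise-free toy stand-in for the iterative multi-station procedure described above, with entirely synthetic waveforms:

```python
import numpy as np
from scipy.optimize import nnls

def subevent_amplitudes(mainshock, green, max_lag):
    """Model the mainshock as u(t) = sum_k a_k * g(t - k), a_k >= 0, where
    g is the small-event Green's function; the nonnegative amplitudes a_k
    approximate a relative source time function of the larger rupture."""
    n = mainshock.size
    G = np.zeros((n, max_lag))
    for k in range(max_lag):
        G[k:, k] = green[:n - k]          # k-sample delayed copy of g
    a, _residual = nnls(G, mainshock)
    return a

# Toy mainshock: two subevents, amplitudes 2 and 1, lags 0 and 30 samples
g = np.exp(-0.05 * np.arange(200)) * np.sin(0.3 * np.arange(200))
u = 2.0 * g.copy()
u[30:] += g[:-30]
a = subevent_amplitudes(u, g, 60)
```

In this noise-free case the two subevents are recovered exactly; the real inversion fits P and S waveforms from many stations simultaneously and solves for rupture times as well as amplitudes.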

  12. Depth-Dependent Low-Velocity Structure of the San Andreas Fault near the SAFOD Drilling Site at Parkfield from Fault-Zone Seismic Waves (United States)

    Alvarez, M.; Li, Y.; Vidale, J.; Cochran, E.


    Coordinated by the SAFOD PIs, we used 96 PASSCAL short-period three-component seismometers in linear arrays deployed across and along the San Andreas fault (SAF) near the town of Parkfield and the SAFOD drilling site in 2002 and 2003, respectively. The data recorded for near-surface explosions detonated in the experiments (Li and Vidale), the PASO project (Thurber and Roecker) and refraction profiling (Hole), and for local earthquakes show fault-zone trapped waves clearly when the source and receivers are located close to the fault. The time duration of the dominant trapped energy after S-arrivals increases progressively with the event-to-array distance and focal depth. Using a finite-difference code, we first synthesize fault-zone trapped waves generated by explosions to determine the shallowest 1 or 2 km of fault zone structure, with velocity constraints from seismic profiling of the shallow SAF at Parkfield [Catchings et al., 2002]. We then strip shallow effects to resolve deeper structure of the fault zone, and synthesize trapped waves from earthquakes at depths between 2.5 and 11 km to complete a model of the SAF with depth-variable structure in 3-D. We also use the P-first arrivals and polarity as additional information in modeling of velocities and location of the material interface, with structural constraints from seismic tomography at Parkfield [Thurber et al., 2004] on the bed-rock velocities. In grid-search modeling, we tested various values for fault zone depth, width, velocity, Q, and source location. The best-fit model parameters from this study show evidence of a damaged core zone on the main SAF, which likely extends to seismogenic depths. The zone is marked by a low-velocity waveguide ~150 m wide, in which Q is 10-50 and shear velocities are reduced by 30-45% from wall-rock velocities. We also find some seismic energy trapped and partitioned in the branching faults that connect to the San Andreas main fault at a shallow depth near Parkfield.

  13. Except in Highly Idealized Cases, Repeating Earthquakes and Laboratory Earthquakes are Neither Time- nor Slip-Predictable (United States)

    Rubinstein, J. L.; Ellsworth, W. L.; Beeler, N. M.; Chen, K. H.; Lockner, D. A.; Uchida, N.


    Sequences of repeating earthquakes in California, Taiwan and Japan are characterized by interevent times that are more regular than expected from a Poisson process, and are better described by a 2-parameter renewal model (mean rate and variability) of independent and identically distributed intervals that only depends on the time of the last event. Using precise measurements of the relative size of earthquakes in each repeating earthquake family we examine the additional predictive power of the time- and slip-predictable models. We find that neither model offers statistically significant predictive power over a renewal model. In a highly idealized laboratory system, we find that earthquakes are both time- and slip-predictable, but with the addition of a small amount of the complexity (e.g., an uneven fault surface) the time- and slip-predictable models offer little or no advantage over a much simpler renewal model that has constant slip or constant recurrence intervals. Given that repeating natural and laboratory earthquakes are not well explained by either time- or slip-predictability, we conclude that these models are too idealized to explain the recurrence behavior of natural earthquakes. These models likely fail because their key assumptions (1 -- constant loading rate, 2 -- constant failure threshold OR constant final stress, and 3 - the fault is locked throughout the loading cycle) are too idealized to apply in a complex, natural system. While the time- and slip-predictable models do not appear to work for natural earthquakes, we do note that moment (slip) scales with recurrence time according to the mean magnitude of each repeating earthquake family in Parkfield, CA, but not in the other locations. While earthquake size and recurrence time are related in Parkfield, the simplest slip-predictable model still doesn’t work because fitting a linear trend to the data predicts a non-zero earthquake size at instantaneous recurrence time. 
This scaling, its presence

  14. Analysis of nonvolcanic tremor on the San Andreas Fault near Parkfield, CA using U.S. Geological Survey Parkfield Seismic Array (United States)

    Fletcher, Jon B.; Baker, Lawrence M.


    background tremor signal and lasts about 5 s. These impulsive wavelets are similar to low-frequency earthquakes signals seen in Japan but appear to be broader band rather than just higher in low-frequency energy. They may be more appropriately called high-energy tremor (HET). HET signals at UPSAR correlate well with the record of this event from station GHIB of the HRSN borehole array at Parkfield and HETs typically have a higher cross-correlation coefficient than the rest of the tremor event. The amplitudes of a large HET are consistent with a magnitude of 0.1 when compared with a M2.3 event that had about the same epicenter. Polarizations of the tremor episode at UPSAR are mostly just north of east. Both linearity and azimuth evolve over time suggesting a change in tremor source location over time and linearity is typically higher at the HETs.

  15. Orientation of three-component geophones in the San Andreas Fault observatory at depth Pilot Hole, Parkfield, California (United States)

    Oye, V.; Ellsworth, W.L.


    To identify and constrain the target zone for the planned SAFOD Main Hole through the San Andreas Fault (SAF) near Parkfield, California, a 32-level three-component (3C) geophone string was installed in the Pilot Hole (PH) to monitor and improve the locations of nearby earthquakes. The orientation of the 3C geophones is essential for this purpose, because ray directions from sources may be determined directly from the 3D particle motion for both P and S waves. Due to the complex local velocity structure, rays traced from explosions and earthquakes to the PH show strong ray bending. Observed azimuths are obtained from P-wave polarization analysis, and ray tracing provides theoretical estimates of the incoming wave field. The differences between the theoretical and the observed angles define the calibration azimuths. To investigate the process of orientation with respect to the assumed velocity model, we compare calibration azimuths derived from both a homogeneous and a 3D velocity model. Uncertainties in the relative orientation between the geophone levels were also estimated for a cluster of 36 earthquakes that was not used in the orientation process. The comparison between the homogeneous and the 3D velocity model shows that there are only minor changes in these relative orientations. In contrast, the absolute orientations, with respect to global North, were significantly improved by application of the 3D model. The average data residual decreased from 13° to 7°, supporting the importance of an accurate velocity model. We explain the remaining residuals by methodological uncertainties, noise, and errors in the velocity model.
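The orientation step rests on P-wave polarization analysis: the leading eigenvector of the three-component covariance matrix in a window around the P arrival gives the observed particle-motion azimuth, which is then differenced with a ray-traced azimuth to get the calibration angle. A minimal synthetic sketch (hypothetical geometry, not the authors' code):

```python
import numpy as np

def p_polarization_azimuth(east, north, vertical):
    """Azimuth (degrees clockwise from north) of the principal particle-motion
    direction, from the leading eigenvector of the 3C covariance matrix."""
    C = np.cov(np.vstack([east, north, vertical]))
    _w, V = np.linalg.eigh(C)
    p = V[:, -1]                  # eigenvector of the largest eigenvalue
    if p[2] < 0.0:                # resolve the 180-degree sign ambiguity
        p = -p
    return np.degrees(np.arctan2(p[0], p[1])) % 360.0

# Synthetic upgoing P pulse: azimuth 60 deg, incidence 30 deg from vertical
t = np.linspace(0.0, 1.0, 200)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
inc, az = np.radians(30.0), np.radians(60.0)
e = pulse * np.sin(inc) * np.sin(az)
n = pulse * np.sin(inc) * np.cos(az)
z = pulse * np.cos(inc)
az_est = p_polarization_azimuth(e, n, z)
```

For a misoriented geophone the same estimate comes out rotated by the unknown sensor azimuth, so the difference between az_est and the ray-traced azimuth is exactly the calibration azimuth sought for each level.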

  16. Geochemistry of formation fluids from the SAFOD wells, Parkfield, California (United States)

    Thordsen, J. J.; Evans, W. C.; Kharaka, Y. K.


    Downhole and surface samples of water and gas were obtained from both the SAFOD pilot (open hole at total depth of 2.2 km) and the adjacent SAFOD well (open holes at well depths of 1443-1470 m, 3042-3059 m, and ~3.7 km), in order to investigate the origin of the fluids and their role in the dynamics of the San Andreas fault. The drilling fluids were tagged with fluorescein and Rhodamine WT tracers, and samples of well-filling solutions were analysed to allow for calculation of the contaminating effects of the drilling muds, concentrated ‘KCl’ and ‘CaCl2’ drilling solutions, dilute groundwater, and well cement. We used an evacuated Kuster sampler, and positive-displacement Westport samplers that allow for accurate determination of gas concentrations. Chemical data and water-level measurements in the SAFOD pilot as well as the shallow zone of SAFOD well indicated that no significant amount of formation water was produced. Moderate volumes of formation water and gas, however, were produced from the deeper sections of the SAFOD well. Results of chemical and isotope analyses show contamination and mixing with variable amounts of drilling mud and filling solutions. Mixing proportions, geochemical modeling, and comparison of water and gas data with those of samples obtained from wells and springs from the Parkfield area and California oil fields are used to calculate the compositions of formation water. Results show a Na-Ca-Cl type water with a salinity of ~23,000 mg/L TDS, very low Mg (2.1 mg/L) and carbonate alkalinity (~200 mg/L), but moderate SO4, high boron, and very high organic-acid anions, especially succinate. The least contaminated formation water obtained was from the well turnover prior to Phase III coring, on 6/24/2007. During this turnover, ‘CaCl2’ drilling fluid was pumped to the bottom of the well, which displaced to the surface at least 730 vertical meters of very homogenous formation water, that had entered the well bore above a packer and
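The contamination correction described here is, at its simplest, two-component mixing: a tracer present only in the drilling fluid fixes the drilling-fluid fraction of each sample, which is then stripped from every measured concentration. A sketch with hypothetical numbers (not the actual SAFOD values):

```python
def formation_concentration(c_sample, c_drill, tracer_sample, tracer_drill):
    """Two-component mixing correction: the tracer occurs only in the
    drilling fluid, so its dilution gives the drilling-fluid fraction
    f = tracer_sample / tracer_drill, and the formation end-member is
    c_form = (c_sample - f * c_drill) / (1 - f)."""
    f = tracer_sample / tracer_drill
    return (c_sample - f * c_drill) / (1.0 - f)

# Hypothetical numbers (mg/L): chloride in the sample and drilling fluid,
# with the tracer diluted from 10 to 2 (drilling-fluid fraction f = 0.2)
cl_formation = formation_concentration(12000.0, 2000.0, 2.0, 10.0)
```

The same unmixing is applied species by species; in practice several end-members (mud, 'KCl' and 'CaCl2' solutions, groundwater, cement) enter the balance, so the real calculation solves a multi-component version of this equation.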

  17. Triggering of repeating earthquakes in central California (United States)

    Wu, Chunquan; Gomberg, Joan; Ben-Naim, Eli; Johnson, Paul


    Dynamic stresses carried by transient seismic waves have been found capable of triggering earthquakes instantly in various tectonic settings. Delayed triggering may be even more common, but the mechanisms are not well understood. Catalogs of repeating earthquakes, earthquakes that recur repeatedly at the same location, provide ideal data sets to test the effects of transient dynamic perturbations on the timing of earthquake occurrence. Here we employ a catalog of 165 families containing ~2500 total repeating earthquakes to test whether dynamic perturbations from local, regional, and teleseismic earthquakes change recurrence intervals. The distance to the earthquake generating the perturbing waves is a proxy for the relative potential contributions of static and dynamic deformations, because static deformations decay more rapidly with distance. Clear changes followed the nearby 2004 Mw6 Parkfield earthquake, so we study only repeaters prior to its origin time. We apply a Monte Carlo approach to compare the observed number of shortened recurrence intervals following dynamic perturbations with the distribution of this number estimated for randomized perturbation times. We examine the comparison for a series of dynamic stress peak amplitude and distance thresholds. The results suggest a weak correlation between dynamic perturbations in excess of ~20 kPa and shortened recurrence intervals, for both nearby and remote perturbations.
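
    The Monte Carlo comparison described above can be sketched as follows. This is a minimal illustration, not the authors' code: the interval-counting rule, the `count_shortened` and `monte_carlo_p` names, and the synthetic catalog in the usage note are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def count_shortened(event_times, perturb_times, median_interval):
    """Count recurrence intervals that contain a perturbation and are
    shorter than the family's median interval."""
    event_times = np.asarray(event_times, float)
    perturb_times = np.asarray(perturb_times, float)
    intervals = np.diff(event_times)
    n_short = 0
    for t0, dt in zip(event_times[:-1], intervals):
        # did any perturbing wave arrive inside this recurrence interval?
        if np.any((perturb_times > t0) & (perturb_times < t0 + dt)):
            n_short += dt < median_interval
    return int(n_short)

def monte_carlo_p(event_times, perturb_times, n_trials=1000):
    """Compare the observed number of shortened intervals with the
    distribution obtained when perturbation times are randomized
    uniformly over the catalog span."""
    event_times = np.asarray(event_times, float)
    med = np.median(np.diff(event_times))
    observed = count_shortened(event_times, perturb_times, med)
    span = event_times[-1] - event_times[0]
    n_ge = 0
    for _ in range(n_trials):
        fake = event_times[0] + rng.uniform(0.0, span, size=len(perturb_times))
        n_ge += count_shortened(event_times, fake, med) >= observed
    return observed, n_ge / n_trials
```

    The returned fraction plays the role of a significance level: a small value means randomized perturbation times rarely produce as many shortened intervals as observed.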

  18. Heterogeneous slip and rupture models of the San Andreas fault zone based upon three-dimensional earthquake tomography

    Energy Technology Data Exchange (ETDEWEB)

    Foxall, William [Univ. of California, Berkeley, CA (United States)


    Crustal fault zones exhibit spatially heterogeneous slip behavior at all scales, slip being partitioned between stable frictional sliding, or fault creep, and unstable earthquake rupture. An understanding of the mechanisms underlying slip segmentation is fundamental to research into fault dynamics and the physics of earthquake generation. This thesis investigates the influence that large-scale along-strike heterogeneity in fault zone lithology has on slip segmentation. Large-scale transitions from the stable block sliding of the Central Creeping Section of the San Andreas fault to the locked 1906 and 1857 earthquake segments take place along the Loma Prieta and Parkfield sections of the fault, respectively, the transitions being accomplished in part by the generation of earthquakes in the magnitude range 6 (Parkfield) to 7 (Loma Prieta). Information on sub-surface lithology interpreted from the Loma Prieta and Parkfield three-dimensional crustal velocity models computed by Michelini (1991) is integrated with information on slip behavior provided by the distributions of earthquakes located using the three-dimensional models, and by surface creep data, to study the relationships between large-scale lithological heterogeneity and slip segmentation along these two sections of the fault zone.

  19. Chromosomal mapping of the human M6 genes

    Energy Technology Data Exchange (ETDEWEB)

    Olinsky, S.; Loop, B.T.; DeKosky, A. [Univ. of Pittsburgh, PA (United States)] [and others]


    M6 is a neuronal membrane glycoprotein that may have an important role in neural development. This molecule was initially defined by a monoclonal antibody that affected the survival of cultured cerebellar neurons and the outgrowth of neurites. The nature of the antigen was discovered by expression cDNA cloning using this monoclonal antibody, which identified two distinct murine M6 cDNAs (designated M6a and M6b) whose deduced amino acid sequences are remarkably similar to that of the myelin proteolipid protein. We have isolated human cDNA and genomic clones encoding M6a and M6b and have characterized them by restriction mapping, Southern hybridization with cDNA probes, and sequence analysis. We have localized these genes within the human genome by FISH (fluorescence in situ hybridization). The human M6a gene is located at 4q34, and the M6b gene is located at Xp22.2. A number of human neurological disorders have been mapped to the Xp22 region, including Aicardi syndrome (MIM 304050), Rett syndrome (MIM 312750), X-linked Charcot-Marie-Tooth neuropathy (MIM 302801), and X-linked mental retardation syndromes (MRX1, MIM 309530). This raises the possibility that a defect in the M6b gene is responsible for one of these neurological disorders. 8 refs., 3 figs.

  20. Strong motions and engineering structure performances in recent major earthquakes

    Institute of Scientific and Technical Information of China (English)

    Xiaojun Li


    In recent years, a series of major earthquakes occurred, which resulted in considerable engineering damage and collapse, triggered heavy geological hazards, and caused extremely high casualties and huge property and economic loss. The earthquakes include the 1994 Northridge earthquake (M6.8), the 1995 Kobe earthquake (M6.8), the 1999 Izmit earthquake (M7.6), the 1999 Jiji (Chi-Chi) earthquake (M7.6), the 2005 northern Pakistan earthquake (M7.6), the 2008 Wenchuan earthquake (M8.0) and the 2010 Haiti earthquake (M7.0). Some villages, towns and even cities were devastated in the earthquakes, especially in the 2005 northern Pakistan earthquake, the 2008 Wenchuan earthquake and the 2010 Haiti earthquake.

  1. The Negative Binomial Distribution as a Renewal Model for the Recurrence of Large Earthquakes (United States)

    Tejedor, Alejandro; Gómez, Javier B.; Pacheco, Amalio F.


    The negative binomial distribution is presented as the waiting time distribution of a cyclic Markov model. This cycle simulates the seismic cycle in a fault. As an example, this model, which can describe recurrences with aperiodicities between 0 and 0.5, is used to fit the Parkfield, California earthquake series in the San Andreas Fault. The performance of the model in the forecasting is expressed in terms of error diagrams and compared with other recurrence models from literature.
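
    The moment-matching step implied by such a fit can be sketched as below. The formulas follow from the negative binomial's mean and variance; the numerical values and the `nbinom_params` helper are illustrative assumptions, not the paper's actual fit to the Parkfield series.

```python
from scipy.stats import nbinom

def nbinom_params(mean, aperiodicity):
    """Moment-match a target mean and aperiodicity (std/mean) to negative
    binomial parameters (r, p); requires variance > mean (overdispersion)."""
    var = (aperiodicity * mean) ** 2
    if var <= mean:
        raise ValueError("negative binomial requires variance > mean")
    p = mean / var
    r = mean ** 2 / (var - mean)
    return r, p

# Illustrative numbers only (assumed, not the paper's fit): a ~24.5-unit
# mean recurrence with aperiodicity 0.38.
r, p = nbinom_params(24.5, 0.38)
prob_within_30 = nbinom.cdf(30, r, p)  # P(waiting time <= 30 time units)
```

    Because the aperiodicity equals the coefficient of variation, matching the first two moments pins down both parameters of the discrete waiting-time distribution.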

  2. Variations of terrestrial geomagnetic activity correlated to M6+ global seismic activity (United States)

    Cataldi, Gabriele; Cataldi, Daniele; Straser, Valentino


    As a result of a solar flare, coronal mass (a CME, or Coronal Mass Ejection) is expelled from the surface of the Sun and can be observed from the Earth with a white-light coronagraph. This ejected material can be described as an electrically charged cloud (plasma) composed mainly of electrons, protons and smaller quantities of heavier elements such as helium, oxygen and iron, which travels radially away from the Sun along the lines of the solar magnetic field and pushes into interplanetary space. Sometimes a CME reaches the Earth and causes major disturbances of its magnetosphere: compression on the side illuminated by the Sun and expansion on the night side. This interaction creates extensive disturbances of the Earth's geomagnetic field that can be detected by a radio receiver tuned to the ELF band (Extremely Low Frequency, 0-30 Hz). The Radio Emissions Project (a scientific research project founded in February 2009 by Gabriele Cataldi and Daniele Cataldi), analyzing changes in the Earth's geomagnetic field with an induction magnetometer tuned between 0.001 and 5 Hz (the band in which geomagnetic pulsations can be observed), detected a close relationship between these geomagnetic perturbations and global M6+ seismic activity. During the arrival of a CME at Earth, sudden and intense emissions are generated in the Earth's geomagnetic field, with a bandwidth between 0 and 15 Hz and an average duration of 2-8 hours, preceding M6+ earthquakes by 0-12 hours. Between 1 January 2012 and 31 December 2012, all M6+ earthquakes recorded on a global scale were preceded by this type of signal which, owing to their characteristics, have been called "Seismic Geomagnetic Precursors" (S.G.P.). The main feature of Seismic Geomagnetic Precursors is their close relationship with solar activity. In fact, because the S.G.P. are geomagnetic emissions, their temporal modulation depends

  3. Are Earthquake Clusters/Supercycles Real or Random? (United States)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.


    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which earthquake probability is small shortly after the previous event and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. The first is to assess whether the clusters result purely by chance from a time-independent process that has no "memory," so that a future earthquake is equally likely immediately after the past one or much later, and earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallett Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so that the inter-event times have a uniform distribution when the memorylessness property holds. The second is a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different.
The model parameters control the average time between events and the variation of the actual times around this average, so
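
    The CDF transformation used in the first approach can be sketched as follows, assuming the memoryless model is an exponential distribution fitted to the observed mean interval; the `memoryless_test` helper and the Kolmogorov-Smirnov check are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import kstest

def memoryless_test(event_times):
    """Fit an exponential (memoryless) model to the inter-event times and
    map them through its CDF; under a time-independent process the
    transformed values should be uniform on [0, 1], which is checked
    here with a one-sample KS test."""
    dt = np.diff(np.sort(np.asarray(event_times, float)))
    lam = 1.0 / dt.mean()
    u = 1.0 - np.exp(-lam * dt)
    return kstest(u, "uniform")
```

    A small KS p-value would argue against a purely memoryless process as the source of the apparent clusters.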

  4. Along-strike variations in fault frictional properties along the San Andreas Fault near Cholame, California from joint earthquake and low-frequency earthquake relocations (United States)

    Harrington, R. M.; Cochran, Elizabeth S.; Griffiths, E. M.; Zeng, X.; Thurber, C.


    Recent observations of low‐frequency earthquakes (LFEs) and tectonic tremor along the Parkfield–Cholame segment of the San Andreas fault suggest slow‐slip earthquakes occur in a transition zone between the shallow fault, which accommodates slip by a combination of aseismic creep and earthquakes, and the deep fault, which accommodates slip by stable sliding (>35 km depth). However, the spatial relationship between shallow earthquakes and LFEs remains unclear. Here, we present precise relocations of 34 earthquakes and 34 LFEs recorded during a temporary deployment of 13 broadband seismic stations from May 2010 to July 2011. We use the temporary array waveform data, along with data from permanent seismic stations and a new high‐resolution 3D velocity model, to illuminate the fine‐scale details of the seismicity distribution near Cholame and the relation to the distribution of LFEs. The depth of the boundary between earthquakes and LFE hypocenters changes along strike and roughly follows the 350°C isotherm, suggesting frictional behavior may be, in part, thermally controlled. We observe no overlap in the depth of earthquakes and LFEs, with an ∼5 km separation between the deepest earthquakes and shallowest LFEs. In addition, clustering in the relocated seismicity near the 2004 Mw 6.0 Parkfield earthquake hypocenter and near the northern boundary of the 1857 Mw 7.8 Fort Tejon rupture may highlight areas of frictional heterogeneities on the fault where earthquakes tend to nucleate.


  6. Multiscale site-response mapping: A case study of Parkfield, California (United States)

    Thompson, E.M.; Baise, L.G.; Kayen, R.E.; Morgan, E.C.; Kaklamanos, J.


    The scale of previously proposed methods for mapping site response ranges from global coverage down to individual urban regions. Typically, spatial coverage and accuracy are inversely related. We use the densely spaced strong-motion stations in Parkfield, California, to estimate the accuracy of different site-response mapping methods and demonstrate a method for integrating multiple site-response estimates from the site to the global scale. This method is simply a weighted mean of a suite of different estimates, where the weights are the inverse of the variance of the individual estimates. Thus, the dominant site-response model varies in space as a function of the accuracy of the different models. For mapping applications, site-response models should be judged in terms of both spatial coverage and the degree of correlation with observed amplifications. Performance varies with period, but in general the Parkfield data show that: (1) where a velocity profile is available, the square-root-of-impedance (SRI) method outperforms the measured VS30 (30 m divided by the S-wave travel time to 30 m depth) and (2) where velocity profiles are unavailable, the topographic slope method outperforms surficial geology for short periods, but geology outperforms slope at longer periods. We develop new equations to estimate site response from topographic slope, derived from the Next Generation Attenuation (NGA) database.
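
    The inverse-variance weighted mean described above can be sketched in a few lines; the `combine_estimates` helper and its NaN handling for models with no coverage at a site are assumptions for illustration.

```python
import numpy as np

def combine_estimates(estimates, variances):
    """Inverse-variance weighted mean of site-response estimates; models
    with no estimate at a site (NaN) simply drop out of the weighting."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    mask = ~np.isnan(est)
    w = 1.0 / var[mask]
    return float(np.sum(w * est[mask]) / np.sum(w))
```

    Because the weights are inverse variances, the most accurate available model dominates at each location, which is exactly the spatially varying behavior the abstract describes.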

  7. Integrated Program of Multidisciplinary Education and Research in Mechanics and Physics of Earthquakes (United States)

    Lapusta, N.


    Studying earthquake source processes is a multidisciplinary endeavor involving a number of subjects, from geophysics to engineering. As a solid mechanician interested in understanding earthquakes through physics-based computational modeling and comparison with observations, I need to educate and attract students from diverse areas. My CAREER award has provided the crucial support for the initiation of this effort. Applying for the award made me go through careful initial planning in consultation with my colleagues and administration from two divisions, an important component of the eventual success of my path to tenure. Then, the long-term support directed at my program as a whole - and not a specific year-long task or subject area - allowed for the flexibility required for a start-up of a multidisciplinary undertaking. My research is directed towards formulating realistic fault models that incorporate state-of-the-art experimental studies, field observations, and analytical models. The goal is to compare the model response - in terms of long-term fault behavior that includes both sequences of simulated earthquakes and aseismic phenomena - with observations, to identify appropriate constitutive laws and parameter ranges. CAREER funding has enabled my group to develop a sophisticated 3D modeling approach that we have used to understand patterns of seismic and aseismic fault slip on the Sunda megathrust in Sumatra, investigate the effect of variable hydraulic properties on fault behavior, with application to the Chi-Chi and Tohoku earthquakes, create a model of the Parkfield segment of the San Andreas fault that reproduces both long-term and short-term features of the M6 earthquake sequence there, and design experiments with laboratory earthquakes, among several other studies. A critical ingredient in this research program has been the fully integrated educational component that allowed me, on the one hand, to expose students from different backgrounds to the

  8. Global survey of earthquakes and non-volcanic tremor triggered by the 2008 Mw7.9 Wenchuan earthquake (United States)

    Jiang, T.; Peng, Z.; Wang, W.; Chen, Q.


    We perform a global survey of earthquakes and non-volcanic tremor triggered by the 2008 Mw7.9 Wenchuan earthquake. The analyzed data are obtained from the Global Seismic Network and various local and regional seismic networks around the world. We identify triggered earthquakes as impulsive seismic energy with clear P and S arrivals on 5 Hz high-pass-filtered three-component velocity seismograms, and triggered tremor as bursts of high-frequency, non-impulsive seismic energy that is coherent among many stations during the passage of teleseismic body and surface waves. We find widespread triggering of regular earthquakes within mainland China and elsewhere in the world. The triggered earthquakes mostly occur in tectonically active regions in northwest and northeast China. However, we also find clear evidence of triggered earthquakes in southeast China, which is not tectonically active. Our observations are consistent with previous studies of earthquake triggering (e.g., Gomberg et al., 2004; Velasco et al., 2008), indicating that dynamic triggering of earthquakes is ubiquitous and independent of the tectonic environment. In comparison, clear triggered tremor associated with the Wenchuan earthquake is found on the island of Taiwan (Chao and Peng, 2008), in southwest Japan, in Cascadia (Vidale et al., 2008), and around the Parkfield section of the San Andreas fault (Peng et al., 2008), where regular and/or triggered tremor has been found before. So far we have not found clear evidence of triggered tremor within mainland China. At least part of the reason could be severe clipping of the broadband waveforms during large-amplitude surface waves at many stations within 2000 km of the epicenter. Updated results will be presented at the meeting.


  10. Earthquake Facts (United States)

    The largest recorded earthquake in the United States was a magnitude 9.2 that struck Prince William Sound, ... we know, there is no such thing as "earthquake weather". Statistically, there is an equal distribution of ...

  11. Nowcasting earthquakes (United States)

    Rundle, J. B.; Turcotte, D. L.; Donnellan, A.; Grant Ludwig, L.; Luginbuhl, M.; Gong, G.


    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system and its current level of progress through the earthquake cycle. In our implementation of this idea, we use the global catalog of earthquakes, relying on "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 or more large earthquake cycles in the region. We then compute the earthquake potential score (EPS), which is defined as the cumulative probability distribution P(n < n(t)) for the current count n(t) of small earthquakes in the region. From the count of small earthquakes since the last large earthquake, we determine the value of EPS = P(n < n(t)). EPS is therefore the current level of hazard and assigns a number between 0% and 100% to every region so defined, thus providing a unique measure. Physically, the EPS corresponds to an estimate of the level of progress through the earthquake cycle in the defined region at the current time.
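
    A minimal sketch of the EPS computation, assuming the cumulative distribution P(n < n(t)) is estimated empirically from the small-earthquake counts of past completed cycles; the `earthquake_potential_score` helper is hypothetical.

```python
import numpy as np

def earthquake_potential_score(past_cycle_counts, current_count):
    """EPS = P(n < n(t)): the fraction of completed large-earthquake
    cycles whose small-earthquake count stayed below the current count
    n(t) accumulated since the last large earthquake."""
    return float(np.mean(np.asarray(past_cycle_counts) < current_count))
```

    An EPS near 1 means the current cycle has already accumulated more small events than almost every past cycle did before its large earthquake, i.e. the region is late in its cycle.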

  12. Improving Earthquake Stress Drop Measurements - What can we Really Resolve? (United States)

    Abercrombie, R. E.; Bannister, S. C.; Fry, B.; Ruhl, C. J.; Kozlowska, M.


    Earthquake stress drop is fundamental to understanding the physics of the rupture process. Although it is superficially simple to calculate an estimate of stress drop from the corner frequency of the radiated spectrum, it is much harder to be certain that measurements are reliable and accurate. The same is true of other measurements of stress drop and radiated energy. The large number of studies of earthquake stress drop, the high variability in results (~0.1-100 MPa), the large uncertainties, and the ongoing scaling controversy are evidence for this. We investigate the resolution and uncertainties of stress drops calculated using an empirical Green's function (EGF) approach. Earthquakes in 3 sequences at Parkfield, California are recorded by multiple borehole stations and have abundant smaller earthquakes to use as EGFs (Abercrombie, 2014). The earthquakes in the largest magnitude cluster (M~2.1) exhibit clear temporal variation of stress drop. Independent studies obtained a similar pattern implying that it is resolvable for these well-recorded, simple sources. The borehole data reveal a similar temporal pattern for another sequence, not resolvable in an earlier study using surface recordings. The earthquakes in the third sequence have complex sources; corner frequency measurements for this sequence are highly variable and poorly resolved. We use the earthquakes in the first cluster to quantify the uncertainties likely to arise in less optimal settings. The limited signal bandwidth and the quality of the EGF assumption are major sources of error. Averaging across multiple stations improves the resolution, as does using multiple good EGFs (Abercrombie, 2015). We adapt the approach to apply to larger data sets. We focus on New Zealand, with the aim of resolving stress drop variability in a variety of tectonic settings. We investigate stacking over stations and multiple EGFs, and compare earthquakes (M~3-6) from both the overlying and the subducting plates.

  13. A case study of alternative site response explanatory variables in Parkfield, California (United States)

    Thompson, E.M.; Baise, L.G.; Kayen, R.E.; Morgan, E.C.; Kaklamanos, J.


    The combination of densely-spaced strong-motion stations in Parkfield, California, and spectral analysis of surface waves (SASW) profiles provides an ideal dataset for assessing the accuracy of different site response explanatory variables. We judge accuracy in terms of spatial coverage and correlation with observations. The performance of the alternative models is period-dependent, but generally we observe that: (1) where a profile is available, the square-root-of-impedance method outperforms VS30 (average S-wave velocity to 30 m depth), and (2) where a profile is unavailable, the topographic-slope method outperforms surficial geology. The fundamental site frequency is a valuable site response explanatory variable, though less valuable than VS30. However, given the expense and difficulty of obtaining reliable estimates of VS30 and the relative ease with which the fundamental site frequency can be computed, the fundamental site frequency may prove to be a valuable site response explanatory variable for many applications. © 2011 ASCE.

  14. The possible statistical relation of Pc1 pulsations to Earthquake occurrence at low latitudes

    Directory of Open Access Journals (Sweden)

    J. Bortnik


    We examine the association between earthquakes and Pc1 pulsations observed at a low-latitude station in Parkfield, California. The period under examination is ~7.5 years in total, from February 1999 to July 2006, and we use an automatic identification algorithm to extract information on Pc1 pulsations from the magnetometer data. These pulsations are then statistically correlated to earthquakes from the USGS NEIC catalog within a radius of 200 km around the magnetometer, and M>3.0. Results indicate that there is an enhanced occurrence probability of Pc1 pulsations ~5–15 days in advance of the earthquakes, during the daytime. We quantify the statistical significance and show that such an enhancement is unlikely to have occurred due to chance alone. We then examine the effect of declustering our earthquake catalog, and show that even though significance decreases, there is still a statistically significant daytime enhancement prior to the earthquakes. Finally, we select only daytime Pc1 pulsations as the fiducial time of our analysis, and show that earthquakes are ~3–5 times more likely to occur in the week following these pulsations, than normal. Comparing these results to other events, it is preliminarily shown that the normal earthquake probability is unaffected by geomagnetic activity, or a random event sequence.

  15. Seismic Attenuation in the Parkfield area of the San Andreas Fault (United States)

    Kelly, C. M.; Rietbrock, A.; Faulkner, D. R.


    Fault zone structure and rock properties at depth within the Parkfield area of the San Andreas Fault are investigated through a seismic attenuation study. Attenuation is sensitive to the degree of fracturing, water saturation and other rock properties. The Parkfield area is of interest because it marks the boundary between the creeping section of the San Andreas Fault and a section that ruptured seismically in 1966 and again in 2004. It is also the area of the SAFOD drilling project. Previous studies of this area have suggested a complex picture of fault strands linking at depth and small bodies of high-velocity material (e.g. Li et al. 1997, Michael & Eberhart-Phillips 1991). Various temporary and local seismic networks have been installed in the region, and data from the PASO, PASO TRES and HRSN networks are used in this study. The PASO data run from 2001-2002 at a sampling rate of 100 sps. The PASO TRES data span the period 2004-2006 at 200 sps. The HRSN network has been running from March 2001 to the present, sampling at 250 sps. Attenuation parameters (e.g. Q-values) are established using the spectral-ratio technique. A window of 1.28 seconds around each event arrival is extracted, together with a window of the same length from the noise directly preceding it. Instrument-corrected frequency spectra from both the event and the noise are smoothed with a logarithmically scaled smoothing function. Only frequencies with a signal/noise ratio of 3 or above are used. The ratio between frequency spectra from event arrivals and synthetic frequency spectra of known seismic parameters is determined. A grid-search method is used to fit the event corner frequency, searching within a range of corner frequencies implied by the reported event magnitude and assuming a stress drop of between 0.1 and 10 MPa. A Brune source model is assumed (gamma=2, n=1) for the source spectra (Brune 1970).
When the correct corner frequency is fitted, there should be a linear relationship between frequency and the
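
    A minimal sketch of the Brune-model spectrum and a corner-frequency grid search, assuming the gamma=2, n=1 source model stated above; the function names, the analytic fit of the spectral plateau, and the least-squares misfit in log space are illustrative choices, not the study's exact implementation.

```python
import numpy as np

def brune_spectrum(f, omega0, fc, gamma=2.0, n=1.0):
    """Brune (1970)-type source spectrum: a flat plateau omega0 at low
    frequency, decaying as f**-(gamma*n) above the corner frequency fc."""
    return omega0 / (1.0 + (f / fc) ** (gamma * n))

def grid_search_fc(f, observed, fc_grid):
    """Least-squares grid search over corner frequency in log-spectral
    space (gamma=2, n=1 assumed); the plateau omega0 is fit analytically
    for each trial fc."""
    best_fc, best_misfit = None, np.inf
    for fc in fc_grid:
        shape = 1.0 / (1.0 + (f / fc) ** 2)
        # best-fitting plateau in log space for this trial corner frequency
        omega0 = np.exp(np.mean(np.log(observed) - np.log(shape)))
        misfit = np.sum((np.log(observed) - np.log(omega0 * shape)) ** 2)
        if misfit < best_misfit:
            best_fc, best_misfit = fc, misfit
    return best_fc
```

    In practice the observed spectrum would first be divided by the noise-checked, instrument-corrected reference, but the grid-search logic is the same.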

  16. The Elmore Ranch and Superstition Hills earthquakes of 24 November 1987: Introduction to the special issue


    Hanks, Thomas C.; Allen, Clarence R.


    On 24 November 1987, two significant earthquakes occurred along the southern San Jacinto fault zone and related structural elements in southern California, not far from the International Border. These two events, the Elmore Ranch earthquake (M = 6.2 at 0154 GMT) and the Superstition Hills earthquake (M = 6.6 at 1315 GMT, both moment magnitudes from Sipkin, 1989), and their aftershocks have yielded a rich harvest of geological, seismological, and engineering data pertinent to the cause and ...

  17. Transportations Systems Modeling and Applications in Earthquake Engineering (United States)


    earthquake (Japan) The 1995 Hanshin-Awaji earthquake (Mw 6.8) in the Osaka-Kobe area had an even greater impact on the transportation systems compared...with the Loma Prieta and Northridge earthquakes in the U.S. The span collapses of the elevated Osaka-Kobe expressway (Route 3) caused long-time...nation's economy and society. The numerical case study focuses on the road network in the Memphis metropolitan area. The road network information

  18. Earthquake Declustering via a Nearest-Neighbor Approach in Space-Time-Magnitude Domain (United States)

    Zaliapin, I. V.; Ben-Zion, Y.


    We propose a new method for earthquake declustering based on nearest-neighbor analysis of earthquakes in space-time-magnitude domain. The nearest-neighbor approach was recently applied to a variety of seismological problems that validate the general utility of the technique and reveal the existence of several different robust types of earthquake clusters. Notably, it was demonstrated that clustering associated with the largest earthquakes is statistically different from that of small-to-medium events. In particular, the characteristic bimodality of the nearest-neighbor distances that helps separating clustered and background events is often violated after the largest earthquakes in their vicinity, which is dominated by triggered events. This prevents using a simple threshold between the two modes of the nearest-neighbor distance distribution for declustering. The current study resolves this problem hence extending the nearest-neighbor approach to the problem of earthquake declustering. The proposed technique is applied to seismicity of different areas in California (San Jacinto, Coso, Salton Sea, Parkfield, Ventura, Mojave, etc.), as well as to the global seismicity, to demonstrate its stability and efficiency in treating various clustering types. The results are compared with those of alternative declustering methods.
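
    The nearest-neighbor distance underlying this kind of analysis (a Zaliapin-style space-time-magnitude rescaling) can be sketched as below; the b-value and fractal-dimension parameters, the minimum-distance floor, and the `nn_distances` helper are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def nn_distances(times, x, y, mags, b=1.0, d_f=1.6):
    """Space-time-magnitude nearest-neighbor distance: for each event j,
    the minimum over earlier events i of dt * r**d_f * 10**(-b * m_i),
    where dt is the time lag and r the epicentral distance."""
    times, x, y, mags = map(np.asarray, (times, x, y, mags))
    n = len(times)
    eta = np.full(n, np.inf)
    parent = np.full(n, -1)
    for j in range(1, n):
        for i in range(j):
            dt = times[j] - times[i]
            if dt <= 0:
                continue  # only earlier events can be parents
            r = max(np.hypot(x[j] - x[i], y[j] - y[i]), 1e-3)
            d_ij = dt * r ** d_f * 10.0 ** (-b * mags[i])
            if d_ij < eta[j]:
                eta[j], parent[j] = d_ij, i
    return eta, parent
```

    Declustering then amounts to thresholding eta: events whose nearest-neighbor distance falls in the small (clustered) mode are treated as triggered, the rest as background.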

  19. Analog earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Hofmann, R.B. [Center for Nuclear Waste Regulatory Analyses, San Antonio, TX (United States)]


    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  20. Disaster triggers disaster: Earthquake triggering by tropical cyclones (United States)

    Wdowinski, S.; Tsukanov, I.


    Three recent devastating earthquakes, the 1999 M=7.6 Chi-Chi (Taiwan), the 2010 M=7.0 Leogane (Haiti), and the 2010 M=6.4 Kaohsiung (Taiwan) events, and three additional moderate-size earthquakes (6<M<7), occurred after very wet tropical cyclones (hurricanes or typhoons) hit the very same areas. The most familiar example is Haiti, which was hit during the late summer of 2008 by two hurricanes and two tropical storms (Fay, Gustav, Hanna and Ike) within 25 days. A year and a half after this very wet hurricane season, the 2010 Leogane earthquake occurred in Haiti's mountainous southern peninsula and caused the death of more than 300,000 people. The other cases are from Taiwan, which is characterized by a high level of seismicity and frequent typhoon landfalls. The three wettest typhoons in Taiwan's past 50 years were Morakot (2009, with 2885 mm of rain), Flossie (1969, 2162 mm) and Herb (1996, 1987 mm) [Lin et al., 2010]. Each of these three very wet storms was followed by one or two M>6 main shocks in the central mountainous area of Taiwan within three years of the typhoon. The 2009 Morakot typhoon was followed by the 2009 M=6.2 Nantou and 2010 M=6.4 Kaohsiung earthquakes; the 1969 Flossie typhoon was followed by an M=6.3 earthquake in 1972; and the 1996 Herb typhoon by the 1998 M=6.2 Rueyli and 1999 M=7.6 Chi-Chi earthquakes. The earthquake catalog of Taiwan lists only two other M>6 main shocks in Taiwan's central mountainous belt; one of them occurred in 1964, only four months after the wet Typhoon Gloria poured heavy rain on the same area. We suggest that the close proximity in time and space between wet tropical cyclones and earthquakes reflects a physical link between the two hazard types, in which these earthquakes were triggered by rapid erosion induced by the tropical cyclones' heavy rain. Based on remote sensing observations, meshfree finite element modeling, and Coulomb failure stress analysis, we show that the erosion induced by very wet cyclones increased the failure stresses at the
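The Coulomb failure stress criterion invoked above can be written down compactly. A minimal sketch, assuming the standard effective-friction form ΔCFS = Δτ + μ′Δσn (unclamping positive); the numbers and the friction coefficient are illustrative, not results from the paper's meshfree finite-element modeling:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault (MPa).

    d_shear  : shear-stress change in the slip direction (positive promotes slip)
    d_normal : normal-stress change (positive = unclamping, as when erosion
               removes overburden above a thrust fault)
    mu_eff   : effective friction coefficient (assumed illustrative value)
    """
    return d_shear + mu_eff * d_normal

# Surface unloading by cyclone-driven erosion can both add shear stress and
# unclamp a thrust fault, nudging it toward failure (hypothetical numbers):
delta_cfs = coulomb_stress_change(d_shear=0.05, d_normal=0.1)
```

A positive ΔCFS brings the receiver fault closer to failure; the triggering argument is that erosional unloading after very wet storms produces exactly such a positive change on the faults that later ruptured.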

  1. Potential for a large earthquake near Los Angeles inferred from the 2014 La Habra earthquake (United States)

    Grant Ludwig, Lisa; Parker, Jay W.; Rundle, John B.; Wang, Jun; Pierce, Marlon; Blewitt, Geoffrey; Hensley, Scott


    Tectonic motion across the Los Angeles region is distributed across an intricate network of strike-slip and thrust faults that will be released in destructive earthquakes similar to or larger than the 1933 M6.4 Long Beach and 1994 M6.7 Northridge events. Here we show that Los Angeles regional thrust, strike-slip, and oblique faults are connected and move concurrently with measurable surface deformation, even in moderate magnitude earthquakes, as part of a fault system that accommodates north-south shortening and westerly tectonic escape of northern Los Angeles. The 28 March 2014 M5.1 La Habra earthquake occurred on a northeast-striking, northwest-dipping left-lateral oblique thrust fault northeast of Los Angeles. We present crustal deformation observations spanning the earthquake showing that concurrent deformation occurred on several structures in the shallow crust. The seismic moment of the earthquake is 82% of the total geodetic moment released. Slip within the unconsolidated upper sedimentary layer may reflect shallow release of accumulated strain on still-locked deeper structures. A future M6.1-6.3 earthquake would account for the accumulated strain. Such an event could occur on any one or several of these faults, which may not have been identified by geologic surface mapping. PMID:27981074
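The moment bookkeeping above (seismic moment as a fraction of geodetic moment, and the magnitude of a future event that would account for the deficit) rests on the standard Hanks-Kanamori relation between moment magnitude and scalar seismic moment. A sketch with illustrative numbers: the 82% figure is from the abstract, but the absolute geodetic moment below is hypothetical.

```python
import math

def moment_from_mw(mw):
    """Scalar seismic moment M0 in N*m from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.1)

def mw_from_moment(m0):
    """Moment magnitude from scalar seismic moment M0 in N*m."""
    return (math.log10(m0) - 9.1) / 1.5

# If the seismic moment released is 82% of the geodetic moment, the magnitude
# of an event releasing the remaining fraction could be estimated like this
# (the geodetic moment value here is assumed purely for illustration):
geodetic_moment = 2.0e18  # N*m, hypothetical
residual_moment = (1.0 - 0.82) * geodetic_moment
deficit_mw = mw_from_moment(residual_moment)
```

The same conversion run in reverse shows how narrow a magnitude band (such as the abstract's M6.1-6.3) corresponds to roughly a factor-of-two range in moment.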

  2. A 30-year history of earthquake crisis communication in California and lessons for the future (United States)

    Jones, L.


    The first statement from the US Geological Survey to the California Office of Emergency Services quantifying the probability of a possible future earthquake was made in October 1985, about the probability (approximately 5%) that a M4.7 earthquake located directly beneath the Coronado Bay Bridge in San Diego would be a foreshock to a larger earthquake. In the next 30 years, publication of aftershock advisories became routine, and formal statements about the probability of a larger event were developed in collaboration with the California Earthquake Prediction Evaluation Council (CEPEC) and sent to CalOES more than a dozen times. Most of these were subsequently released to the public. These communications have spanned a variety of approaches, with and without quantification of the probabilities, and using different ways to express the spatial extent and the magnitude distribution of possible future events. The USGS is re-examining its approach to aftershock probability statements and to operational earthquake forecasting, with the goal of creating pre-vetted automated statements that can be released quickly after significant earthquakes. All of the previous formal advisories were written during the earthquake crisis. The time to create and release a statement became shorter with experience after the first public advisory (for the 1988 Lake Elsman earthquake), which was released 18 hours after the triggering event, but the process was never completed in less than 2 hours. As was done for the Parkfield experiment, the process will be reviewed by CEPEC and NEPEC (National Earthquake Prediction Evaluation Council) so that statements can be sent to the public automatically. This talk will review the advisories, the variations in wording, and the public response, and compare these with social science research about successful crisis communication, to create recommendations for future advisories.

  3. Record-breaking earthquake intervals in a global catalogue and an aftershock sequence

    Directory of Open Access Journals (Sweden)

    M. R. Yoder


    Full Text Available For the purposes of this study, an interval is the elapsed time between two earthquakes in a designated region; the minimum magnitude for the earthquakes is prescribed. A record-breaking interval is one that is longer (or shorter) than preceding intervals; a starting time must be specified. We consider global earthquakes with magnitudes greater than 5.5 and show that the record-breaking intervals are well estimated by a Poissonian (random) theory. We also consider the aftershocks of the 2004 Parkfield earthquake and show that the record-breaking intervals are approximated by very different statistics. In both cases, we calculate the number of record-breaking intervals (nrb) and the record-breaking interval durations (Δtrb) as a function of "natural time", the number of elapsed events. We also calculate the ratio of record-breaking long intervals to record-breaking short intervals as a function of time, r(t), which is suggested to be sensitive to trends in noisy time series data. Our data indicate a possible precursory signal to large earthquakes that is consistent with accelerated moment release (AMR) theory.
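The record-breaking bookkeeping described above is simple to state in code. A minimal sketch, with the common convention that the first interval counts as both a record-long and a record-short; for a Poissonian sequence the expected number of records grows roughly as the logarithm of the number of intervals, which is the baseline the abstract's global catalogue is compared against.

```python
def record_breaking_counts(intervals):
    """Count record-breaking long and short inter-event intervals.

    Scans the intervals in natural time (event order) and counts how many set
    a new maximum (record-long) or a new minimum (record-short).
    Returns (n_long, n_short).
    """
    n_long = n_short = 0
    longest = shortest = None
    for dt in intervals:
        if longest is None or dt > longest:
            longest, n_long = dt, n_long + 1
        if shortest is None or dt < shortest:
            shortest, n_short = dt, n_short + 1
    return n_long, n_short

def record_ratio(intervals):
    """Ratio of record-long to record-short counts, a trend-sensitive statistic
    like the r(t) of the abstract (returns None if there are no record-shorts)."""
    n_long, n_short = record_breaking_counts(intervals)
    return n_long / n_short if n_short else None
```

A quiescence (lengthening intervals) pushes the ratio above 1, while an accelerating sequence such as an aftershock cascade pushes it below 1.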

  4. Earthquake Hazards Program: Earthquake Scenarios (United States)

    U.S. Geological Survey, Department of the Interior — A scenario represents one realization of a potential future earthquake by assuming a particular magnitude, location, and fault-rupture geometry and estimating...

  5. Constraining earthquake source inversions with GPS data: 1. Resolution-based removal of artifacts (United States)

    Page, M.T.; Custodio, S.; Archuleta, R.J.; Carlson, J.M.


    We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield earthquake. This earthquake was recorded at thirteen 1-Hz GPS receivers, which provides for a truly coseismic data set that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS data set on the nonuniform grid and analyze the errors in the final model. Copyright 2009 by the American Geophysical Union.
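The resolution argument above can be illustrated with a toy two-parameter damped least-squares problem: one model parameter (say, shallow slip) to which the stations are sensitive, and one (deep slip) to which they are barely sensitive. This is a hedged sketch of the generic diagnostic diag(R), with R = (GᵀG + ε²I)⁻¹GᵀG, not the study's actual inversion; the sensitivity rows below are hypothetical.

```python
def resolution_diagonal_2param(G, eps=0.1):
    """Diagonal of the model-resolution matrix R = (G^T G + eps^2 I)^-1 G^T G
    for a two-parameter damped least-squares inversion.

    G is a list of (g_shallow, g_deep) sensitivity rows, one per station.
    diag(R) near 1 means the parameter is well resolved; near 0, poorly
    resolved (its estimate is controlled by the damping, not the data).
    """
    # Normal matrix G^T G (2x2, symmetric)
    a = sum(g[0] * g[0] for g in G)
    b = sum(g[0] * g[1] for g in G)
    c = sum(g[1] * g[1] for g in G)
    # Damped normal matrix and its analytic 2x2 inverse
    a_d, c_d = a + eps ** 2, c + eps ** 2
    det = a_d * c_d - b * b
    inv = ((c_d / det, -b / det), (-b / det, a_d / det))
    # diag(R) = diag(inv @ [[a, b], [b, c]])
    r00 = inv[0][0] * a + inv[0][1] * b
    r11 = inv[1][0] * b + inv[1][1] * c
    return r00, r11
```

With stations strongly coupled to the shallow parameter and weakly to the deep one, the shallow diagonal comes out near 1 and the deep one near 0, mirroring the abstract's finding that resolution is poor at depth; matching the grid spacing to this local resolution length is what removes the spurious deep "asperities".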

  6. Geoelectric Anomalies Preceding the Aug. 24 2016 Amatrice, Italy Earthquake (United States)

    Scoville, J.; Bobrovskiy, V.; Freund, F. T.


    We report on geoelectric measurements taken at 70 and 120 km from the epicenter of the M6.2 Amatrice Central Italy Earthquake Aug. 24, 2016. Two stations, each consisting of 12 buried electrodes at depths of 1 to 3 meters, recorded ground EMF values once per second for approximately one year prior to the earthquake. Several geoelectric anomalies suggest the incidence of seismic electric signals in the weeks leading up to the earthquake. Notably, EMF values in the DC regime deviated progressively farther from baseline levels and AC components exhibited episodes of significant nonstationarity in their frequency spectra as the earthquake approached.

  7. Spatial Evaluation and Verification of Earthquake Simulators (United States)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.


    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we approach the challenges in spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a decaying rate with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed M>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
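The receiver-operating-characteristic comparison described above can be sketched generically: rank forecast cells by rate, sweep an alarm threshold through that ranking, and trace hit rate against false-alarm rate. The cell names and rates below are hypothetical; this is the standard ROC construction, not the paper's specific gridding.

```python
def roc_points(rates, observed):
    """ROC curve for a gridded rate forecast.

    rates    : dict mapping cell id -> forecast rate
    observed : set of cell ids that contain observed target earthquakes
    Cells above each threshold are 'alarmed'. Hit rate is the fraction of
    observed cells alarmed; false-alarm rate is the fraction of empty cells
    alarmed. Assumes at least one observed and one empty cell.
    """
    cells = sorted(rates, key=rates.get, reverse=True)  # highest rate first
    n_obs = len(observed)
    n_empty = len(cells) - n_obs
    hits = false_alarms = 0
    points = [(0.0, 0.0)]
    for cell in cells:
        if cell in observed:
            hits += 1
        else:
            false_alarms += 1
        points.append((false_alarms / n_empty, hits / n_obs))
    return points
```

A forecast that concentrates high rates on the cells where earthquakes actually occurred climbs steeply toward (0, 1); a forecast no better than chance tracks the diagonal, which is how the nearest-neighbor mapping and the ETAS smoothing can be ranked against each other.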


  9. Distribution characteristics of earthquake-induced landslide with the earthquake source fault-the cases of recent strong earthquakes in eastern Japan (United States)

    Hasi, B.; Ishii, Y.; Maruyama, K.; Terada, H.


    In recent years, 3 strong earthquakes, the Mid-Niigata earthquake (M6.8, October 23, 2004), the Noto Peninsula earthquake (M6.9, March 25, 2007), and the Chuetsu-offshore earthquake (M6.8, July 16, 2007), struck eastern Japan. All of these earthquakes occurred inland on reverse faults, with hypocenter depths of 11-17 km, triggered a large number of landslides, and caused serious damage to the affected regions. To clarify the distribution characteristics of landslides induced by these earthquakes, we interpreted landslides using aerial photographs taken immediately after the earthquakes, and analyzed landslide distributions with respect to the peak ground acceleration (PGA), the seismic intensity (on the Japan Meteorological Agency intensity scale), and the source fault of the mainshock of each earthquake. The analysis revealed that: 1) Most of the landslides occurred in areas where the PGA is larger than 500 gal and the maximum seismic intensity is larger than 5 upper; 2) The landslides occurred a short distance from the source fault (the shortest distance from the surface projection of the top tip of the fault); about 80% occurred within a distance of 20 km; 3) More than 80% of the landslides occurred on the hanging wall, and the size of landslides (length, width, area) is larger there than on the footwall of the source fault; 4) The number and size of landslides tend to decrease with distance from the source fault. Our results suggest that the distance from the source fault could serve as a parameter for analyzing landslide occurrence induced by strong earthquakes.

  10. Earthquake engineering research: 1982 (United States)

    The Committee on Earthquake Engineering Research addressed two questions: what progress has research produced in earthquake engineering, and which elements of the problem should future earthquake engineering research pursue. It examined and reported on, in separate chapters of the report: Applications of Past Research, Assessment of Earthquake Hazard, Earthquake Ground Motion, Soil Mechanics and Earth Structures, Analytical and Experimental Structural Dynamics, Earthquake Design of Structures, Seismic Interaction of Structures and Fluids, Social and Economic Aspects, Earthquake Engineering Education, and Research in Japan.

  11. Induced earthquake during the 2016 Kumamoto earthquake (Mw7.0): Importance of real-time shake monitoring for Earthquake Early Warning (United States)

    Hoshiba, M.; Ogiso, M.


    The sequence of the 2016 Kumamoto earthquakes (Mw6.2 on April 14, Mw7.0 on April 16, and many aftershocks) caused devastating damage in Kumamoto and Oita prefectures, Japan. During the Mw7.0 event, just after the direct S waves passed central Oita, another M6-class event occurred there, more than 80 km away from the Mw7.0 event. The M6 event is interpreted as an induced earthquake, but it brought stronger shaking to central Oita than the Mw7.0 event did. We discuss the induced earthquake from the viewpoint of Earthquake Early Warning (EEW). In terms of ground shaking such as PGA and PGV, the Mw7.0 event produced much smaller motions at central Oita than the M6 induced earthquake (for example, 1/8 the PGA at station OIT009), so it is easy to discriminate between the two events. However, the PGD of the Mw7.0 event is larger than that of the induced earthquake, and it appears just before the occurrence of the induced earthquake. It is quite difficult to recognize the induced earthquake from displacement waveforms alone, because the displacement is strongly contaminated by that of the preceding Mw7.0 event. In many EEW methods (including the current JMA EEW system), magnitude is used to predict ground shaking through a Ground Motion Prediction Equation (GMPE), and the magnitude is often estimated from displacement. However, displacement magnitude is not necessarily the best predictor of ground shaking such as PGA and PGV. In the case of the induced earthquake during the Kumamoto sequence, displacement magnitude could not be estimated because of the strong contamination, and the JMA EEW system did not recognize the induced earthquake. One of the important lessons we learned from eight years of EEW operation is the issue of multiple simultaneous earthquakes, such as the aftershocks of the 2011 Mw9.0 Tohoku earthquake.
Based on this lesson, we have proposed enhancement of real-time monitoring of ground shaking itself instead of rapid estimation of

  12. Connecting slow earthquakes to huge earthquakes. (United States)

    Obara, Kazushige; Kato, Aitaro


    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes.


  14. Application of the M6T Tracker to Simulated and Experimental Multistatic Sonar Data

    NARCIS (Netherlands)

    Theije, P.A.M. de; Kester, L.J.H.M.; Bergmans, J.


    This paper describes the first results of applying a multi-sensor multi-hypothesis tracker, called M6T, to simulated and experimental sonar data sets. The simulated data have been generated in the context of the Multistatic Tracking Working Group (MSTWG). For a number of cases (number of sensors and

  15. Structures of the m(6)A Methyltransferase Complex: Two Subunits with Distinct but Coordinated Roles. (United States)

    Zhou, Katherine I; Pan, Tao


    In this issue of Molecular Cell, Wang et al. (2016a) report crystal structures of the core of the METTL3/METTL14 m(6)A methyltransferase complex and propose how the two subunits interact and cooperate to bind and methylate RNA.


    Directory of Open Access Journals (Sweden)

    I. I. Lipatov


    Full Text Available An introduction to the results of a numerical study of buffeting on the ONERA M6 wing, determining the conditions for the appearance of buffeting, i.e. oscillation modes of a shock interacting with the boundary layer, at various Mach numbers and angles of attack.

  17. Genome Sequence of Selenium-Solubilizing Bacterium Caulobacter vibrioides T5M6

    DEFF Research Database (Denmark)

    Wang, Yihua; Qin, Yanan; Kot, Witold


    Caulobacter vibrioides T5M6 is a Gram-negative strain that strongly solubilizes selenium (Se) mineral into Se(IV) and was isolated from a selenium mining area in Enshi, southwest China. This strain produces the phytohormone IAA and promotes plant growth. Here we present the genome of this strain ...

  18. Remotely triggered microearthquakes and tremor in central California following the 2010 Mw 8.8 Chile earthquake (United States)

    Peng, Zhigang; Hill, David P.; Shelly, David R.; Aiken, Chastity


    We examine remotely triggered microearthquakes and tectonic tremor in central California following the 2010 Mw 8.8 Chile earthquake. Several microearthquakes near the Coso Geothermal Field were apparently triggered, with the largest earthquake (Ml 3.5) occurring during the large-amplitude Love surface waves. The Chile mainshock also triggered numerous tremor bursts near the Parkfield-Cholame section of the San Andreas Fault (SAF). The locally triggered tremor bursts are partially masked at lower frequencies by the regionally triggered earthquake signals from Coso, but can be identified by applying high-pass or matched filters. Both triggered tremor along the SAF and the Ml 3.5 earthquake in Coso are consistent with frictional failure at different depths on critically-stressed faults under the Coulomb failure criteria. The triggered tremor, however, appears to be more phase-correlated with the surface waves than the triggered earthquakes, likely reflecting differences in constitutive properties between the brittle, seismogenic crust and the underlying lower crust.

  19. Is Earthquake Triggering Driven by Small Earthquakes?

    CERN Document Server

    Helmstetter, A


    Using a catalog of seismicity for Southern California, we measure how the number of triggered earthquakes increases with the earthquake magnitude. The trade-off between this scaling and the distribution of earthquake magnitudes controls the relative role of small compared to large earthquakes. We show that seismicity triggering is driven by the smallest earthquakes, which trigger fewer aftershocks than larger earthquakes, but which are much more numerous. We propose that the non-trivial scaling of the number of aftershocks emerges from the fractal spatial distribution of aftershocks.
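The trade-off the abstract describes, between per-event productivity (each magnitude-m event triggering on the order of 10^(αm) aftershocks) and Gutenberg-Richter abundance (event counts falling off as 10^(−bm)), reduces to the sign of α − b. A sketch with illustrative α and b values, not the paper's measured scaling:

```python
def triggering_contribution(m, alpha=0.8, b=1.0, m_min=2.0):
    """Relative contribution of magnitude-m events to the total number of
    triggered aftershocks, normalized to the smallest magnitude bin m_min.

    Event abundance scales as 10**(-b*m) (Gutenberg-Richter) and per-event
    productivity as 10**(alpha*m), so a magnitude bin contributes on the
    order of 10**((alpha - b) * m). With alpha < b, the many small events
    dominate total triggering, the regime argued for in the abstract.
    (alpha, b, m_min here are illustrative values.)
    """
    return 10.0 ** ((alpha - b) * (m - m_min))
```

With α < b the contribution decays with magnitude, so summing over bins is dominated by the smallest resolved events; with α > b the largest events would dominate instead.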

  20. Predictable earthquakes? (United States)

    Martini, D.


    acceleration) and the global number of earthquakes for this period from the published literature, which gives us a good picture of the dynamical geophysical phenomena. Methodology: Computing linear correlation coefficients gives us a chance to quantitatively characterize the relation among the data series, if we suppose a linear dependence as a first step. The correlation coefficients among the Earth's rotational acceleration, the Z-orbit acceleration (perpendicular to the ecliptic plane), and the global number of earthquakes were compared. The results clearly demonstrate a common feature of both the Earth's rotation and the Earth's Z-acceleration around the Sun, and also between the Earth's rotational acceleration and the earthquake number. This might indicate a strong relation among these phenomena. The rather strong correlation (r = 0.75) and the 29-year period (Saturn's synodic period) were clearly shown in the computed cross-correlation function, which gives the dynamical characteristic of the correlation between the Earth's orbital (Z-direction) and rotational acceleration. This basic 29-year period was also obvious in the earthquake number data sets, with clear common features in time. Conclusion: The core, which is involved in the secular variation of the Earth's magnetic field, is the only sufficiently mobile part of the Earth with sufficient mass to modify the rotation, which probably affects the global time distribution of earthquakes. This may mean that the secular variation of earthquakes is inseparable from changes in the Earth's magnetic field, i.e., the interior processes of the Earth's core belong to the dynamical state of the solar system. Therefore, if the described idea is real, the global distribution of earthquakes in time is predictable.
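The linear correlation coefficient invoked in the methodology above is the standard Pearson r. A self-contained sketch of its computation:

```python
import math

def pearson_r(x, y):
    """Pearson linear correlation coefficient between two equal-length series.

    Returns a value in [-1, 1]: +1 for a perfect increasing linear relation,
    -1 for a perfect decreasing one, near 0 for no linear relation.
    """
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

Note that a Pearson r of 0.75 between two time series with shared periodicity measures co-variation only; it does not by itself establish a causal link, which is why such claims require independent physical support.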

  1. Defeating Earthquakes (United States)

    Stein, R. S.


    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the 2010 M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake.
GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  2. Comment on "Astronomical alignments as the cause of ~M6+ seismicity"

    CERN Document Server

    Zanette, Damian H


    It is shown that, according to the criteria used by M. Omerbashich (arXiv:1104.2036v4 [physics.gen-ph]), during 2010 the Earth was aligned with at least one pair of planets some 98.6% of the time. This firmly supports Omerbashich's claim that the strongest earthquakes of 2010 occurred during such astronomical alignments. On this basis, we argue that seismicity is, generally, a phenomenon of astrological origin.

  3. Seismic databases and earthquake catalogue of the Caucasus (United States)

    Godoladze, Tea; Javakhishvili, Zurab; Tvaradze, Nino; Tumanova, Nino; Jorjiashvili, Nato; Gok, Rengen


    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms~7.0, Io=9); the Lechkhumi-Svaneti earthquake of 1350 (Ms~7.0, Io=9); and the Alaverdi earthquake of 1742 (Ms~6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms~6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms~6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the 1991 Ms=7.0 Racha earthquake, the largest event ever recorded in the region; the 1992 M=6.5 Barisakho earthquake; and the Ms=6.9 Spitak, Armenia earthquake (100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of various national networks (Georgia, ~25 stations; Azerbaijan, ~35 stations; Armenia, ~14 stations). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. A catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences, Ilia State University). The catalog consists of more than 80,000 events. Together with our colleagues from Armenia, Azerbaijan and Turkey, a database of Caucasus seismic events was compiled. We tried to improve the locations of the events and to calculate moment magnitudes for events larger than magnitude 4 in order to obtain a unified magnitude catalogue of the region. The results will serve as the input for the seismic hazard assessment of the region.

  4. Hurricane Sandy and earthquakes




    We submit for consideration a connection between the formation of Hurricane Sandy and earthquakes. As a rule, weather anomalies precede and accompany earthquakes. Hurricane Sandy emerged 2 days prior to strong earthquakes that occurred in the area, and its trajectory matched the epicenters of the earthquakes. The possibility of early prediction of natural disasters would minimize the moral and material damage.

  5. Study on Stress Triggering During the Activity Process of the Jiashi Strong Earthquake Swarm

    Institute of Scientific and Technical Information of China (English)


    The Bachu-Jiashi earthquake of Ms6.8 occurred on February 24, 2003, about 20 km southeast of the 1997~1998 Jiashi seismic region in Xinjiang, and its aftershock sequence was rich and strong. Did the 1997~1998 Jiashi strong earthquake swarm trigger the Bachu-Jiashi Ms6.8 earthquake? The Atushi earthquake of Ms6.7 occurred in 1996, and the 1997~1998 Jiashi strong earthquake swarm occurred about 70 km from the Atushi earthquake 10 months later. Did the Atushi earthquake of Ms6.7 encourage the 1997~1998 Jiashi strong earthquake swarm? There were 9 earthquakes with Ms≥6.0 from 1996 to 1997 in the Jiashi seismic region; how did they act on each other? To answer these questions, the article studies the triggering effects during the activity process of the whole Jiashi earthquake swarm, from the 1996 Atushi earthquake of Ms6.7 through the 1997~1998 Jiashi strong swarm to the 2003 Bachu-Jiashi earthquake of Ms6.8, and analyzes the seismicity characteristics around the Jiashi region. The results show that the 1996 Atushi earthquake of Ms6.7 encouraged the 1997~1998 Jiashi strong swarm to some extent, that the accumulative Coulomb stress change from the previous M≥6.0 earthquakes of the Jiashi strong swarm had certain triggering effects on the following M≥6.0 events, and that the Coulomb stress change from the Jiashi strong swarm strongly encouraged the 2003 Bachu-Jiashi earthquake of Ms6.8.
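The Coulomb stress-triggering logic used in such studies can be sketched numerically. Below is a minimal illustration of the standard Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn; the stress values and the effective friction coefficient are assumed for illustration, not values from this study.

```python
def coulomb_stress_change(delta_shear, delta_normal, mu_eff=0.4):
    """Static Coulomb failure stress change (MPa).

    delta_shear  : shear stress change resolved in the slip direction
                   (positive promotes slip)
    delta_normal : normal stress change (positive = unclamping)
    mu_eff       : effective friction coefficient (assumed 0.4)
    """
    return delta_shear + mu_eff * delta_normal

# A positive change of ~0.01 MPa (0.1 bar) is a commonly cited
# triggering threshold in the Coulomb-stress literature.
print(coulomb_stress_change(0.02, -0.01))  # 0.02 - 0.004 = ~0.016 MPa
```

A positive ΔCFS on a receiver fault is read as "encouraging" failure, which is the sense in which the swarm events are said to have triggered one another.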

  6. Sensory Release: The All-New BMW M6

    Institute of Scientific and Technical Information of China (English)



    A boundless passion for, and pursuit of, the ultimate driving pleasure has always been the BMW Group's proudest conviction and mission. The all-new BMW M6 not only carries on the classic performance of the BMW M1 and 635CSi, but also successfully interprets the outstanding design of the large luxury two-door 6 Series coupe, making it the fastest flagship of BMW's current M lineup.

  7. High-resolution seismic velocities and shallow structure of the San Andreas fault zone at Middle Mountain, Parkfield, California (United States)

    Catchings, R.D.; Rymer, M.J.; Goldman, M.R.; Hole, J.A.; Huggins, R.; Lippus, C.


    A 5-km-long, high-resolution seismic imaging survey across the San Andreas fault (SAF) zone and the proposed San Andreas Fault Observatory at Depth (SAFOD) drill site near Parkfield, California, shows that velocities vary both laterally and vertically. High velocities (>4.0 km/sec) probably correspond to granitic rock of the Salinian block, which is exposed a few kilometers southwest of the SAF. The depth to the top of probable granitic rock varies laterally along the seismic profile but is about 600 m below the surface at the proposed SAFOD site. We observe a prominent, lateral low-velocity zone (LVZ) beneath and southwest of the surface trace of the SAF. The LVZ is about 1.5 km wide at 300-m depth but tapers to about 600 m wide at 750-m depth. At the maximum depth of the velocity model (750 m), the LVZ is centered approximately 400 m southwest of the surface trace of the SAF. Similar velocities and velocity gradients are observed at comparable depths on both sides of the LVZ, suggesting that the LVZ is anomalous relative to rocks on either side of it. Velocities within the LVZ are lower than those of San Andreas fault gouge, and the LVZ is also anomalous with respect to gravity, magnetic, and resistivity measurements. Because of its proximity to the surface trace of the SAF, it is tempting to suggest that the LVZ represents a zone of fractured crystalline rocks at depth. However, the LVZ more probably represents a tectonic sliver of sedimentary rock that now rests adjacent to, or encompasses, the SAF. Such a sliver of sedimentary rock implies fault strands on both sides of, and possibly within, the sliver, suggesting a zone of fault strands at least 1.5 km wide at a depth of 300 m, tapering to about 600 m wide at 750-m depth. Fluids within the sedimentary sliver are probably responsible for the observed low-resistivity values.

  8. Dynamics of liquefaction during the 1987 Superstition Hills, California, earthquake (United States)

    Holzer, T.L.; Youd, T.L.; Hanks, T.C.


    Simultaneous measurements of seismically induced pore-water pressure changes and surface and subsurface accelerations at a site undergoing liquefaction caused by the Superstition Hills, California, earthquake (24 November 1987; M = 6.6) reveal that total pore pressures approached lithostatic conditions, but, unexpectedly, only after most of the strong motion had ceased. Excess pore pressures were generated once horizontal acceleration exceeded a threshold value.
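The statement that pore pressures "approached lithostatic conditions" is commonly quantified with the excess pore pressure ratio r_u. A minimal sketch, using illustrative numbers rather than the measured Superstition Hills values:

```python
def pore_pressure_ratio(excess_u, sigma_v_eff):
    """Excess pore pressure ratio r_u = u_excess / sigma'_v0.

    r_u approaching 1 means the excess pore pressure approaches the
    initial effective overburden stress, i.e. near-lithostatic total
    pressure and the onset of liquefaction. Inputs in the same units
    (e.g. kPa); the values below are illustrative, not measured data.
    """
    return excess_u / sigma_v_eff

# e.g. 55 kPa of excess pressure against 60 kPa effective overburden
print(round(pore_pressure_ratio(55.0, 60.0), 2))  # 0.92
```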


    Energy Technology Data Exchange (ETDEWEB)

    Chou, W.; Wei, J.


    The M6 working group had more than 40 active participants (listed in Section 4). During the three weeks at Snowmass, there were about 50 presentations, covering a wide range of topics associated with high intensity proton sources. The talks are listed in Section 5. This group also had joint sessions with a number of other working groups, including E1 (Neutrino Factories and Muon Colliders), E5 (Fixed-Target Experiments), M1 (Muon Based Systems), T4 (Particle Sources), T5 (Beam Dynamics), T7 (High Performance Computing) and T9 (Diagnostics). The M6 group performed a survey of the beam parameters of existing and proposed high intensity proton sources, in particular of the proton drivers. The results are listed in Table 1. These parameters are compared with the requirements of high-energy physics users of secondary beams in Working Groups E1 and E5. According to the consensus reached in the E1 and E5 groups, the U.S. HEP program requires an intense proton source, a 1-4 MW Proton Driver, by the end of this decade.

  10. Tohoku earthquake: a surprise?

    CERN Document Server

    Kagan, Yan Y


    We consider three issues related to the 2011 Tohoku mega-earthquake: (1) how to evaluate the earthquake maximum size in subduction zones, (2) what is the repeat time for the largest earthquakes in Tohoku area, and (3) what are the possibilities of short-term forecasts during the 2011 sequence. There are two quantitative methods which can be applied to estimate the maximum earthquake size: a statistical analysis of the available earthquake record and the moment conservation principle. The latter technique studies how much of the tectonic deformation rate is released by earthquakes. For the subduction zones, the seismic or historical record is not sufficient to provide a reliable statistical measure of the maximum earthquake. The moment conservation principle yields consistent estimates of maximum earthquake size: for all the subduction zones the magnitude is of the order 9.0--9.7, and for major subduction zones the maximum earthquake size is statistically indistinguishable. Starting in 1999 we have carried out...

  11. Earthquake Potential in the Zagros Region, Iran

    Directory of Open Access Journals (Sweden)

    Madahizadeh Rohollah


    Seismic strain and b value are used to quantify seismic potential in the Zagros region (Iran). Small b values (0.69 and 0.69) are associated with large seismic moment rates (9.96×10^17 and 4.12×10^17) in the southern zones of the Zagros, indicating more frequent large earthquakes. Medium to large b values (0.72 and 0.92) are associated with small seismic moment rates (2.94×10^16 and 6.80×10^16) in the middle zones of the Zagros, indicating less frequent large earthquakes. A small b value (0.64) is associated with a medium seismic moment rate (1.38×10^17) in the middle to northern zone of the Zagros, indicating frequent large earthquakes. A large b value (0.87) is associated with a large seismic moment rate (2.29×10^17) in the northwestern zone, indicating more frequent large earthquakes. Recurrence intervals of large earthquakes (M > 6) are short in the southern (10 and 14 years) and northwestern (13 years) zones, while the recurrence intervals are long in the middle (46 and 114 years) and middle to northern (25 years) zones.
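The link between b value and recurrence interval follows from the Gutenberg-Richter relation log10 N(M≥m) = a − b·m. A minimal sketch; the a-value below is a hypothetical number chosen for illustration (only the b value comes from the abstract):

```python
def annual_rate(a, b, m):
    """Gutenberg-Richter annual rate of events with magnitude >= m."""
    return 10 ** (a - b * m)

def recurrence_interval(a, b, m):
    """Mean recurrence interval (years) for events with M >= m."""
    return 1.0 / annual_rate(a, b, m)

# Hypothetical a-value chosen so that T(M >= 6) comes out near the
# ~10 yr reported for a southern Zagros zone with b = 0.69.
a = 3.14
print(round(recurrence_interval(a, 0.69, 6.0), 1))  # 10.0 years
```

In this framework a smaller b value shifts relative frequency toward larger magnitudes, which is why small b plus a high moment rate implies short recurrence of large events.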

  12. Automatic identification of fault zone head waves and direct P waves and its application in the Parkfield section of the San Andreas Fault, California (United States)

    Li, Zefeng; Peng, Zhigang


    Fault zone head waves (FZHWs) are observed along major strike-slip faults and can provide high-resolution imaging of fault interface properties at seismogenic depth. In this paper, we present a new method to automatically detect FZHWs and pick direct P waves as secondary arrivals (DWSAs). The algorithm identifies FZHWs by computing the amplitude ratios between the potential FZHWs and DWSAs. The polarities, polarizations and characteristic periods of the FZHWs and DWSAs are then used to refine the picks or evaluate pick quality. We apply the method to the Parkfield section of the San Andreas Fault, where FZHWs have previously been identified by manual picking. We compare results from automatically and manually picked arrivals and find general agreement between them. The obtained velocity contrast at Parkfield is generally 5-10 per cent near Middle Mountain and decreases below 5 per cent near Gold Hill. We also find many FZHWs recorded by stations within 1 km of the background seismicity (i.e. the Southwest Fracture Zone) that have not been reported before. These FZHWs could be generated within a relatively wide low-velocity zone sandwiched between the fast Salinian block on the southwest side and the slow Franciscan Mélange on the northeast side. Station FROB on the southwest (fast) side also recorded a small portion of weak precursory signals before sharp P waves. However, the polarities of the weak signals are consistent with right-lateral strike-slip mechanisms, suggesting that they are unlikely to be genuine FZHW signals.
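The core of such automatic detection, comparing the amplitude of a candidate emergent first arrival against the later, sharper direct P wave, can be sketched as follows. The window length, threshold, and synthetic trace are illustrative assumptions, not the authors' actual parameters:

```python
import numpy as np

def fzhw_candidate(trace, pick1, pick2, win=50, ratio_min=1.5):
    """Flag a possible fault zone head wave (FZHW).

    A FZHW appears as an emergent, low-amplitude first arrival
    (pick1) followed by a larger direct P wave (pick2). We compare
    mean absolute amplitudes in short windows after each pick; win
    and ratio_min are illustrative, not published values.
    """
    a1 = np.mean(np.abs(trace[pick1:pick1 + win]))
    a2 = np.mean(np.abs(trace[pick2:pick2 + win]))
    return a2 / a1 >= ratio_min  # direct P much stronger than precursor

# Synthetic example: weak emergent precursor, then a strong direct P.
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(500)   # background noise
trace[100:150] += 0.3                    # head-wave-like precursor
trace[150:200] += 2.0                    # sharp direct arrival
print(fzhw_candidate(trace, 100, 150))   # True
```

A production picker would of course add the polarity, polarization and characteristic-period checks described in the abstract before accepting a candidate.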

  13. Water level and strain changes preceding and following the August 4, 1985 Kettleman Hills, California, earthquake (United States)

    Roeloffs, E.; Quilty, E.


    Two of the four wells monitored near Parkfield, California, during 1985 showed water level rises beginning three days before the M 6.1 Kettleman Hills earthquake. In one of these wells, the 3.0 cm rise was nearly unique in five years of water level data. However, in the other well, which showed a 3.8 cm rise, many other changes of comparable size have been observed. Both wells that did not display pre-earthquake rises tap partially confined aquifers that cannot sustain pressure changes due to tectonic strain having periods longer than several days. We evaluate the effect of partial aquifer confinement on the ability of these four wells to display water level changes in response to aquifer strain. Although the vertical hydraulic diffusivities cannot be determined uniquely, we can find a value of diffusivity for each site that is consistent with the site's tidal and barometric responses as well as with the rate of partial recovery of the coseismic water level drops. Furthermore, the diffusivity for one well is high enough to explain why the preseismic rise could not have been detected there. For the fourth well, the diffusivity is high enough to have reduced the size of the preseismic signal as much as 50%, although it should still have been detectable. Imperfect confinement cannot explain the persistent water level changes in the two partially confined aquifers, but it does show that they were not due to volume strain. The pre-earthquake water level rises may have been precursors to the Kettleman Hills earthquake. If so, they probably were not caused by accelerating slip over the part of the fault plane that ruptured in that earthquake because they are of opposite sign to the observed coseismic water level drops.

  14. Receptor-mediated hepatic uptake of M6P-BSA-conjugated triplex-forming oligonucleotides in rats. (United States)

    Ye, Zhaoyang; Cheng, Kun; Guntaka, Ramareddy V; Mahato, Ram I


    Excessive production of extracellular matrix, predominantly type I collagen, results in liver fibrosis. Earlier, we synthesized mannose 6-phosphate-bovine serum albumin (M6P-BSA) and conjugated it to a type I collagen-specific triplex-forming oligonucleotide (TFO) for enhanced delivery to hepatic stellate cells (HSCs), the principal fibrogenic cells of the liver. In this report, we demonstrate a time-dependent cellular uptake of M6P-BSA-33P-TFO by HSC-T6 cells. Both cellular uptake and nuclear deposition of M6P-BSA-33P-TFO were significantly higher than those of 33P-TFO, leading to enhanced inhibition of type I collagen transcription. Following systemic administration into rats, hepatic accumulation of M6P-BSA-33P-TFO increased from 55% to 68% as the number of M6P residues per BSA increased from 14 to 27. Unlike 33P-TFO, there was no significant decrease in the hepatic uptake of (M6P)20-BSA-33P-TFO in fibrotic rats. Prior administration of excess M6P-BSA decreased the hepatic uptake of (M6P)20-BSA-33P-TFO from 66% to 40% in normal rats, and from 60% to 15% in fibrotic rats, suggesting M6P/insulin-like growth factor II (M6P/IGF II) receptor-mediated endocytosis of M6P-BSA-33P-TFO by HSCs. Almost 82% of the total liver uptake in fibrotic rats was contributed by HSCs. In conclusion, by conjugation with M6P-BSA, the TFO could potentially be used for the treatment of liver fibrosis.

  15. Acute guttate psoriasis patients have positive streptococcus hemolyticus throat cultures and elevated antistreptococcal M6 protein titers. (United States)

    Zhao, Guang; Feng, Xiaoling; Na, Aihua; Yongqiang, Jiang; Cai, Qing; Kong, Jian; Ma, Huijun


    To further study the role of Streptococcus hemolyticus infection and streptococcal M6 protein in the pathogenesis of acute guttate psoriasis, streptococcal cultures were taken from the throats of 68 patients with acute guttate psoriasis. PCR was applied to detect M6 protein-encoding DNA in the cultured streptococci. Pure M6 protein was obtained from proliferated Streptococcus hemolyticus by Sephacryl S-200HR and Mono-Q chromatography. Antistreptococcal M6 protein titers were measured by ELISA in the serum of patients with acute guttate psoriasis, patients with plaque psoriasis, and healthy controls. A high incidence of positive Streptococcus hemolyticus cultures was observed in the guttate psoriatic group compared with the plaque psoriasis and control groups. Fourteen strains of Streptococcus hemolyticus were cultured from the throats of the 68 acute guttate psoriasis patients. Of these, 5 strains contained DNA encoding the M6 protein gene, as confirmed by PCR. M6 protein of more than 85% purity was obtained from Streptococcus pyogenes. Using this purified M6 protein with ELISA, we found that the titer of antistreptococcal M6 protein was significantly higher in the serum of guttate psoriasis patients than in the control or plaque psoriasis groups. Patients with acute guttate psoriasis thus have a high incidence of Streptococcus hemolyticus in their throats and raised titers of antistreptococcal M6 protein in their sera.

  16. Uniform California earthquake rupture forecast, version 2 (UCERF 2) (United States)

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.


    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
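For the time-independent (Poisson-process) part of such a model, the probability of one or more events in a time window follows directly from the event rate. A minimal sketch with an assumed rate (not a UCERF 2 value):

```python
import math

def poisson_prob(rate_per_year, years):
    """Probability of one or more events in the window (Poisson model):
    P = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate_per_year * years)

# Illustrative only: with an assumed statewide mean rate of 0.02
# events/yr for some magnitude threshold, the 30-yr probability is:
print(round(poisson_prob(0.02, 30), 3))  # 0.451
```

The time-dependent probabilities in the forecast replace this constant rate with stress-renewal statistics conditioned on the date of the last event, but the conversion from rate to "probability of one or more events" has the same form.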

  17. Integrated Seismicity Model to Detect Pairs of Possible Interdependent Earthquakes and Its Application to Aftershocks of the 2011 Tohoku-Oki Earthquake and Sequence of the 2014 Kermadec and Rat Islands Earthquakes (United States)

    Miyazawa, M.; Tamura, R.


    We introduce an integrated seismicity model to stochastically evaluate the time intervals of consecutive earthquakes at global scales, making it possible to detect a pair of earthquakes that are remotely located and possibly related to each other. The model includes seismicity in non-overlapping areas and comprehensively explains the seismicity on the basis of point-process models, which include the stationary Poisson model, the aftershock decay model following the Omori-Utsu law, and/or the epidemic-type aftershock sequence (ETAS) model. Using this model, we examine the possibility of remote triggering of the 2011 M6.4 eastern Shizuoka earthquake in the vicinity of Mt. Fuji, which occurred 4 days after the Mw9.0 Tohoku-Oki earthquake and 4 minutes after the M6.2 off-Fukushima earthquake located about 400 km away, and of the 2014 Mw7.9 Rat Islands earthquake, which occurred within one hour of the Mw6.7 Kermadec earthquake located about 9,000 km away and followed two large (Mw6.9, 6.5) earthquakes in that region. Both target earthquakes occurred during the passage of surface waves propagating from the previous large events. We estimated the probability that the time interval is shorter than that between consecutive events and obtained the dynamic stress changes on the faults. The results indicate that the M6.4 eastern Shizuoka event may rather have been triggered by the static stress changes from the Tohoku-Oki earthquake, and that the Mw7.9 Rat Islands event may have been remotely triggered by the Kermadec events, possibly via cyclic fatigue.
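The Omori-Utsu aftershock decay model referenced above gives the aftershock rate as n(t) = K/(t + c)^p. A minimal sketch with illustrative parameters (not values fitted in the study):

```python
def omori_rate(t, K=100.0, c=0.05, p=1.0):
    """Omori-Utsu aftershock rate at time t (days) after a mainshock.

    K, c, p are illustrative parameters: K scales productivity, c
    regularizes the rate near t = 0, and p ~ 1 gives the classic
    roughly 1/t decay.
    """
    return K / (t + c) ** p

# Rate decays roughly tenfold between 1 day and 10 days post-mainshock.
print(round(omori_rate(1.0), 1), round(omori_rate(10.0), 1))
```

In an ETAS model every event, not just the mainshock, spawns its own Omori-type aftershock rate, and the total rate is the sum of these contributions plus a stationary background term.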

  18. Membrane glycoprotein M6A promotes μ-opioid receptor endocytosis and facilitates receptor sorting into the recycling pathway

    Institute of Scientific and Technical Information of China (English)

    Ying-Jian Liang; Dai-Fei Wu; Ralf Stumm; Volker H(o)llt; Thomas Koch


    The interaction of μ-opioid receptor (MOPr) with the neuronal membrane glycoprotein M6a is known to facilitate MOPr endocytosis in human embryonic kidney 293 (HEK293) cells. To further study the role of M6a in the post-endocytotic sorting of MOPr, we investigated the agonist-induced co-internalization of MOPr and M6a and protein targeting after internalization in HEK293 cells that co-expressed HA-tagged MOPr and Myc-tagged M6a. We found that M6a, MOPr, and Rab 11, a marker for recycling endosomes, co-localized in endocytotic vesicles, indicating that MOPr and M6a are primarily targeted to recycling endosomes after endocytosis. Furthermore, co-expression of M6a augmented the post-endocytotic sorting of δ-opioid receptors into the recycling pathway, indicating that M6a might have a more general role in opioid receptor post-endocytotic sorting. The enhanced post-endocytotic sorting of MOPr into the recycling pathway was accompanied by a decrease in agonist-induced receptor down-regulation in M6a-co-expressing cells. We tested the physiological relevance of these findings in primary cultures of cortical neurons and found that co-expression of M6a markedly increased the translocation of MOPrs from the plasma membrane to intracellular vesicles at steady state and significantly enhanced both constitutive and agonist-induced receptor endocytosis. In conclusion, our results strongly indicate that M6a modulates MOPr endocytosis and post-endocytotic sorting and has an important role in receptor regulation.

  19. ElarmS Earthquake Early Warning System Updates and Performance (United States)

    Chung, A. I.; Allen, R. M.; Hellweg, M.; Henson, I. H.; Neuhauser, D. S.


    The ElarmS earthquake early warning algorithm has been detecting earthquakes throughout California since 2007. It is one of the algorithms that contributes to CISN's ShakeAlert, a prototype earthquake early warning system being developed for California. Overall, ElarmS performance has been excellent. Over the past year (July 1, 2014 - July 1, 2015), ElarmS successfully detected all but three of the significant earthquakes (M4+) that occurred within California. Of the 24 events that were detected, the most notable was the M6.0 South Napa earthquake that occurred on August 24, 2014. The first alert for this event was sent in 5.1 seconds with an initial magnitude estimate of M5.7. This alert provided approximately 8 seconds of warning of the impending S-wave arrival to the city of San Francisco. The magnitude estimate increased to the final value of M6.0 within 15 seconds of the initial alert. One of the three missed events occurred within 30 seconds of the M6.0 Napa mainshock; the other two occurred offshore in a region with sparse station coverage in the Eureka area. Since its inception, ElarmS has evolved and adapted to meet new challenges. On May 30, 2015, an extraordinarily deep (678 km) M7.8 teleseism in Japan generated 5 false event detections for earthquakes greater than M4 within a minute, due to the simultaneous arrival of the P-waves at stations throughout California. In order to improve the speed and accuracy of ElarmS detections, we are currently exploring new methodologies to quickly evaluate incoming triggers from individual stations. Rapidly determining whether or not a trigger at a given station is due to a local earthquake or some other source (such as a distant teleseism) could dramatically increase the confidence in individual triggers and reduce false alerts.

  20. Earthquake Damage - General (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — An earthquake is the motion or trembling of the ground produced by sudden displacement of rock in the Earth's crust. Earthquakes result from crustal strain,...

  1. Earthquake Notification Service (United States)

    U.S. Geological Survey, Department of the Interior — The Earthquake Notification Service (ENS) is a free service that sends you automated notifications to your email or cell phone when earthquakes happen.

  2. Earthquakes: hydrogeochemical precursors (United States)

    Ingebritsen, Steven E.; Manga, Michael


    Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

  3. Earthquakes in Southern California (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — There have been many earthquake occurrences in Southern California. This set of slides shows earthquake damage from the following events: Imperial Valley, 1979,...


    Institute of Scientific and Technical Information of China (English)



    Two measures of earthquakes, the seismic moment and the broadband radiated energy, show completely different scaling relations. For shallow earthquakes worldwide from January 1987 to December 1998, the frequency distribution of the seismic moment shows a clear kink between moderate and large earthquakes, as revealed by previous works. But the frequency distribution of the broadband radiated energy shows a single power law, a classical Gutenberg-Richter relation. This inconsistency raises a paradox in the self-organized criticality model of earthquakes.
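Comparing frequency distributions of seismic moment and magnitude relies on the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m. A minimal sketch:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude Mw from seismic moment M0 (N*m),
    using the IASPEI standard constant 9.1."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# A seismic moment of ~4e19 N*m corresponds to roughly Mw 7.
print(round(moment_magnitude(4.0e19), 2))  # 7.0
```

Because Mw is logarithmic in M0, a power law in moment maps to an exponential (Gutenberg-Richter) form in magnitude, which is why a kink in one distribution but not the other is notable.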

  5. Deformation in the central Gulf of California from the August 2009 M 6.9 event (United States)

    Plattner, C.; Amelung, F.; Malservisi, R.; Hackl, M.; Gonzalez-Garcia, J. J.


    The Gulf of California (GOC) hosts 90% of the Pacific-North America plate motion (about 45 mm/yr) along a transtensional fault system. In the central GOC, most of this deformation is localized on the Ballenas Transform fault segment, which lies in a narrow (~20 km wide) oceanic channel between the Baja California peninsula and Isla Angel de la Guarda. In August 2009 an Mw 6.9 earthquake occurred on this fault and led to significant surface deformation that can be quantified from space-geodetic data on the Baja California peninsula and Isla Angel de la Guarda. Here we use a combination of Interferometric Synthetic Aperture Radar (InSAR) data acquired by Envisat (May - November 2009) and campaign Global Positioning System (GPS) data (with measurements in May and December 2009). At three of our five GPS sites we find excellent agreement between the InSAR and GPS data. At Isla Angel de la Guarda we test for phase ambiguity from unwrapping errors in the InSAR data; however, these cannot explain the misfit between the InSAR and GPS data. We include the phase-ambiguity testing in the elastic half-space coseismic displacement modeling and invert for the fault parameters.

  6. Children's Ideas about Earthquakes (United States)

    Simsek, Canan Lacin


    Earthquake, a natural disaster, is among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be correspondingly reduced. In particular, good training regarding earthquakes received in primary schools is considered…

  7. Earthquake and Schools. [Videotape]. (United States)

    Federal Emergency Management Agency, Washington, DC.

    Designing schools to make them more earthquake resistant and protect children from the catastrophic collapse of the school building is discussed in this videotape. It reveals that 44 of the 50 U.S. states are vulnerable to earthquakes, but most schools are structurally unprepared for the stresses that earthquakes exert. The cost to the…

  8. School Safety and Earthquakes. (United States)

    Dwelley, Laura; Tucker, Brian; Fernandez, Jeanette


    A recent assessment of earthquake risk to Quito, Ecuador, concluded that many of its public schools are vulnerable to collapse during major earthquakes. A subsequent examination of 60 buildings identified 15 high-risk buildings. These schools were retrofitted to meet standards that would prevent injury even during Quito's largest earthquakes. US…

  9. Interaction of small repeating earthquakes in a rate and state fault model (United States)

    Lapusta, N.; Chen, T.


    Small repeating earthquake sequences can be located very close together; for example, the San Andreas Fault Observatory at Depth (SAFOD) target-cluster repeaters "San Francisco" and "Los Angeles" are separated by only about 50 m. These two repeating sequences are also close in occurrence time, indicating substantial interaction. Modeling the interaction of repeating sequences and comparing the modeling results with observations would help us understand the physics of fault slip. Here we conduct numerical simulations of two asperities in a rate and state fault model (Chen and Lapusta, JGR, 2009), with the asperities being rate-weakening and the rest of the fault rate-strengthening. One of our goals is to create a model for the observed interaction between the "San Francisco" and "Los Angeles" clusters. The studies of Chen and Lapusta (JGR, 2009) and Chen et al. (accepted by EPSL, 2010) showed that this approach can reproduce the behavior of isolated repeating earthquake sequences, in particular the scaling of their moment versus recurrence time and the response to accelerated postseismic creep. In this work, we investigate the effect of the distance between asperities and of asperity size on the interaction, in terms of occurrence time, seismic moment and rupture pattern. The fault is governed by the aging version of rate-and-state friction. To account for the relatively high stress drops inferred seismically for Parkfield SAFOD target earthquakes (Dreger et al., 2007), we also conduct simulations that include enhanced dynamic weakening during seismic events. As expected based on prior studies (e.g., Kato, JGR, 2004; Kaneko et al., Nature Geoscience, 2010), the two asperities act like one asperity if they are close enough, and they behave like isolated asperities when they are sufficiently separated. Motivated by the SAFOD target repeaters that rupture separately but show evidence of interaction, we concentrate on the intermediate distance between asperities. In that regime, the…

  10. Redefining Earthquakes and the Earthquake Machine (United States)

    Hubenthal, Michael; Braile, Larry; Taber, John


    The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…

  12. Deep postseismic viscoelastic relaxation excited by an intraslab normal fault earthquake in the Chile subduction zone (United States)

    Bie, Lidong; Ryder, Isabelle; Métois, Marianne


    The 2005 Mw 7.8 Tarapaca earthquake was the result of normal faulting on a west-dipping plane at a depth of 90 km within the subducting slab, down-dip of the North Chilean gap that partially ruptured in the 2014 M 8.2 Iquique earthquake. We use Envisat observations of nearly four years of postseismic deformation following the earthquake, together with survey GPS measurements, to investigate the viscoelastic relaxation response of the surrounding upper mantle to the coseismic stress. We constrain the rheological structure by testing various 3D models, taking into account the vertical and lateral heterogeneities in viscosity that one would expect in a subduction zone environment. A viscosity of 4-8 × 10^18 Pa s for the continental mantle asthenosphere fits both InSAR line-of-sight (LOS) and GPS horizontal displacements reasonably well. To test whether the Tarapaca earthquake and the associated postseismic relaxation could have triggered the 2014 Iquique sequence, we computed the Coulomb stress change induced by the co- and postseismic deformation following the Tarapaca earthquake on the megathrust interface and on the nodal planes of the M 6.7 Iquique foreshock. These static stress calculations show that the Tarapaca earthquake may have had an indirect influence on the Iquique earthquake by positively loading its M 6.7 foreshock. We demonstrate the feasibility of using deep intraslab earthquakes to constrain subduction zone rheology. Continuing geodetic observation following the 2014 Iquique earthquake may further validate the rheological parameters obtained here.
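For a Maxwell rheology, the viscosity range quoted above maps to a characteristic relaxation time τ = η/μ. A minimal sketch; the shear modulus is an assumed typical upper-mantle value, not one fitted in the study:

```python
def maxwell_relaxation_time_years(viscosity_pa_s, shear_modulus_pa=3e10):
    """Maxwell relaxation time tau = eta / mu, converted to years.

    A shear modulus of 30 GPa is an assumed typical upper-mantle
    value; the viscosity range (4-8e18 Pa s) is from the abstract.
    """
    seconds_per_year = 3.15576e7  # Julian year
    return viscosity_pa_s / shear_modulus_pa / seconds_per_year

# Relaxation times for the reported viscosity range:
for eta in (4e18, 8e18):
    print(round(maxwell_relaxation_time_years(eta), 1))  # ~4.2, ~8.4 yr
```

Relaxation times of a few years are consistent with resolving the transient in roughly four years of postseismic InSAR and GPS observation.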

  13. Earthquake mechanism and seafloor deformation for tsunami generation (United States)

    Geist, Eric L.; Oglesby, David D.; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan


    Tsunamis are generated in the ocean by rapidly displacing the entire water column over a significant area. The potential energy resulting from this disturbance is balanced with the kinetic energy of the waves during propagation. Only a handful of submarine geologic phenomena can generate tsunamis: large-magnitude earthquakes, large landslides, and volcanic processes. Asteroid and subaerial landslide impacts can generate tsunami waves from above the water. Earthquakes are by far the most common generator of tsunamis. Generally, earthquakes greater than magnitude (M) 6.5–7 can generate tsunamis if they occur beneath an ocean and if they result in predominantly vertical displacement. One of the greatest uncertainties in both deterministic and probabilistic hazard assessments of tsunamis is computing seafloor deformation for earthquakes of a given magnitude.
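The scale of the fault slip behind such seafloor displacement can be roughed out from standard magnitude-moment relations. The sketch below is a simplification, not the assessment method of the abstract: it converts moment magnitude to seismic moment via the Hanks-Kanamori relation and then to average slip via M0 = μAD, with an illustrative rigidity and fault area.

```python
import math

RIGIDITY = 3.0e10  # Pa; a typical crustal rigidity (illustrative assumption)

def seismic_moment(mw):
    """Seismic moment M0 in N*m from moment magnitude,
    using Hanks-Kanamori: Mw = (2/3)*(log10(M0) - 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def average_slip(mw, fault_area_m2):
    """Average slip D from M0 = mu * A * D."""
    return seismic_moment(mw) / (RIGIDITY * fault_area_m2)

# A hypothetical Mw 7.0 rupture over a 40 km x 20 km fault plane:
slip = average_slip(7.0, 40e3 * 20e3)
```

For these illustrative numbers the average slip comes out near 1.7 m; the resulting vertical seafloor deformation additionally depends on dip, depth, and rake, which is precisely the uncertainty the abstract highlights.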

  14. Operational earthquake forecasting can enhance earthquake preparedness (United States)

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.


    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  15. He I D3 Observation of the 1984 May 22 M6.3 Solar Flare

    CERN Document Server

    Liu, Chang; Deng, Na; Lee, Jeongwoo; Zhang, Jifeng; Choudhary, Debi Prasad; Wang, Haimin


The He I D3 line has a unique response to the flare impact on the low solar atmosphere and can be a powerful diagnostic tool for energy transport processes. Using images obtained from the recently digitized films of Big Bear Solar Observatory, we report a D3 observation of the M6.3 flare on 1984 May 22, which occurred in an active region with a circular magnetic polarity inversion line (PIL). The impulsive phase of the flare starts with a main elongated source that darkens in D3, inside of which bright emission kernels appear at the time of the initial small peak in hard X-rays (HXRs). These flare cores subsequently evolve into a sharp emission strand lying within the dark halo, simultaneously with the main peak in HXRs, reversing the overall source contrast from -5% to 5%. The radiated energy in D3 during the main peak is estimated to be about 10^30 ergs, which is comparable to that carried by nonthermal electrons above 20 keV. Afterwards the flare proceeds along the circular PIL in the counterclockwise direction t...

  16. Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing (United States)

    Batina, John T.


Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements are recently developed spatial and temporal discretizations for unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply, with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales (ONERA) M6 wing to demonstrate applications of the new Euler solvers. The paper describes the Euler solvers and presents results and comparisons that assess their capability.

  17. Chemical composition of intermediate mass stars members of the M6 (NGC 6405) open cluster

    CERN Document Server

    Kılıçoğlu, Tolgahan; Richer, Jacques; Fossati, Luca; Albayrak, Berahitdin


We present here the first abundance analysis of 44 late B-, A-, and F-type members of the young open cluster M6 (NGC 6405, age about 75 Myr). Spectra, covering the 4500 to 5800 Å wavelength range, were obtained using the FLAMES/GIRAFFE spectrograph attached to the ESO Very Large Telescope (VLT). We determined the atmospheric parameters using calibrations of the Geneva photometry and by adjusting the Hβ profiles to synthetic ones. The abundances of up to 20 chemical elements were derived for 19 late B, 16 A, and 9 F stars by iteratively adjusting synthetic spectra to the observations. We also derived a mean cluster metallicity of [Fe/H] = 0.07 ± 0.03 dex from the iron abundances of the F-type stars. We find that, for most chemical elements, the normal late B- and A-type stars exhibit larger star-to-star abundance variations than the F-type stars do, probably because of the faster rotation of the B and A stars. The abundances of C, O, Mg, Si and Sc appear to be anticorrelated with that of Fe, w...

  18. Characterization and application of microearthquake clusters to problems of scaling, fault zone dynamics, and seismic monitoring at Parkfield, California

    Energy Technology Data Exchange (ETDEWEB)

    Nadeau, Robert Michael [Univ. of California, Berkeley, CA (United States)


    This document contains information about the characterization and application of microearthquake clusters and fault zone dynamics. Topics discussed include: Seismological studies; fault-zone dynamics; periodic recurrence; scaling of microearthquakes to large earthquakes; implications of fault mechanics and seismic hazards; and wave propagation and temporal changes.

  19. Earth's rotation variations and earthquakes 2010–2011

    Directory of Open Access Journals (Sweden)

    L. Ostřihanský


In contrast to unsuccessful searches, lasting over 150 years, for a correlation of earthquakes with biweekly tides, the author found a correlation of earthquakes with the sidereal 13.66-day Earth rotation variations, expressed as length of day (LOD) and measured daily by the International Earth Rotation Service (IERS). After a short mention of the M 8.8 Denali Fault, Alaska earthquake of 3 November 2002, triggered on a LOD maximum, and the M 9.1 Great Sumatra earthquake of 26 December 2004, triggered on a LOD minimum and the full Moon, the main objects of this paper are the earthquakes of the period 2010–June 2011: M 7.0 Haiti (12 January 2010) on a LOD minimum; M 8.8 Maule, Chile (12 February 2010) on a LOD maximum; a map constructed for the Indian plate revealing 6 of 7 earthquakes on LOD minima in the Sumatra and Andaman Sea region; M 7.1 New Zealand Christchurch (9 September 2010) on a LOD minimum and M 6.3 Christchurch (21 February 2011) on a LOD maximum; and M 9.1 Japan, near the coast of Honshu (11 March 2011), on a LOD minimum. It was found that LOD minima coincide with the full or new Moon only twice a year, at the solstices. To prove that the coincidences of earthquakes and LOD extremes stated above are not accidental, histograms were constructed of earthquake occurrences and their positions on the LOD graph deep into the past, in some cases back to 1962, when the IERS started to measure the Earth's rotation variations. Evaluation of the histograms and Schuster's test has shown that the majority of earthquakes are triggered during both deceleration and acceleration of the Earth's rotation. Because evident movements of the lithosphere occur during these coincidences, measured among other means by GPS, it is concluded that Earth's rotation variations contribute effectively to the movement of lithospheric plates. A retrospective overview of past earthquakes revealed that the Great Sumatra earthquake of 26 December 2004 had its equivalent in the shape of the LOD graph, full Moon position, and
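The Schuster test invoked in the abstract has a compact closed form: for N events with phases θ_i on the candidate cycle, the probability that the observed phase clustering arises by chance from uniformly distributed phases is p = exp(−R²/N), where R is the length of the resultant of the N unit phasors. A minimal sketch (the phase values are illustrative, not the paper's data):

```python
import math

def schuster_p_value(phases_rad):
    """Schuster's test: p = exp(-R^2 / N) for N unit phasors with the
    given phases; a small p means the events cluster on the cycle more
    strongly than uniform chance would allow."""
    n = len(phases_rad)
    c = sum(math.cos(p) for p in phases_rad)
    s = sum(math.sin(p) for p in phases_rad)
    return math.exp(-(c * c + s * s) / n)

# Illustrative only: six events clustered near phase 0 of the 13.66-day
# cycle give a small p, while evenly spread phases give p near 1.
p_clustered = schuster_p_value([0.0, 0.1, -0.1, 0.2, -0.2, 0.05])
p_uniform = schuster_p_value([k * math.pi / 3 for k in range(6)])
```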

  20. The key role of eyewitnesses in rapid earthquake impact assessment (United States)

    Bossu, Rémy; Steed, Robert; Mazet-Roux, Gilles; Roussel, Frédéric; Etivant, Caroline


Uncertainties in rapid earthquake impact models are intrinsically large even when excluding potential indirect losses (fires, landslides, tsunami…). The reason is that they are based on several factors which are themselves difficult to constrain, such as the geographical distribution of shaking intensity, the building type inventory, and vulnerability functions. The difficulties can be illustrated by two boundary cases. For moderate (around M6) earthquakes, the size of the potential damage zone and the epicentral location uncertainty are of comparable dimension, about 10–15 km. When such an earthquake strikes close to an urban area, as in Athens in 1999 (M5.9), location uncertainties alone can lead to dramatically different impact scenarios. Furthermore, for moderate magnitudes the overall impact is often controlled by individual accidents, as in Molise, Italy in 2002 (M5.7), in Bingol, Turkey in 2003 (M6.4), or in Christchurch, New Zealand (M6.3), where respectively 23 of 30, 84 of 176, and 115 of 185 of the casualties perished in a single building failure. Contrastingly, for major earthquakes (M>7) the point-source approximation is no longer valid, and impact assessment requires knowing exactly where the seismic rupture took place, whether it was unilateral or bilateral, etc., and this information is not readily available directly after the earthquake's occurrence. In-situ observations of actual impact provided by eyewitnesses can dramatically reduce impact model uncertainties. We will present the overall strategy developed at the EMSC, which comprises crowdsourcing and flashsourcing techniques, the development of citizen-operated seismic networks, and the use of social networks to engage with eyewitnesses within minutes of an earthquake occurrence. For instance, testimonies are collected through online questionnaires available in 32 languages and automatically processed into maps of effects. Geo-located pictures are collected and then

  1. Improved stress release model: Application to the study of earthquake prediction in Taiwan area

    Institute of Scientific and Technical Information of China (English)

    朱守彪; 石耀霖


The stress release model has previously been applied to the seismicity of large historical earthquakes over large spatial scales. In this paper, we improve the stress release model and discuss whether it remains applicable at smaller spatio-temporal scales and for weaker earthquakes. As a test of the model, we have analyzed the M ≥ 6 earthquakes of roughly the last 100 years. The result shows that the stress release model is still applicable. The earthquake conditional probability intensity in the Taiwan area is calculated with the improved stress release model. In retrospective prediction tests, the occurrence times predicted by the improved stress release model are more accurate than those from a Poisson model.
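For readers unfamiliar with the model class, the basic (unimproved) stress release model posits a hazard rate that rises with steady tectonic loading and drops at each event. A minimal sketch, with illustrative unfitted parameters rather than anything estimated in the paper:

```python
import math

def conditional_intensity(t, event_times, stress_drops, a=-2.0, b=0.05, c=0.5):
    """Hazard rate of a basic stress release model (a sketch, not the
    paper's improved version): lambda(t) = exp(a + b*t - c*S(t)), where
    b*t represents linear tectonic loading and S(t) is the cumulative
    stress released by past events. a, b, c are placeholder values."""
    s_t = sum(drop for ev, drop in zip(event_times, stress_drops) if ev < t)
    return math.exp(a + b * t - c * s_t)

# The hazard just after an event is lower than just before it:
lam_before = conditional_intensity(9.9, [10.0], [2.0])
lam_after = conditional_intensity(10.1, [10.0], [2.0])
```

This time dependence, hazard dropping after each release and recovering under loading, is what distinguishes the model's conditional probabilities from a memoryless Poisson rate.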

  2. Time-Dependent Earthquake Forecasts on a Global Scale (United States)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.; Graves, W. R.


We develop and implement a new type of global earthquake forecast. Our forecast is a perturbation on a smoothed seismicity (Relative Intensity) spatial forecast combined with a temporal time-averaged ("Poisson") forecast. A variety of statistical and fault-system models have been discussed for use in computing forecast probabilities. An example is the Working Group on California Earthquake Probabilities, which has been using fault-based models to compute conditional probabilities in California since 1988. Another example is the Epidemic-Type Aftershock Sequence (ETAS) forecast, which is based on the Gutenberg-Richter (GR) magnitude-frequency law, the Omori aftershock law, and Poisson statistics. The method discussed in this talk is based on the observation that GR statistics characterize seismicity for all space and time. Small-magnitude event counts (quake counts) are used as "markers" for the approach of large events. More specifically, if the GR b-value = 1, then for every 1000 M>3 earthquakes one expects 1 M>6 earthquake. So if ~1000 M>3 events have occurred in a spatial region since the last M>6 earthquake, another M>6 earthquake should be expected soon. In physics, event count models have been called natural time models, since counts of small events represent a physical or natural time scale characterizing the system dynamics. In previous research, we used conditional Weibull statistics to convert event counts into a temporal probability for a given fixed region. In the present paper, we move beyond a fixed region and develop a method to compute these Natural Time Weibull (NTW) forecasts on a global scale, using an internally consistent method, in regions of arbitrary shape and size. We develop and implement these methods on a modern web-service computing platform. We also discuss constraints on the User Interface (UI) that follow from practical considerations of site usability.
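The quake-count arithmetic in the abstract follows directly from the GR law, N(≥m) ∝ 10^(−b·m). A small sketch mirroring the abstract's b = 1 example:

```python
def expected_large_count(n_small, m_small, m_large, b=1.0):
    """Under Gutenberg-Richter, N(>=m) ~ 10**(a - b*m), so the expected
    number of events >= m_large, given n_small observed events
    >= m_small, is n_small * 10**(-b * (m_large - m_small))."""
    return n_small * 10 ** (-b * (m_large - m_small))

# With b = 1, 1000 events of M>3 imply about one event of M>6:
n_m6 = expected_large_count(1000, 3.0, 6.0)
```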

  3. Earthquake Science: a New Start

    Institute of Scientific and Technical Information of China (English)

    Chen Yun-tai


Understanding the mechanisms which cause earthquakes, and thus earthquake prediction, is inherently difficult in comparison to other physical phenomena. This is due to the inaccessibility of the Earth's interior, the infrequency of large earthquakes, and the complexities of the physical processes involved. Consequently, in its broadest sense, earthquake science, the science of studying earthquake phenomena, is a comprehensive and inter-disciplinary field. The disciplines involved in earthquake science include: traditional seismology, earthquake geodesy, earthquake geology, rock mechanics, complex system theory, and information and communication technologies related to earthquake studies.


    Energy Technology Data Exchange (ETDEWEB)

Kılıçoğlu, T.; Albayrak, B. [Ankara University, Faculty of Science, Department of Astronomy and Space Sciences, 06100, Tandoğan, Ankara (Turkey); Monier, R. [LESIA, UMR 8109, Observatoire de Paris Meudon, Place J. Janssen, Meudon (France); Richer, J. [Département de physique, Université de Montréal, 2900, Boulevard Edouard-Montpetit, Montréal QC, H3C 3J7 (Canada); Fossati, L. [Argelander-Institut für Astronomie der Universität Bonn, Auf dem Hügel 71, D-53121, Bonn (Germany)


We present here the first abundance analysis of 44 late B-, A-, and F-type members of the young open cluster M6 (NGC 6405, age about 75 Myr). Low- and medium-resolution spectra, covering the 4500–5840 Å wavelength range, were obtained using the FLAMES/GIRAFFE spectrograph attached to the ESO Very Large Telescopes. We determined the atmospheric parameters using calibrations of the Geneva photometry and by adjusting the Hβ profiles to synthetic ones. The abundances of up to 20 chemical elements, from helium to mercury, were derived for 19 late B, 16 A, and 9 F stars by iteratively adjusting synthetic spectra to the observations. We also derived a mean cluster metallicity of [Fe/H] = 0.07 ± 0.03 dex from the iron abundances of the F-type stars. We find that for most chemical elements, the normal late B- and A-type stars exhibit larger star-to-star abundance variations than the F-type stars probably because of the faster rotation of the B and A stars. The abundances of C, O, Mg, Si, and Sc appear to be anticorrelated with that of Fe, while the opposite holds for the abundances of Ca, Ti, Cr, Mn, Ni, Y, and Ba as expected if radiative diffusion is efficient in the envelopes of these stars. In the course of this analysis, we discovered five new peculiar stars: one mild Am, one Am, and one Fm star (HD 318091, CD-32 13109, GSC 07380-01211, CP1), one HgMn star (HD 318126, CP3), and one He-weak P-rich (HD 318101, CP4) star. We also discovered a new spectroscopic binary, most likely a SB2. We performed a detailed modeling of HD 318101, the new He-weak P-rich CP star, using the Montréal stellar evolution code XEVOL which self-consistently treats all particle transport processes. Although the overall abundance pattern of this star is properly reproduced, we find that detailed abundances (in particular the high P excess) resisted modeling attempts even when a range of turbulence profiles and mass-loss rates were considered. Solutions are proposed which are

  5. Brief communication "A pre-seismic radio anomaly revealed in the area where the Abruzzo earthquake (M = 6.3) occurred on 6 April 2009"

    Directory of Open Access Journals (Sweden)

    T. Ligonzo


We report the information that, in the days of the radio anomaly presented in the paper by Biagi et al. (2009), an interruption of the broadcasting from the transmitter (RMC, France) occurred. It remains unclear whether this action resulted in a complete power-off of the system or in a reduction of the radiated power, and whether it affected the France direction only or every direction. Should a complete power-off have occurred, the proposed pre-seismic defocusing is nonexistent. Our doubts about this action are reported.

  6. Encyclopedia of earthquake engineering

    CERN Document Server

    Kougioumtzoglou, Ioannis; Patelli, Edoardo; Au, Siu-Kui


The Encyclopedia of Earthquake Engineering is designed to be the authoritative and comprehensive reference covering all major aspects of the science of earthquake engineering, specifically focusing on the interaction between earthquakes and infrastructure. The encyclopedia comprises approximately 265 contributions. Since earthquake engineering deals with the interaction between earthquake disturbances and the built infrastructure, the emphasis is on basic design processes important to both non-specialists and engineers, so that readers become suitably well-informed without needing to deal with the details of specialist understanding. The content of this encyclopedia informs technically inclined readers about the ways in which earthquakes can affect our infrastructure and how engineers would go about designing against, mitigating, and remediating these effects. The coverage spans buildings, foundations, underground construction, lifelines and bridges, roads, embankments, and slopes. The encycl...

  7. Reduction of earthquake disasters

    Institute of Scientific and Technical Information of China (English)

    陈顒; 陈祺福; 黄静; 徐文立


The article summarizes research on mitigating earthquake disasters in China over the past four years. The study of earthquake disaster quantification shows that losses increase remarkably as population concentrates in urban areas and social wealth increases. The article also summarizes some new trends in the study of earthquake disaster mitigation, which run from seismic hazard to seismic risk and from engineering disaster to social disaster, and introduces the community-centered approach.

  8. Stress drops and radiated energies of aftershocks of the 1994 Northridge, California, earthquake


    Mori, Jim; Abercrombie, Rachel E.; Kanamori, Hiroo


We study stress levels and radiated energy to infer the rupture characteristics and scaling relationships of aftershocks and other southern California earthquakes. We use empirical Green's functions to obtain source time functions for 47 of the larger (M ≥ 4.0) aftershocks of the 1994 Northridge, California, earthquake (M6.7). We estimate static and dynamic stress drops from the source time functions and compare them to well-calibrated estimates of the radiated energy. Our measurements of radiat...

  9. Atmospheric Signals Associated with Major Earthquakes. A Multi-Sensor Approach. Chapter 9 (United States)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Kafatos, Menas; Taylor, Patrick


We are studying the possibility of a connection between atmospheric observations, recorded by several ground- and satellite-based instruments, and earthquake precursors. Our main goal is to search for the existence and cause of physical phenomena related to prior earthquake activity and to gain a better understanding of the physics of earthquakes and earthquake cycles. The recent catastrophic earthquake in Japan in March 2011 has provided renewed interest in the important question of the existence of precursory signals preceding strong earthquakes. We will demonstrate our approach based on the integration and analysis of several atmospheric and environmental parameters that have been found to be associated with earthquakes. These observations include: thermal infrared radiation; radon/ion activities; air temperature and humidity; and the concentration of electrons in the ionosphere. We describe a possible physical link between the atmospheric observations and earthquake precursors using the latest Lithosphere-Atmosphere-Ionosphere Coupling model, one of several paradigms used to explain our observations. Initial results for the period of 2003–2009 are presented from our systematic hind-cast validation studies. We present our findings of multi-sensor atmospheric precursory signals for two major earthquakes in Japan: the M6.7 Niigata-ken Chuetsu-oki earthquake of July 16, 2007 and the M9.0 great Tohoku earthquake of March 11, 2011.

  10. Long-Term Prediction of Large Earthquakes: When Does Quasi-Periodic Behavior Occur? (United States)

    Sykes, L. R.


    every great earthquake. The 2002 Working Group on large earthquakes in the San Francisco Bay region followed Ellsworth et al. (1999) in adopting much larger values of CV for several critical fault segments and underestimating their likelihood of rupture in the next 30 years. The Working Group also gives considerable weight to a Poisson model, which is in conflict with both renewal processes involving slow stress accumulation and with values of CV near 0.2. The failure of the Parkfield prediction has greatly influenced views in the U.S. about long-term forecasts. The model of the repeated breaking of a single asperity is incorrect since past Parkfield shocks of about magnitude 6 likely did not rupture the same part of the San Andreas fault.
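The coefficient of variation (CV) of recurrence intervals debated here is a simple statistic separating quasi-periodic from Poisson-like behavior; a minimal sketch with hypothetical interval data:

```python
import statistics

def coefficient_of_variation(intervals):
    """CV = standard deviation / mean of inter-event times. CV near 0
    indicates quasi-periodic recurrence; CV near 1 resembles a
    memoryless Poisson process."""
    mean = statistics.fmean(intervals)
    return statistics.pstdev(intervals) / mean

# Hypothetical quasi-periodic sequence of recurrence intervals (years):
cv = coefficient_of_variation([22, 25, 24, 21, 26, 23])
```

For this made-up sequence the CV is well below the 0.2 value discussed in the abstract, i.e. strongly quasi-periodic; a Poisson sequence of the same mean would give a CV near 1.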

  11. Stem cells. m6A mRNA methylation facilitates resolution of naïve pluripotency toward differentiation. (United States)

    Geula, Shay; Moshitch-Moshkovitz, Sharon; Dominissini, Dan; Mansour, Abed AlFatah; Kol, Nitzan; Salmon-Divon, Mali; Hershkovitz, Vera; Peer, Eyal; Mor, Nofar; Manor, Yair S; Ben-Haim, Moshe Shay; Eyal, Eran; Yunger, Sharon; Pinto, Yishay; Jaitin, Diego Adhemar; Viukov, Sergey; Rais, Yoach; Krupalnik, Vladislav; Chomsky, Elad; Zerbib, Mirie; Maza, Itay; Rechavi, Yoav; Massarwa, Rada; Hanna, Suhair; Amit, Ido; Levanon, Erez Y; Amariglio, Ninette; Stern-Ginossar, Noam; Novershtern, Noa; Rechavi, Gideon; Hanna, Jacob H


    Naïve and primed pluripotent states retain distinct molecular properties, yet limited knowledge exists on how their state transitions are regulated. Here, we identify Mettl3, an N(6)-methyladenosine (m(6)A) transferase, as a regulator for terminating murine naïve pluripotency. Mettl3 knockout preimplantation epiblasts and naïve embryonic stem cells are depleted for m(6)A in mRNAs, yet are viable. However, they fail to adequately terminate their naïve state and, subsequently, undergo aberrant and restricted lineage priming at the postimplantation stage, which leads to early embryonic lethality. m(6)A predominantly and directly reduces mRNA stability, including that of key naïve pluripotency-promoting transcripts. This study highlights a critical role for an mRNA epigenetic modification in vivo and identifies regulatory modules that functionally influence naïve and primed pluripotency in an opposing manner.

  12. Correlation of pre-earthquake electromagnetic signals with laboratory and field rock experiments

    Directory of Open Access Journals (Sweden)

    T. Bleier


    rock stressing results and the 30 October 2007 M5.4 Alum Rock earthquake field data.

The second part of this paper examined other California earthquakes, prior to the Alum Rock earthquake, to see if magnetic pulsations were also present before those events. A search for field examples of medium earthquakes was performed to identify earthquakes where functioning magnetometers were present within 20 km, the expected detection range of the magnetometers. Two earthquakes identified in the search were the 12 August 1998 M5.1 San Juan Bautista (Hollister, CA) earthquake and the 28 September 2004 M6.0 Parkfield, CA earthquake. Both of these data sets were recorded using EMI Corp. Model BF4 induction magnetometers, installed in equipment owned and operated by UC Berkeley. Unfortunately, no air conductivity or IR data were available for these earthquake examples. This new analysis of old data used the raw time series (40 samples per s) and examined the data for short-duration pulsations that exceeded the normal background noise levels at each site, similar to the technique used at Alum Rock. Analysis of the Hollister magnetometer, positioned 2 km from the epicenter, showed a significant increase in magnetic pulsations above quiescent threshold levels several weeks prior, and especially 2 days prior, to the quake. The pattern of positive and negative pulsations observed at Hollister was similar, but not identical, to Alum Rock in that the pulsations were interspersed with Pc 1 pulsation trains and did not start 2 weeks prior to the quake, but rather 2 days prior. The Parkfield data (magnetometer positioned 19 km from the epicenter) showed much smaller pre-earthquake pulsations, but the area had significantly higher conductivity (which attenuates the signals). More interesting was the fact that significant pulsations occurred between the aftershock sequences as the crustal stress patterns were migrating.


  13. Locating Very-Low-Frequency Earthquakes in the San Andreas Fault. (United States)

    Peña-Castro, A. F.; Harrington, R. M.; Cochran, E. S.


The portion of a tectonic fault where rheological properties transition from brittle to ductile hosts a variety of seismic signals suggesting a range of slip velocities. In subduction zones, the two dominantly observed signals are very-low-frequency earthquakes (VLFEs) and low-frequency earthquakes (LFEs) or tectonic tremor. Tremor and LFEs are also commonly observed on transform faults; however, VLFEs have been reported dominantly in subduction zone environments. Here we show some of the first known observations of VLFEs occurring on a plate-boundary transform fault, the Cholame-Parkfield segment of the San Andreas Fault (SAF) in California. We detect VLFEs using both permanent and temporary stations in 2010-2011 within approximately 70 km of Cholame, California. We search continuous waveforms filtered from 0.02-0.05 Hz and remove time windows containing teleseismic events and local earthquakes, as identified in the global Centroid Moment Tensor (CMT) and Northern California Seismic Network (NCSN) catalogs. We estimate the VLFE locations by converting the signal into envelopes and cross-correlating them for phase picking, similar to procedures used for locating tectonic tremor. We first perform epicentral location using a grid-search method, and estimate a hypocenter location using Hypoinverse and a shear-wave velocity model when the epicenter is located close to the SAF trace. We account for the velocity contrast across the fault using separate 1D velocity models for stations on each side. Estimated hypocentral VLFE depths are similar to tremor catalog depths (15-30 km). Only a few VLFEs produced robust hypocentral locations, presumably due to the difficulty of picking accurate phase arrivals with such a low-frequency signal. However, for events for which no location could be obtained, the moveout of phase arrivals across the stations was similar in character, suggesting that the other observed VLFEs occurred in close proximity.

  14. Earthquakes and Schools (United States)

    National Clearinghouse for Educational Facilities, 2008


    Earthquakes are low-probability, high-consequence events. Though they may occur only once in the life of a school, they can have devastating, irreversible consequences. Moderate earthquakes can cause serious damage to building contents and non-structural building systems, serious injury to students and staff, and disruption of building operations.…

  15. More Earthquake Misery

    Institute of Scientific and Technical Information of China (English)


Less than four months after the devastation of the Wenchuan earthquake on May 12, another quake brought further death and destruction to southwest China. On August 30, a 6.1-magnitude earthquake hit the border of Sichuan Province and Yunnan Province. Panzhihua City and Huili County in Sichuan, and Yuanmou County and Yongren County in Yunnan, were worst hit.

  16. Bam Earthquake in Iran

    CERN Multimedia


    Following their request for help from members of international organisations, the permanent Mission of the Islamic Republic of Iran has given the following bank account number, where you can donate money to help the victims of the Bam earthquake. Re: Bam earthquake 235 - UBS 311264.35L Bubenberg Platz 3001 BERN

  17. Earthquake-induced Landsliding and Ground Damage in New Zealand (United States)

    Hancox, G. T.; Perrin, N. D.; Dellow, G. D.

    A study of landsliding caused by 22 historical earthquakes in New Zealand was completed at the end of 1997 (Hancox et al., 1997). The main aims of that study were to determine: (a) the nature and extent of landsliding and other ground damage (sand boils, subsidence and lateral spreading due to soil liquefaction) caused by historical earthquakes; (b) relationships between landsliding and earthquake magnitude, epicentre, faulting, geology and topography; (c) improved environmental criteria and ground classes for assigning MM intensities and seismic hazard assessments in N.Z. The data and results of the 1997 study have recently been summarised and expanded (Hancox et al., in press), and are described in this paper. Relationships developed from these studies indicate that the minimum magnitude for earthquake-induced landsliding (EIL) in N.Z. is about M 5, with significant landsliding occurring at M 6 or greater. The minimum MM intensity for landsliding is MM6, while the most common intensities for significant landsliding are MM7-8. The intensity threshold for soil liquefaction in New Zealand was found to be MM7 for sand boils, and MM8 for lateral spreading, although such effects may also occur at one intensity level lower in highly susceptible materials. The minimum magnitude for liquefaction phenomena in N.Z. is about M 6, compared to M 5 overseas where highly susceptible soils are probably more widespread. Revised environmental response criteria (landsliding, subsidence, liquefaction-induced sand boils and lateral spreading) have also been established for the New Zealand MM Intensity Scale, and provisional landslide susceptibility Ground Classes developed for assigning MM intensities in areas where there are few buildings. Other new data presented include a size/frequency distribution model for earthquake-induced landslides over the last 150 years and a preliminary EIL Opportunity model for N.Z. The application of EIL data and relationships for seismic hazard

  18. Demand surge following earthquakes (United States)

    Olsen, Anna H.


    Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

  19. Modeling earthquake dynamics (United States)

    Charpentier, Arthur; Durand, Marilou


    In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered through generalized regression. We use conditional models (magnitude given the previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence and to estimate the probability of occurrence of several earthquakes within a year or a decade.
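    The alternating conditional simulation described above can be sketched as follows. The link functions and parameter values here are purely illustrative assumptions for the sketch, not the fitted models from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical link functions (illustrative only): the Pareto tail index of
    # the next magnitude depends on the waiting time just drawn, and the Gamma
    # scale of the next waiting time depends on the magnitude just drawn.
    def tail_index(wait):
        return 1.5 + 0.1 * np.log1p(wait)

    def gamma_scale(mag, m0=5.0):
        return 10.0 * np.exp(-0.3 * (mag - m0))

    def simulate(n, m0=5.0):
        """Alternate the two conditional models: waiting time | previous
        magnitude, then magnitude | waiting time, above threshold m0."""
        mags, waits = [m0], []
        for _ in range(n):
            w = rng.gamma(shape=1.2, scale=gamma_scale(mags[-1]))
            # Pareto draw with scale m0 and tail index depending on w
            m = m0 * (1.0 - rng.uniform()) ** (-1.0 / tail_index(w))
            waits.append(w)
            mags.append(m)
        return np.array(waits), np.array(mags[1:])
    ```

    Repeating such simulations many times yields Monte Carlo estimates of, for example, the probability of several large events within a year or a decade.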

  20. The 2016 Central Italy Earthquake: an Overview (United States)

    Amato, A.


    The M6 central Italy earthquake occurred on the seismic backbone of Italy, in the middle of the highest-hazard belt. The shock hit suddenly during the night of August 24, when people were asleep; no foreshocks preceded the main event. The earthquake ruptured from 10 km depth to the surface and produced more than 17,000 aftershocks (as of Oct. 19) spread over a 40x20 km2 area elongated NW-SE. It is geologically very similar to previous recent events of the Apennines: both the 2009 L'Aquila earthquake to the south and the 1997 Colfiorito earthquake to the north were characterized by the activation of adjacent fault segments. Despite its moderate magnitude and the well-known seismic hazard of the region, the earthquake produced extensive damage and 297 fatalities. The town of Amatrice, which paid the highest toll, had been classified in zone 1 (the highest) since 1915, but the buildings in this and other villages proved highly vulnerable. In contrast, in the town of Norcia, which also experienced strong ground shaking, no collapses occurred, most likely thanks to the retrofitting carried out after an earthquake in 1979. Soon after the quake, the INGV Crisis Unit convened at night in the Rome headquarters to coordinate the activities. The first field teams reached the epicentral area at 7 a.m. with portable seismic stations to monitor the aftershocks; other teams followed to map surface faults and damage, measure GPS sites, install instruments for site-response studies, and so on. The INGV Crisis Unit includes the Press office and the INGVterremoti team, which manage and coordinate communication with the Civil Protection Dept. (DPC), the media and the web. Several tens of reports and updates were delivered to DPC in the first month of the sequence. Partly because of the controversial situation arising from the L'Aquila earthquake and trials, particular attention was given to communication: continuous and timely information has been released to

  1. A non-parametric method for automatic determination of P-wave and S-wave arrival times: application to local micro earthquakes (United States)

    Rawles, Christopher; Thurber, Clifford


    We present a simple, fast, and robust method for automatic detection of P- and S-wave arrivals using a nearest neighbours-based approach. The nearest neighbour algorithm is one of the most popular time-series classification methods in the data mining community and has been applied to time-series problems in many different domains. Specifically, our method is based on the non-parametric time-series classification method developed by Nikolov. Instead of building a model by estimating parameters from the data, the method uses the data itself to define the model. Potential phase arrivals are identified based on their similarity to a set of reference data consisting of positive and negative sets, where the positive set contains examples of analyst-identified P- or S-wave onsets and the negative set contains examples that do not contain P waves or S waves. Similarity is defined as the square of the Euclidean distance between vectors representing the scaled absolute values of the amplitudes of the observed signal and a given reference example in time windows of the same length. For both P waves and S waves, a single pass is made through the bandpassed data, producing a score function defined as the ratio of the sum of similarity to positive examples over the sum of similarity to negative examples for each window. A phase arrival is chosen as the centre position of the window that maximizes the score function. The method is tested on two local earthquake data sets, consisting of 98 known events from the Parkfield region in central California and 32 known events from the Alpine Fault region on the South Island of New Zealand. For P-wave picks, using a reference set containing two picks from the Parkfield data set, 98 per cent of Parkfield and 94 per cent of Alpine Fault picks are determined within 0.1 s of the analyst pick. For S-wave picks, 94 per cent and 91 per cent of picks are determined within 0.2 s of the analyst picks for the Parkfield and Alpine Fault data sets.
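    A minimal sketch of the scoring scheme follows. Taking similarity as the reciprocal of the squared Euclidean distance between scaled absolute-amplitude vectors is an assumption of this sketch (the abstract defines similarity from the squared distance but gives no explicit formula); the pick is the centre of the best-scoring window.

    ```python
    import numpy as np

    def window_score(window, positives, negatives):
        """Score one window: sum of similarities to positive reference examples
        divided by sum of similarities to negative ones."""
        w = np.abs(window)
        w = w / (w.max() + 1e-12)          # scale to unit peak amplitude
        def sim(refs):
            total = 0.0
            for r in refs:
                r = np.abs(r)
                r = r / (r.max() + 1e-12)
                # reciprocal squared-distance similarity (assumed form)
                total += 1.0 / (np.sum((w - r) ** 2) + 1e-12)
            return total
        return sim(positives) / sim(negatives)

    def pick(trace, positives, negatives, nwin):
        """Slide a window along the trace in a single pass; return the centre
        index of the window that maximizes the score function."""
        scores = [window_score(trace[i:i + nwin], positives, negatives)
                  for i in range(len(trace) - nwin + 1)]
        return int(np.argmax(scores)) + nwin // 2
    ```

    With a positive reference containing an onset and a negative reference of pre-event noise, the score peaks where the observed window straddles the arrival.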

  2. Aerodynamic configuration conceptual design for an ATLLAS-M6 analog transport aircraft

    Institute of Scientific and Technical Information of China (English)

    肖光明; 冯毅; 唐伟; 桂业伟


    Based on the aerodynamic configuration of the ATLLAS-M6 transport aircraft proposed by the German Aerospace Center (DLR), a parameterized model was built using the class function/shape function transformation (CST) method, and the main aerodynamic characteristics of the shape were computed, including the trim characteristics, static/dynamic stability, and the control efficiency of the control surfaces. The ATLLAS-M6 has a turbine-based combined cycle (TBCC) propulsion system, and its aerodynamic configuration has the following characteristics: a double delta wing with low aspect ratio, an axial vertical tail, a high-set horizontal tail, and a lifting body with high fineness ratio. The analysis indicated that the ATLLAS-M6 analog has a high hypersonic lift-to-drag ratio at the cruise state with a low trimming angle of attack. The hypersonic stability derivatives, predicted by the Dahlem-Buck and Prandtl-Meyer methods, showed that the transporter is statically and dynamically stable in both the lateral and directional directions; the aerodynamic performance thus essentially meets the design requirements of a hypersonic transport aircraft, and the proposed configuration is a feasible reference. Since the conceptual aerodynamic design of a hypersonic transport is a highly integrated project, further disciplines must be considered, such as structural weight and the thermal protection system, through multidisciplinary design optimization (MDO).

  3. Earthquake forecast enrichment scores

    Directory of Open Access Journals (Sweden)

    Christine Smyth


    Full Text Available The Collaboratory for the Study of Earthquake Predictability (CSEP is a global project aimed at testing earthquake forecast models in a fair environment. Various metrics are currently used to evaluate the submitted forecasts. However, the CSEP still lacks easily understandable metrics with which to rank the universal performance of the forecast models. In this research, we modify a well-known and respected metric from another statistical field, bioinformatics, to make it suitable for evaluating earthquake forecasts, such as those submitted to the CSEP initiative. The metric, originally called a gene-set enrichment score, is based on a Kolmogorov-Smirnov statistic. Our modified metric assesses if, over a certain time period, the forecast values at locations where earthquakes have occurred are significantly increased compared to the values for all locations where earthquakes did not occur. Permutation testing allows for a significance value to be placed upon the score. Unlike the metrics currently employed by the CSEP, the score places no assumption on the distribution of earthquake occurrence nor requires an arbitrary reference forecast. In this research, we apply the modified metric to simulated data and real forecast data to show it is a powerful and robust technique, capable of ranking competing earthquake forecasts.
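    A Kolmogorov-Smirnov-style enrichment score with permutation testing can be sketched as follows. The equal step weighting used here is an illustrative assumption; the published metric may weight the steps differently.

    ```python
    import numpy as np

    def enrichment_score(forecast, occurred):
        """Rank grid cells by forecast value (descending) and take the maximum
        deviation of a running sum that steps up at cells with earthquakes and
        down at cells without them (KS-style statistic)."""
        order = np.argsort(-forecast)
        hits = occurred[order].astype(float)
        n_hit, n_miss = hits.sum(), (1 - hits).sum()
        running = np.cumsum(hits / n_hit - (1 - hits) / n_miss)
        return running.max()

    def permutation_pvalue(forecast, occurred, n_perm=999, seed=0):
        """Significance by shuffling the occurrence labels over the cells."""
        rng = np.random.default_rng(seed)
        obs = enrichment_score(forecast, occurred)
        count = sum(
            enrichment_score(forecast, rng.permutation(occurred)) >= obs
            for _ in range(n_perm)
        )
        return (count + 1) / (n_perm + 1)
    ```

    A forecast that concentrates high values on the cells that later host earthquakes scores near 1 with a small permutation p-value; no reference forecast or occurrence distribution is assumed.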

  4. Phase Transformations and Earthquakes (United States)

    Green, H. W.


    Phase transformations have been cited as responsible for, or at least involved in, "deep" earthquakes for many decades (although the concept of "deep" has varied). In 1945, PW Bridgman laid out in detail the string of events/conditions that would have to be achieved for a solid/solid transformation to lead to a faulting instability, although he expressed pessimism that the full set of requirements would be simultaneously achieved in nature. Raleigh and Paterson (1965) demonstrated faulting during dehydration of serpentine under stress and suggested dehydration embrittlement as the cause of intermediate-depth earthquakes. Griggs and Baker (1969) produced a thermal runaway model of a shear zone under constant stress, culminating in melting, and proposed such a runaway as the origin of deep earthquakes. The discovery of Plate Tectonics in the late 1960s established the conditions (subduction) under which Bridgman's requirements for earthquake runaway in a polymorphic transformation could be met in nature, and Green and Burnley (1989) found such an instability during the transformation of metastable olivine to spinel. Recent seismic correlation of intermediate-depth-earthquake hypocenters with predicted conditions of dehydration of antigorite serpentine, and the discovery of metastable olivine in 4 subduction zones, suggest strongly that dehydration embrittlement and transformation-induced faulting are the underlying mechanisms of intermediate and deep earthquakes, respectively. The results of recent high-speed friction experiments and analysis of natural fault zones suggest that similar processes likely occur commonly during many shallow earthquakes after initiation by frictional failure.

  5. Evaluating the role of large earthquakes on aquifer dynamics using data fusion and knowledge discovery techniques (United States)

    Friedel, Michael; Cox, Simon; Williams, Charles; Holden, Caroline


    Artificial adaptive systems are evaluated for their usefulness in modeling the earthquake hydrology of the Canterbury region, NZ. For example, an unsupervised machine-learning technique, the self-organizing map, is used to fuse about 200 disparate and sparse data variables (such as well pressure response, ground acceleration, intensity, shaking, stress and strain, and aquifer and well characteristics) associated with the M7.1 Darfield earthquake in 2010 and the M6.3 Christchurch earthquake in 2011. The strength of correlations, determined using cross-component plots, varied between earthquakes, with pressure changes more strongly related to dynamic- than static-stress-related variables during the M7.1 earthquake, and vice versa during the M6.3. The method highlights the importance of data distribution and shows that the driving mechanisms of earthquake-induced pressure change in the aquifers are not straightforward to interpret. In many cases, data mining revealed that confusion and reduction in correlations are associated with multiple trends in the same plot: one for confined and one for unconfined earthquake response. The autocontractive map and minimum spanning tree techniques are used for grouping variables of similar influence on earthquake hydrology. K-means clustering of the neural information identified 5 primary regions influenced by the two earthquakes. The application of genetic doping to a genetic algorithm is used for identifying optimal subsets of variables in formulating predictions of well pressures. Predictions of well pressure changes are compared and contrasted using machine-learning network and symbolic regression models, with prediction uncertainty quantified using a leave-one-out cross-validation strategy. These preliminary results provide impetus for subsequent analysis with information from another 100 earthquakes that occurred across the South Island.
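    The clustering step can be sketched with a minimal Lloyd's-algorithm k-means; the actual study clusters the neural information of a self-organizing map, so running plain k-means directly on fused variable vectors is a simplifying assumption of this sketch.

    ```python
    import numpy as np

    def kmeans(X, k, n_iter=50, seed=0):
        """Minimal k-means (Lloyd's algorithm): group samples (e.g. grid cells
        of fused hydrological variables) into k regions of similar response."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # assign each sample to its nearest centre
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # move each centre to the mean of its assigned samples
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers
    ```

    With k = 5, the cluster labels would partition the region into the kind of primary response zones the abstract describes.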

  6. Earthquake Disaster Management and Insurance

    Institute of Scientific and Technical Information of China (English)


    As one of the most powerful tools to reduce earthquake losses, Earthquake Disaster Management (EDM) and Earthquake Insurance (EI) have been highlighted and have made great progress in many countries in recent years. Earthquake disaster management includes a series of activities, such as earthquake hazard and risk analysis, vulnerability analysis of buildings and infrastructure, earthquake-awareness training, and building the emergency response system. EI has been included in EDM after this practice has been...

  7. Earthquakes and emergence (United States)

    Earthquakes and emerging infections may not have a direct cause and effect relationship like tax evasion and jail, but new evidence suggests that there may be a link between the two human health hazards. Various media accounts have cited a massive 1993 earthquake in Maharashtra as a potential catalyst of the recent outbreak of plague in India that has claimed more than 50 lives and alarmed the world. The hypothesis is that the earthquake may have uprooted underground rat populations that carry the fleas infected with the bacterium that causes bubonic plague and can lead to the pneumonic form of the disease that is spread through the air.

  8. Earthquake engineering in Peru (United States)

    Vargas, N.J


    During the last decade, earthquake engineering research in Peru has been carried out at the Catholic University of Peru and at the Universidad Nacional de Ingeniería (UNI). The Geophysical Institute (IGP), under the auspices of the Organization of American States (OAS), has initiated other efforts in Peru in regional seismic hazard assessment programs with direct impact on the earthquake engineering program. Further details on these programs have been reported by L. Ocola in the Earthquake Information Bulletin, January-February 1982, vol. 14, no. 1, pp. 33-38.

  9. Coseismic and postseismic deformation due to the 2007 M5.5 Ghazaband fault earthquake, Balochistan, Pakistan (United States)

    Fattahi, H.; Amelung, F.; Chaussard, E.; Wdowinski, S.


    Time series analysis of interferometric synthetic aperture radar data reveals coseismic and postseismic surface displacements associated with the 2007 M5.5 earthquake along the southern Ghazaband fault, a major but little-studied fault in Pakistan. Modeling indicates that the coseismic surface deformation was caused by ~9 cm of strike-slip displacement along a shallow subvertical fault. The earthquake was followed by at least 1 year of afterslip, releasing ~70% of the moment of the main event, equivalent to a M5.4 earthquake. This high ratio of aseismic to seismic moment release is consistent with previous observations for moderate earthquakes (M < 6) and suggests that smaller earthquakes are associated with a higher ratio of aseismic to seismic moment release than larger earthquakes.
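    The stated equivalence (about 70% of a M5.5 mainshock's moment releasing as afterslip corresponds to a M5.4 event) can be checked with the standard Hanks-Kanamori moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m:

    ```python
    import math

    def moment_from_mw(mw):
        """Seismic moment (N·m) from moment magnitude (Hanks & Kanamori, 1979)."""
        return 10 ** (1.5 * mw + 9.1)

    def mw_from_moment(m0):
        """Moment magnitude from seismic moment in N·m."""
        return (math.log10(m0) - 9.1) / 1.5

    m0_main = moment_from_mw(5.5)                 # mainshock moment
    mw_afterslip = mw_from_moment(0.7 * m0_main)  # 70% released aseismically
    # mw_afterslip ≈ 5.40, matching the abstract's equivalence
    ```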

  10. A statistical feature of anomalous seismic activities prior to large shallow earthquakes in Japan revealed by the Pattern Informatics method

    Directory of Open Access Journals (Sweden)

    M. Kawamura


    For revealing the preparatory processes of large inland earthquakes, we systematically applied the Pattern Informatics (PI) method to the earthquake data of Japan. We focused on 12 large earthquakes with magnitudes larger than M = 6.4 (the official magnitude of the Japan Meteorological Agency) that occurred at depths shallower than 30 km between 2000 and 2010. We examined the relation between the spatiotemporal locations of such large shallow earthquakes and those of PI hotspots, which correspond to the grid cells of anomalous seismic activities in a designated time span. Based on a statistical test using Molchan's error diagram, we inquired into the existence of precursory anomalous seismic activities before the large earthquakes and, if any, their characteristic time span. The test indicated that Japanese M ≥ 6.4 inland earthquakes tend to be preceded by anomalous seismic activities on 8-to-10-yr time scales.

  11. Earthquake probabilities in the San Francisco Bay Region: 2000 to 2030 - a summary of findings (United States)



    The San Francisco Bay region sits astride a dangerous “earthquake machine,” the tectonic boundary between the Pacific and North American Plates. The region has experienced major and destructive earthquakes in 1838, 1868, 1906, and 1989, and future large earthquakes are a certainty. The ability to prepare for large earthquakes is critical to saving lives and reducing damage to property and infrastructure. An increased understanding of the timing, size, location, and effects of these likely earthquakes is a necessary component in any effective program of preparedness. This study reports on the probabilities of occurrence of major earthquakes in the San Francisco Bay region (SFBR) for the three decades 2000 to 2030. The SFBR extends from Healdsburg on the northwest to Salinas on the southeast and encloses the entire metropolitan area, including its most rapidly expanding urban and suburban areas. In this study a “major” earthquake is defined as one with M≥6.7 (where M is moment magnitude). As experience from the Northridge, California (M6.7, 1994) and Kobe, Japan (M6.9, 1995) earthquakes has shown us, earthquakes of this size can have a disastrous impact on the social and economic fabric of densely urbanized areas. To reevaluate the probability of large earthquakes striking the SFBR, the U.S. Geological Survey solicited data, interpretations, and analyses from dozens of scientists representing a wide cross-section of the Earth-science community (Appendix A). The primary approach of this new Working Group (WG99) was to develop a comprehensive, regional model for the long-term occurrence of earthquakes, founded on geologic and geophysical observations and constrained by plate tectonics. The model considers a broad range of observations and their possible interpretations. Using this model, we estimate the rates of occurrence of earthquakes and 30-year earthquake probabilities. Our study considers a range of magnitudes for earthquakes on the major faults in the
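    The simplest way a long-term rate converts to a 30-year probability is through a time-independent Poisson model; this is only one ingredient of such studies (working groups of this kind also use time-dependent renewal models), and the rate below is a hypothetical illustration, not a WG99 estimate.

    ```python
    import math

    def poisson_prob(rate_per_year, years):
        """Probability of at least one event in `years`, given a long-term
        Poisson (time-independent) occurrence rate."""
        return 1.0 - math.exp(-rate_per_year * years)

    # Hypothetical long-term rate of one M>=6.7 event per 43 years (illustrative)
    p30 = poisson_prob(1.0 / 43.0, 30.0)
    ```

    Under this assumed rate the 30-year probability comes out near one half; time-dependent models shift the answer according to the elapsed time since the last rupture.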

  12. Generalized Free-Surface Effect and Random Vibration Theory: a new tool for computing moment magnitudes of small earthquakes using borehole data (United States)

    Malagnini, Luca; Dreger, Douglas S.


    Although optimal, computing the moment tensor solution is not always a viable option for the calculation of the size of an earthquake, especially for small events (say, below Mw 2.0). Here we show an alternative approach to the calculation of the moment-rate spectra of small earthquakes, and thus of their scalar moments, that uses a network-based calibration of crustal wave propagation. The method works best when applied to a relatively small crustal volume containing both the seismic sources and the recording sites. In this study we present the calibration of the crustal volume monitored by the High-Resolution Seismic Network (HRSN), along the San Andreas Fault (SAF) at Parkfield. After the quantification of the attenuation parameters within the crustal volume under investigation, we proceed to the spectral correction of the observed Fourier amplitude spectra for the 100 largest events in our data set. Multiple estimates of seismic moment for all events (1811 events total) are obtained by calculating the ratio of rms-averaged spectral quantities based on the peak values of the ground velocity in the time domain, as they are observed in narrowband-filtered time-series. The mathematical operations allowing the described spectral ratios are obtained from Random Vibration Theory (RVT). Due to the optimal conditions of the HRSN, in terms of signal-to-noise ratios, our network-based calibration allows the accurate calculation of seismic moments down to Mw < 0. However, because the HRSN is equipped only with borehole instruments, we define a frequency-dependent Generalized Free-Surface Effect (GFSE), to be used instead of the usual free-surface constant F = 2. Our spectral corrections at Parkfield need a different GFSE for each side of the SAF, which can be quantified by means of the analysis of synthetic seismograms.
The importance of the GFSE for borehole instruments increases with decreasing earthquake size, because for smaller earthquakes the bandwidth available

  13. Characterization of the radiation environment at the UNLV accelerator facility during operation of the Varian M6 linac (United States)

    Hodges, M.; Barzilov, A.; Chen, Y.; Lowe, D.


    The bremsstrahlung photon flux from the UNLV particle accelerator (Varian M6 model) was determined using MCNP5 code for 3 MeV and 6 MeV incident electrons. Human biological equivalent dose rates due to accelerator operation were evaluated using the photon flux with the flux-to-dose conversion factors. Dose rates were computed for the accelerator facility for M6 linac use under different operating conditions. The results showed that the use of collimators and linac internal shielding significantly reduced the dose rates throughout the facility. It was shown that the walls of the facility, in addition to the earthen berm enveloping the building, provide equivalent shielding to reduce dose rates outside to below the 2 mrem/h limit.

  14. Earthquake Counting Method for Spatially Localized Probabilities: Challenges in Real-Time Information Delivery

    CERN Document Server

    Holliday, James R; Rundle, John B; Turcotte, Donald L


    We develop and implement a new type of global earthquake forecast. Our forecast is a perturbation on a smoothed seismicity (Relative Intensity) spatial forecast combined with a temporal time-averaged (Poisson) forecast. A variety of statistical and fault-system models have been discussed for use in computing forecast probabilities. Our paper takes a new approach. The idea is based on the observation that GR statistics characterize seismicity for all space and time. Small magnitude event counts (quake counts) are used as markers for the approach of large events. More specifically, if the GR b-value = 1, then for every 1000 M>3 earthquakes, one expects 1 M>6 earthquake. So if ~1000 M>3 events have occurred in a spatial region since the last M>6 earthquake, another M>6 earthquake should be expected soon. In physics, event count models have been called natural time models, since counts of small events represent a physical or natural time scale characterizing the system dynamics. In a previous paper, we used condi...
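    The counting rule can be sketched directly: with b = 1, the Gutenberg-Richter relation gives 10^(6-3) = 1000 M>3 events per M>6 event, and the natural-time clock is simply the count of small events since the last large one (thresholds are those quoted in the abstract).

    ```python
    def expected_small_per_large(b=1.0, small=3.0, large=6.0):
        """Gutenberg-Richter: N(>m) is proportional to 10**(-b*m), so the
        expected number of M>small events per M>large event is
        10**(b*(large - small))."""
        return 10 ** (b * (large - small))

    def events_since_last_large(mags, small=3.0, large=6.0):
        """Natural-time clock: count M>small events since the most recent
        M>large event in a chronologically ordered catalog."""
        count = 0
        for m in mags:
            if m > large:
                count = 0      # a large event resets the clock
            elif m > small:
                count += 1
        return count
    ```

    When the count in a region approaches the expected ~1000, that region is flagged as relatively likely to host the next large event.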

  15. Tweet Earthquake Dispatch (TED) (United States)

    U.S. Geological Survey, Department of the Interior — The USGS is offering earthquake alerts via two twitter accounts: @USGSted and @USGSBigQuakes. On average, @USGSted and @USGSBigQuakes will produce about one tweet...

  16. 1988 Spitak Earthquake Database (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  17. Earthquake Damage to Schools (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This set of slides graphically illustrates the potential danger that major earthquakes pose to school structures and to the children and adults who happen to be...

  18. Study of earthquakes using a borehole seismic network at Koyna, India (United States)

    Gupta, Harsh; Satyanarayana, Hari VS; Shashidhar, Dodla; Mallika, Kothamasu; Ranjan Mahato, Chitta; Shankar Maity, Bhavani


    Koyna, located near the west coast of India, is a classical site of artificial water reservoir triggered earthquakes. Triggered earthquakes started soon after the impoundment of the Koyna Dam in 1962. The activity has continued to the present, including the largest triggered earthquake of M 6.3 in 1967, 22 earthquakes of M ≥ 5, and several thousand smaller earthquakes. The latest significant earthquake of ML 3.7 occurred on 24th November 2016. In spite of a network of 23 broadband 3-component seismic stations in the near vicinity of the Koyna earthquake zone, locations of earthquakes had errors of 1 km. The main reason was the presence of the 1 km thick, very heterogeneous Deccan Traps cover, which introduced noise, so locations could not be improved. To improve the accuracy of earthquake locations, a unique network of eight borehole seismic stations surrounding the seismicity was designed. Six of these were installed at depths varying from 981 m to 1522 m during 2015 and 2016, well below the Deccan Traps cover. During 2016 a total of 2100 earthquakes were located. There has been a significant improvement in the location of earthquakes, and the absolute errors of location have come down to ± 300 m. All earthquakes of ML ≥ 0.5 are now located, compared to ML ≥ 1.0 earlier. Based on seismicity and logistics, a block of 2 km x 2 km area has been chosen for the 3 km deep pilot borehole. The installation of the borehole seismic network has further elucidated the correspondence between the rate of water loading/unloading of the reservoir and triggered seismicity.

  19. Injection-induced earthquakes. (United States)

    Ellsworth, William L


    Earthquakes in unusual locations have become an important topic of discussion in both North America and Europe, owing to the concern that industrial activity could cause damaging earthquakes. It has long been understood that earthquakes can be induced by impoundment of reservoirs, surface and underground mining, withdrawal of fluids and gas from the subsurface, and injection of fluids into underground formations. Injection-induced earthquakes have, in particular, become a focus of discussion as the application of hydraulic fracturing to tight shale formations is enabling the production of oil and gas from previously unproductive formations. Earthquakes can be induced as part of the process to stimulate the production from tight shale formations, or by disposal of wastewater associated with stimulation and production. Here, I review recent seismic activity that may be associated with industrial activity, with a focus on the disposal of wastewater by injection in deep wells; assess the scientific understanding of induced earthquakes; and discuss the key scientific challenges to be met for assessing this hazard.

  1. Charles Darwin's earthquake reports (United States)

    Galiev, Shamil


    As it is the 200th anniversary of Darwin's birth, 2009 has also been marked as 170 years since the publication of his book Journal of Researches. During the voyage Darwin landed at Valdivia and Concepcion, Chile, just before, during, and after a great earthquake, which demolished hundreds of buildings, killing and injuring many people. Land was waved, lifted, and cracked; volcanoes awoke and giant ocean waves attacked the coast. Darwin was the first geologist to observe and describe the effects of a great earthquake during and immediately after it. These effects are sometimes repeated during severe earthquakes; but great earthquakes, like Chile 1835, and giant earthquakes, like Chile 1960, are rare and remain completely unpredictable. This is one of the few areas of science where experts remain largely in the dark. Darwin suggested that the effects were a result of ‘…the rending of strata, at a point not very deep below the surface of the earth…' and ‘…when the crust yields to the tension, caused by its gradual elevation, there is a jar at the moment of rupture, and a greater movement...'. Darwin formulated big ideas about the evolution of the earth and its dynamics. These ideas set the tone for the plate tectonic theory to come. However, plate tectonics does not completely explain why earthquakes occur within plates. Darwin emphasised that there are different kinds of earthquakes: ‘...I confine the foregoing observations to the earthquakes on the coast of South America, or to similar ones, which seem generally to have been accompanied by elevation of the land. But, as we know that subsidence has gone on in other quarters of the world, fissures must there have been formed, and therefore earthquakes...' (we cite Darwin's sentences following researchspace.auckland.ac.nz/handle/2292/4474). These thoughts agree with the results of recent publications (see Nature 461, 870-872; 636-639 and 462, 42-43; 87-89). About 200 years ago Darwin gave oneself airs by the

  2. Recent damaging earthquakes in Japan, 2003-2008 (United States)

    Kayen, Robert E


    During the last six years, from 2003 to 2008, Japan was struck by three significant and damaging earthquakes: the M6.6 Niigata Chuetsu Oki earthquake of July 16, 2007, off the coast of Kashiwazaki City; the M6.6 Niigata Chuetsu earthquake of October 23, 2004, located in Niigata Prefecture in the central Uonuma Hills; and the M8.0 Tokachi Oki earthquake of September 26, 2003, affecting southeastern Hokkaido Prefecture. These earthquakes stand out among many in a very active period of seismicity in Japan: within the upper 100 km of the crust during this period, Japan experienced 472 earthquakes of magnitude 6 or greater. Both Niigata events affected the south-central region of Tohoku, Japan, and the Tokachi Oki earthquake affected a broad region of the continental shelf and slope southeast of the island of Hokkaido. This report is synthesized from the work of scores of Japanese and US researchers who led and participated in post-earthquake reconnaissance of these earthquakes; their noteworthy and valuable contributions are listed in an extended acknowledgements section at the end of the paper. During the Niigata Chuetsu Oki event of 2007, damage to the Kashiwazaki-Kariwa nuclear power plant, structures, infrastructure, and ground was primarily the product of two factors: (1) high-intensity motions from this moderate-sized shallow event, and (2) soft, poor-performing, or liquefiable soils in the coastal region of southwestern Niigata Prefecture. Structural and geotechnical damage along the slopes of dunes was ubiquitous in the Kashiwazaki-Kariwa region. The 2004 Niigata Chuetsu earthquake was the most significant to affect Japan since the 1995 Kobe earthquake. Forty people were killed, almost 3,000 were injured, and many hundreds of landslides destroyed entire upland villages. Landslides were of all types; some dammed streams, temporarily creating lakes that threatened to overtop their new embankments and cause flash floods and mudslides.
The numerous

  3. A New Tomato Hybrid 'Jinpeng M6' with Resistance to Meloidogyne incognita

    Institute of Scientific and Technical Information of China (English)

    李晓东; 郑丽芳; 王建人; 巩振辉; 蔡义勇; 李永宁; 任向辉


    'Jinpeng M6' is a new indeterminate-growth, pink-fruited tomato hybrid developed by crossing 'M6' × '13B'. The hybrid grows vigorously and has sparse foliage. Fruits are high-round in shape, and young fruits have no green shoulder. Mature fruits are pink, with a smooth peel, uniform colour and high glossiness. The fruit has thick flesh and good firmness, stores and ships well, and has a long shelf life. The hybrid is highly resistant to Meloidogyne incognita and Tomato mosaic virus (ToMV), moderately resistant to Cucumber mosaic virus (CMV), and resistant to fusarium wilt and leaf mould. It is suitable for cultivation in early spring, over winter, and as a full autumn-winter-spring crop under greenhouse and plastic-tunnel conditions. It is early ripening: in early-spring plantings the early yield can reach 52 500 kg·hm-2, and the total yield in spring cultivation is generally 127 500-150 000 kg·hm-2.

  4. Research on the Accurate Location of the 2007 Ms 6.4 Ning'er, Yunnan Earthquake

    Institute of Scientific and Technical Information of China (English)

    Lu Xian; Zhou Longquan


    Five mobile digital seismic stations were set up by the Earthquake Administration of Yunnan Province near the epicenter of the main shock after the Ning'er M6.4 earthquake on June 3, 2007. In this paper, the aftershock sequence of the Ning'er M6.4 earthquake is relocated using the double-difference earthquake location method, with data from the 5 mobile digital seismic stations and the permanent Simao seismic station. The results show that the aftershock sequence is 40 km long and 30 km wide, concentrated obviously at the lateral displacement area between the Pu'er fault and the NNE-trending faults, with the majority occurring on the Pu'er fault around the main shock. Aftershock depths range from 2 km to 12 km, with a predominant distribution at depths of 8-10 km and a mean depth of 7.9 km. The profile parallel to the aftershock sequence reveals that the seismic fault dips to the northwest, identical to the dip of the secondary fault of the NE-trending Menglian-Mojiang fault in the earthquake area. More earthquakes are concentrated in the northwest segment than in the southeast segment, which is perhaps related to the underground medium and faults. The depth profile of the earthquake sequence shows that the relocated earthquakes are mainly located near the Pu'er fault and that the seismic faults dip to the southwest, consistent with the dip of the west branch of the Pu'er fault. Overall, the fault strike revealed by the earthquake relocations matches well with the strike in the focal mechanism solutions. The main shock is at the top of the aftershock sequence and the aftershocks are symmetrically distributed, showing that faulting was complete in both the NE and SW directions.
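    The double-difference method used above relocates events by minimizing the residual between observed and predicted differential travel times for pairs of nearby earthquakes recorded at a common station. A minimal sketch of that residual, with hypothetical arrival times (the numbers are illustrative, not the study's data):

```python
def double_difference(t_obs_i, t_obs_j, t_calc_i, t_calc_j):
    """Double-difference residual for one event pair at one station:
    dr = (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j).
    Relocation iteratively perturbs hypocenters to shrink these residuals."""
    return (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)

# Hypothetical P-wave arrival times (s) for two aftershocks at one station
dr = double_difference(t_obs_i=4.82, t_obs_j=4.65,
                       t_calc_i=4.80, t_calc_j=4.70)
print(round(dr, 2))  # 0.07
```

Because the two ray paths to a common station are nearly identical for closely spaced events, most of the velocity-model error cancels in the differences, which is why the method sharpens relative locations.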

  5. The Need for More Earthquake Science in Southeast Asia (United States)

    Sieh, K.


    Many regions within SE Asia have as great a density of active seismic structures as does the western US - Sumatra, Myanmar, Bangladesh, New Guinea and the Philippines come first to mind. Much of Earth's release of seismic energy in the current millennium has, in fact, come from these regions, with great losses of life and livelihoods. Unfortunately, the scientific progress upon which seismic-risk reduction in SE Asia ultimately depends has been and continues to be slow. Last year at AGU, for example, I counted 57 talks about the M6 Napa earthquake. In contrast, I can't recall hearing any talk on a SE Asian M6 earthquake at any venue in the past many years. In fact, even M7+ earthquakes often go unstudied. Not uncommonly, the region's earthquake scientists face high financial and political impediments to conducting earthquake research. Their slow speed in the development of scientific knowledge doesn't bode well for speedy progress in the science of seismic hazards, the sine qua non for substantially reducing seismic risk. There are two basic necessities for the region to evolve significantly from the current state of affairs. Both involve the development of regional infrastructure: 1) Data: Robust and accessible geophysical monitoring systems would need to be installed, maintained and utilized by the region's earth scientists and their results shared internationally. Concomitantly, geological mapping (sensu lato) would need to be undertaken. 2) People: The training, employment, and enduring support of a new, young, international corps of earth scientists would need to accelerate markedly. The United States could play an important role in achieving the goal of significant seismic risk reduction in the most seismically active countries of SE Asia by taking the lead in establishing a coalition to robustly fund a multi-decadal program that supports scientists and their research institutions to work alongside local expertise.

  6. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria (United States)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.


    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, using macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, using past damage observations in the country. The ground-motion prediction relationship of Benouar (1994) proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquakes in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli events. The calculated return periods of the losses for client market portfolio align with the
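    Rebuilding-cost factors of the kind quoted above turn a damage-grade distribution into an expected loss. A sketch under stated assumptions: the cost factors are the ones in the abstract, but the grade probabilities below are invented purely for illustration, not taken from the Algeria model.

```python
# EMS-98 rebuilding-cost factors quoted in the abstract (grades 1-5)
COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def mean_damage_ratio(grade_probs):
    """Expected repair cost as a fraction of the full rebuilding cost,
    given probabilities of reaching each EMS-98 damage grade."""
    return sum(p * COST_FACTOR[g] for g, p in grade_probs.items())

# Hypothetical grade distribution for one building class in one intensity zone
probs = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.15, 5: 0.10}
print(round(mean_damage_ratio(probs), 4))  # 0.3625
```

Multiplying such a ratio by the insured replacement value of the exposed stock gives the aggregate loss that validation scenarios compare against observed losses.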

  7. Estimates of aseismic slip associated with small earthquakes near San Juan Bautista, CA (United States)

    Hawthorne, J. C.; Simons, M.; Ampuero, J.-P.


    Postseismic slip observed after large (M > 6) earthquakes typically has an equivalent moment of a few tens of percent of the coseismic moment. Some observations of the recurrence intervals of repeating earthquakes suggest that postseismic slip following small (M≲4) earthquakes could be much larger—up to 10 or 100 times the coseismic moment. We use borehole strain data from U.S. Geological Survey strainmeter SJT to analyze deformation in the days before and after 1000 1.9 < M < 5 earthquakes near San Juan Bautista, CA. We find that on average, postseismic strain is roughly equal in magnitude to coseismic strain for the magnitude range considered, suggesting that postseismic moment following these small earthquakes is roughly equal to coseismic moment. This postseismic to coseismic moment ratio is larger than typically observed in earthquakes that rupture through the seismogenic zone but is much smaller than was hypothesized from modeling repeating earthquakes. Our results are consistent with a simple, self-similar model of earthquakes.

  8. Slip Distribution of Two Recent Large Earthquakes in the Guerrero Segment of the Mexican Subduction Zone, and Their Relation to Previous Earthquakes, Silent Slip Events and Seismic Gaps (United States)

    Hjorleifsdottir, V.; Ji, C.; Iglesias, A.; Cruz-Atienza, V. M.; Singh, S. K.


    In 2012 and 2014 mega-thrust earthquakes occurred approximately 300 km apart, in the state of Guerrero, Mexico. The westernmost half of the segment between them has not had a large earthquake in at least 100 years, and most of the easternmost half last broke in 1957. However, down dip of both earthquakes, silent slip events have been reported, as well as in the gap between them (Kostoglodov et al 2003, Graham 2014). There are indications that the westernmost half has different frictional properties than the areas surrounding it. However, the two events at the edges of the zone also seem to behave in different manners, indicating a broad range of frictional properties in this area, with changes occurring over short distances. The 2012/03/20, M7.5 earthquake occurred near the Guerrero-Oaxaca border, between the towns of Ometepec (Gro.) and Pinotepa Nacional (Oax.). This earthquake is noteworthy for breaking the same asperities as two previously recorded earthquakes, the M7.2 1937 and M6.9 1982(a) earthquakes, in very large "repeating earthquakes". Furthermore, the density of repeating smaller events is larger in this zone than in other parts of the subduction zone (Dominguez et al, submitted), and this earthquake has had unusually many aftershocks for its size (UNAM Seis. group, 2013). The 2012 event may have broken two asperities (UNAM Seis. group, 2013). How the two asperities relate to the previous, relatively smaller "large events", to the repeating earthquakes, to the high number of aftershocks and to the slow slip event is not clear. The 2014/04/18 M7.2 earthquake broke a patch on the edge of the Guerrero gap that previously broke in the 1979 M7.4 earthquake as well as the 1943 M7.4 earthquake. This earthquake, despite being smaller, had a much longer duration, few aftershocks, and clearly ruptured two separate patches (UNAM Seis. group 2015). In this work we estimate the slip distributions for the 2012 and 2014 earthquakes, by combining the data used separately in

  9. The M7 October 21, 1868 Hayward Earthquake, Northern California-140 Years Later (United States)

    Brocher, T. M.; Boatwright, J.; Lienkaemper, J. J.; Schwartz, D. P.; Garcia, S.


    October 21, 2008 marks the 140th anniversary of the M7 1868 Hayward earthquake. This large earthquake, which occurred slightly before 8 AM, caused extensive damage to the San Francisco Bay Area and remains the nation's 12th most lethal earthquake. Property loss was extensive and about 30 people were killed. This earthquake culminated a decade-long series of earthquakes in the Bay Area, which started with an M~6 earthquake in the southern Peninsula in 1856, followed by a series of four M5.8 to M6.1 earthquakes along the northern Calaveras fault, and ended with an M~6.5 earthquake in the Santa Cruz Mountains in 1865. Despite this flurry of quakes, the shaking from the 1868 earthquake was the strongest that the new towns and growing cities of the Bay Area had ever experienced. The effect on the brick buildings of the time was devastating: walls collapsed in San Francisco, Oakland, and San Jose, and buildings cracked as far away as Napa, Santa Rosa, and Hollister. The area that was strongly shaken (at Modified Mercalli Intensity VII or higher) encompassed about 2,300 km2. Aftershocks continued into November 1868. Surface cracking of the ground along the southern end of the Hayward Fault was traced from Warm Springs in Fremont northward 32 km to San Leandro. As Lawson (1908) reports, "the evidence to the northward of San Leandro is not very satisfactory. The country was then unsettled, and the information consisted of reports of cowboys riding on the range". Analysis of historical triangulation data suggests that the fault moved as far north as Berkeley, and from these data the average slip along the fault is inferred to be about 1.9 ± 0.4 meters. The paleoseismic record from the southern end of the Hayward Fault provides evidence for 10 earthquakes before 1868. The average interval between these earthquakes is 170 ± 80 years, but the last five earthquakes have had an average interval of only 140 ± 50 years. The 1868 Hayward earthquake and more recent analogs such

  10. Inter-Disciplinary Validation of Pre Earthquake Signals. Case Study for Major Earthquakes in Asia (2004-2010) and for 2011 Tohoku Earthquake (United States)

    Ouzounov, D.; Pulinets, S.; Hattori, K.; Liu, J.-Y.; Yang. T. Y.; Parrot, M.; Kafatos, M.; Taylor, P.


    We carried out multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several physical and environmental parameters which we found associated with the earthquake process: thermal infrared radiation, temperature and concentration of electrons in the ionosphere, radon/ion activities, and air temperature/humidity in the atmosphere. We used satellite and ground observations and interpreted them with the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model, one of the possible paradigms we study and support. We made two independent continuous hind-cast investigations in Taiwan and Japan for a total of 102 earthquakes (M>6) occurring from 2004 to 2011. We analyzed: (1) ionospheric electromagnetic radiation, plasma and energetic electron measurements from DEMETER; (2) emitted outgoing longwave radiation (OLR) from NOAA/AVHRR and NASA/EOS; (3) radon/ion variations (in situ data); and (4) GPS Total Electron Content (TEC) measurements collected from space- and ground-based observations. This joint analysis of ground and satellite data has shown that one to six (or more) days prior to the largest earthquakes there were anomalies in all of the analyzed physical observations. For the latest March 11, 2011 Tohoku earthquake, our analysis again shows the same relationship between several independent observations characterizing the lithosphere/atmosphere coupling. On March 7th we found a rapid increase of emitted infrared radiation observed from satellite data, and subsequently an anomaly developed near the epicenter. The GPS/TEC data indicated an increase and variation in electron density reaching a maximum value on March 8. Beginning from this day we confirmed an abnormal TEC variation over the epicenter in the lower ionosphere. These findings revealed the existence of atmospheric and ionospheric phenomena occurring prior to the 2011 Tohoku earthquake, which indicated new evidence of a distinct

  11. From Multi-Sensors Observations Towards Cross-Disciplinary Study of Pre-Earthquake Signals. What have We Learned from the Tohoku Earthquake? (United States)

    Ouzounov, D.; Pulinets, S.; Papadopoulos, G.; Kunitsyn, V.; Nesterov, I.; Hayakawa, M.; Mogi, K.; Hattori, K.; Kafatos, M.; Taylor, P.


    The main focus of this presentation is the lessons we have learned from the Great Tohoku EQ (Japan, 2011) and how this knowledge will affect our future observation and analysis. We present multi-sensor observations and multidisciplinary research in our investigation of phenomena preceding major earthquakes. These observations revealed the existence of atmospheric and ionospheric phenomena occurring prior to the M9.0 Tohoku earthquake of March 11, 2011, which indicate new evidence of a distinct coupling between the lithosphere and atmosphere/ionosphere, as related to underlying tectonic activity. Similar results have been reported before the catastrophic events in Chile (M8.8, 2010), Italy (M6.3, 2009) and Sumatra (M9.3, 2004). For the Tohoku earthquake, our analysis shows a synergy between several independent observations characterizing the state of the lithosphere/atmosphere coupling several days before the onset of the earthquake, namely: (i) foreshock sequence change (rate, space and time); (ii) Outgoing Longwave Radiation (OLR) measured at the top of the atmosphere; and (iii) anomalous variations of ionospheric parameters revealed by multi-sensor observations. We present a cross-disciplinary analysis of the observed pre-earthquake anomalies and discuss current research in the detection of these signals in Japan. We expect that our analysis will shed light on the underlying physics of pre-earthquake signals associated with some of the largest earthquake events

  12. Classification of M~7 earthquakes in Tokyo Metropolitan area since 1885 - The 1921 Ibaraki-ken Nambu and 1922 Uraga channel earthquakes (United States)

    Ishibe, T.; Satake, K.; Shimazaki, K.; Murotani, S.; Nishiyama, A.


    S-P times, focal mechanism solutions from initial motions, and seismic intensity distributions show that the 1921 Ibaraki-ken Nambu earthquake (M7.0) and the 1922 Uraga channel earthquake (M6.8) both occurred within the subducting Philippine Sea plate beneath the Tokyo Metropolitan area. The Tokyo Metropolitan area is situated in a tectonically complex region: the Philippine Sea plate (PHS) subducts from the south, while the Pacific plate (PAC) subducts from the east below PHS. As a result, various types of earthquakes occur in this region. They are classified into shallow crustal earthquakes; intraplate (slab) earthquakes within PHS and within PAC; and interplate earthquakes between the continental plate and PHS, and between PHS and PAC. The probability of large earthquakes with magnitude (M)~7 is high; the Earthquake Research Committee calculated the probability of occurrence during the next 30 years as 70%, based on the fact that five M~7 earthquakes (the 1894 Meiji Tokyo, 1895 and 1921 Ibaraki-ken Nambu, 1922 Uraga Channel, and 1987 Chiba-ken Toho-oki earthquakes) have occurred since 1885. However, the types of these earthquakes, except for the 1987 event, are not well known due to the low quality of data. It is important to classify these earthquakes into the above-described intraplate or interplate categories. The Ibaraki-ken Nambu earthquake occurred on 8 December 1921 and caused damage such as fissures in roads and toppled gravestones, especially in the northwestern Chiba and southwestern Ibaraki prefectures. The focal depth was estimated to be around 55 km using S-P times from old seismograms and JMA reports, suggesting that this earthquake was probably a slab earthquake within PHS. The seismic intensity distribution supports this result; the seismic intensity anomalies characterizing PAC slab earthquakes are not recognized.
Furthermore, initial motion focal mechanisms using HASH algorithm (Hardebeck and Shearer, 2002) are strike-slip types, even if the uncertainty of hypocenter locations
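    Focal depths such as the ~55 km estimate above are constrained by S-P times: for a uniform medium and straight rays, the hypocentral distance is d = t_sp · Vp·Vs/(Vp − Vs). A rough sketch; the velocity values below are assumed for illustration, not taken from this study.

```python
# Assumed P- and S-wave speeds (km/s); real studies use a layered model
VP, VS = 8.0, 4.5

def hypocentral_distance(t_sp):
    """Distance (km) implied by an S-P time (s) under the straight-ray,
    uniform-velocity approximation: d = t_sp * Vp*Vs / (Vp - Vs)."""
    return t_sp * VP * VS / (VP - VS)

# A hypothetical S-P time of 5.3 s gives a distance of roughly 55 km
print(round(hypocentral_distance(5.3), 1))
```

For a station nearly above the source, this distance approximates the focal depth, which is how sparse historical S-P readings can still discriminate a ~55 km slab event from a shallow crustal one.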

  13. High Performance Liquid Chromatography of Propellants. Part 1. Analysis of M1, M6, and M10 Propellants (United States)


    High performance liquid chromatography permits differentiation among the stabilizers and their degradation products, together with accurate quantitation. This progress report describes work carried out in the analysis of single-base propellants containing diphenylamine (DPA) as the stabilizer. Several degradation products have been identified, and the routine determination of these compounds is feasible. The degradation of DPA seems to follow a pattern that is unique to M1 and M6 propellants as compared to the pattern for M10. It is postulated

  14. Earthquake number forecasts testing (United States)

    Kagan, Yan Y.


    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. 
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
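    The overdispersion argument can be illustrated with simulated counts: Poisson counts have variance equal to the mean, while a gamma-Poisson mixture (one construction of the negative-binomial distribution) has variance mean + mean²/r. The sketch below uses synthetic data with arbitrary parameter values, not the GCMT or PDE catalogues.

```python
import math
import random
import statistics

random.seed(7)

def poisson(lam):
    """Knuth's multiplicative Poisson sampler (stdlib has none)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def neg_binomial(r, mean):
    """Gamma-Poisson mixture: overdispersed counts with the given mean."""
    return poisson(random.gammavariate(r, mean / r))

pois = [poisson(10) for _ in range(20000)]
nbd = [neg_binomial(2.0, 10) for _ in range(20000)]

for name, x in [("Poisson", pois), ("NBD", nbd)]:
    print(f"{name}: mean={statistics.mean(x):.1f} "
          f"variance={statistics.variance(x):.1f}")
# For Poisson, variance ~ mean (~10); for NBD with r=2,
# variance ~ mean + mean^2/r (~60), i.e. strongly overdispersed.
```

A variance-to-mean ratio well above 1 in catalogue counts is the same diagnostic that leads the paper to reject the Poisson law in favour of the NBD.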

  15. Earthquake impact scale (United States)

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.


    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and the response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lead to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake-resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures.
Useful alerts should
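    The dual alert criteria described above reduce to a small threshold lookup. The thresholds below are exactly those quoted in the abstract; the function name and interface are illustrative, not PAGER's actual implementation.

```python
def impact_alert(fatalities=0, losses_usd=0.0):
    """Map estimated fatalities or losses to an EIS alert colour.
    Thresholds from the abstract: fatalities 1/100/1,000 and losses
    $1M/$100M/$1B for yellow/orange/red; either criterion can trigger."""
    levels = [("red", 1000, 1e9), ("orange", 100, 1e8), ("yellow", 1, 1e6)]
    for colour, fat_min, loss_min in levels:
        if fatalities >= fat_min or losses_usd >= loss_min:
            return colour
    return "green"

print(impact_alert(losses_usd=5e8))   # orange (loss-based, domestic-style)
print(impact_alert(fatalities=3))     # yellow (casualty-based, global-style)
```

Taking the more severe of the two criteria mirrors the paper's rationale: a low-fatality but financially costly event and a low-loss but lethal event should both escalate the alert.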

  16. Earthquake and Geothermal Energy

    CERN Document Server

    Kapoor, Surya Prakash


    The origin of earthquakes has long been recognized as resulting from strike-slip instability of plate tectonics along fault lines. Several earthquakes around the globe, however, cannot be explained by this theory. In this work we investigated earthquake data along with other observed facts, such as heat-flow profiles, for the Indian subcontinent. In our studies we found a high-quality correlation between earthquake events, seismically prone zones, heat-flow regions and geothermal hot springs. As a consequence, we propose a hypothesis which can adequately explain all the earthquake events around the globe as well as the overall geodynamics. It is basically geothermal power which makes the plates stand still, strike and slip. The plates are merely a working solid, while the driving force is geothermal energy. The violent flow and enormous pressure of this power shake the earth along the plate boundaries and also trigger intra-plate seismicity. In the light o...

  17. Rupture, waves and earthquakes (United States)

    UENISHI, Koji


    Normally, an earthquake is considered as a phenomenon of wave energy radiation by rupture (fracture) of solid Earth. However, the physics of dynamic process around seismic sources, which may play a crucial role in the occurrence of earthquakes and generation of strong waves, has not been fully understood yet. Instead, much of former investigation in seismology evaluated earthquake characteristics in terms of kinematics that does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components over 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but “extraordinary” phenomena that are difficult to be explained by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake, and more detailed study on rupture and wave dynamics, namely, possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) wave interaction that connects rupture (1) and failures (2), would be indispensable. PMID:28077808

  18. Earthquake engineering in China

    Institute of Scientific and Technical Information of China (English)



    The development of earthquake engineering in China can be described in three stages. The initial stage, in the 1950s-1960s, was marked by the creation of this branch of science: earthquake engineering was specified as a branch item in the first national 12-year plan of science and technology, and IEM was one participant. The first earthquake zonation map and the first seismic design code were soon completed and used in engineering design, and site effects on structural design and site selection were seriously studied. The second stage was marked by the occurrence of quite a few strong earthquakes in China, from which many lessons were learned; corresponding considerations were specified in the design codes and followed in construction practice. The third stage is one of disaster management, marked by a series of government documents, led by the national Law of the People's Republic of China on Protecting Against and Mitigating Earthquake Disasters, adopted at a meeting of the Standing Committee of the National People's Congress in 1997, and followed by provincial and municipal laws enforcing the actions outlined in the national law. It may be expected that our society will be much safer against the attack of future strong earthquakes, with smaller losses. Lastly, possible future developments are also discussed.


  20. Recurrence Statistics of Great Earthquakes

    CERN Document Server

    Ben-Naim, E; Johnson, P A


    We investigate the sequence of great earthquakes over the past century. To examine whether the earthquake record includes temporal clustering, we identify aftershocks and remove those from the record. We focus on the recurrence time, defined as the time between two consecutive earthquakes. We study the variance in the recurrence time and the maximal recurrence time. Using these quantities, we compare the earthquake record with sequences of random events, generated by numerical simulations, while systematically varying the minimal earthquake magnitude Mmin. Our analysis shows that the earthquake record is consistent with a random process for magnitude thresholds 7.0<=Mmin<=8.3, where the number of events is larger. Interestingly, the earthquake record deviates from a random process at magnitude threshold 8.4<=Mmin<= 8.5, where the number of events is smaller; however, this deviation is not strong enough to conclude that great earthquakes are clustered. Overall, the findings are robust both qualitat...
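    The comparison with random sequences can be sketched as a Monte Carlo null test: place the same number of events uniformly at random in the observation span and ask how often the simulated recurrence-time variance matches or exceeds the observed one. The "observed" catalogue below is itself simulated, standing in for a real declustered record.

```python
import random
import statistics

random.seed(1)

def recurrence_stats(times):
    """Variance and maximum of inter-event (recurrence) times."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return statistics.variance(gaps), max(gaps)

def random_catalogue(n_events, span_years):
    """n events placed uniformly at random in the span (Poisson-like null)."""
    return sorted(random.uniform(0, span_years) for _ in range(n_events))

# Hypothetical declustered record: 25 great earthquakes in 100 years
observed = random_catalogue(25, 100)  # stand-in for a real catalogue
obs_var, obs_max = recurrence_stats(observed)

# Null distribution from 1000 simulated random catalogues
sims = [recurrence_stats(random_catalogue(25, 100))[0] for _ in range(1000)]
frac = sum(v >= obs_var for v in sims) / len(sims)
print(f"fraction of random catalogues with variance >= observed: {frac:.2f}")
```

A fraction near 0 or 1 would flag the record as inconsistent with the random null (clustered or quasi-periodic, respectively); intermediate values, as the paper finds for most magnitude thresholds, are consistent with a random process.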

  1. Earthquake Damage to Transportation Systems (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Earthquakes represent one of the most destructive natural hazards known to man. A serious result of large-magnitude earthquakes is the disruption of transportation...

  2. Earthquakes, March-April 1989 (United States)

    Person, W.J.


    The first major earthquake (7.0-7.9) of the year hit Mexico on April 25, killing three people and causing some damage. Earthquake-related deaths were also reported from Malawi, China, and New Britain. 

  3. Early earthquakes of the Americas

    Institute of Scientific and Technical Information of China (English)

    Niu Zhijun


    In recent decades the science of seismology, in particular the study of individual earthquakes, has expanded dramatically. A seismologist can look for evidence of past earthquakes in the material remains that have been excavated by archaeologists.

  4. AthMethPre: a web server for the prediction and query of mRNA m(6)A sites in Arabidopsis thaliana. (United States)

    Xiang, Shunian; Yan, Zhangming; Liu, Ke; Zhang, Yaou; Sun, Zhirong


    N(6)-Methyladenosine (m(6)A) is the most prevalent and abundant modification in mRNA and has been linked to many key biological processes. High-throughput experiments have generated m(6)A peaks across the transcriptome of A. thaliana, but the specific methylated sites were not assigned, which impedes the understanding of m(6)A functions in plants. Computational prediction of mRNA m(6)A sites therefore becomes urgently important. Here, we present a method to predict the m(6)A sites for A. thaliana mRNA sequence(s). To predict the m(6)A sites of an mRNA sequence, we employed a support vector machine to build a classifier using features of the positional flanking nucleotide sequence and the position-independent k-mer nucleotide spectrum. Our method achieved good performance and was applied to a web server to provide a service for the prediction of A. thaliana m(6)A sites. The server also provides a comprehensive database of predicted transcriptome-wide m(6)A sites and curated m(6)A-seq peaks from the literature for query and visualization. The AthMethPre web server is the first web server that provides a user-friendly tool for the prediction and query of A. thaliana mRNA m(6)A sites, which is freely accessible for public use at .
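    A position-independent k-mer nucleotide spectrum of the kind used as an SVM feature can be sketched as follows. The window length and sequence below are hypothetical, and the actual AthMethPre feature set also includes positional flanking-nucleotide features.

```python
from itertools import product

def kmer_spectrum(seq, k=2):
    """Normalized position-independent k-mer counts over the RNA alphabet;
    the kind of fixed-length feature vector an SVM classifier can consume."""
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[km] / total for km in kmers]

# Hypothetical 21-nt window centred on a candidate adenosine
window = "GGACUGGACUAGGACUGGACU"
vec = kmer_spectrum(window, k=2)
print(len(vec))  # 16 dimensions for k=2 over {A, C, G, U}
```

Because every window maps to the same 4^k-dimensional vector regardless of its length, spectra from different candidate sites can be stacked directly into an SVM training matrix.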

  5. Understanding Earthquake Hazard & Disaster in Himalaya - A Perspective on Earthquake Forecast in Himalayan Region of South Central Tibet (United States)

Shanker, D.; Paudyal; Singh, H.


characterized by an extremely high annual earthquake frequency compared with the preceding normal and following gap episodes, and the characteristics of the events in such an episode are causally related to the magnitude and time of occurrence of the forthcoming earthquake. It is observed here that the shorter the preparatory time period, the smaller the mainshock, and vice versa. Western Nepal and the adjoining Tibet region have potential for future medium-size earthquakes. Accordingly, it is estimated here that an earthquake of M 6.5 ± 0.5 may occur at any time from now until December 2011 in Western Nepal, within an area bounded by 29.3°-30.5° N and 81.2°-81.9° E, in the focal depth range of 10-30 km.

  6. Australia: historical earthquake studies

    Directory of Open Access Journals (Sweden)

    K. McCue


    Full Text Available Historical studies of earthquakes in Australia using information dating back to 1788 have been comprehensive, if not exhaustive. Newspapers have been the main source of historical earthquake studies. A brief review is given here with an introduction to the pre-European aboriginal dreamtime information. Some of the anecdotal information of the last two centuries has been compiled as isoseismal maps. Relationships between isoseismal radii and magnitude have been established using post-instrumental data allowing magnitudes to be assigned to the pre-instrumental data, which can then be incorporated into the national earthquake database. The studies have contributed to hazard analyses for the building codes and stimulated research into microzonation and paleo-seismology.

  7. Organizational changes at Earthquakes & Volcanoes (United States)

    Gordon, David W.


Primary responsibility for the preparation of Earthquakes & Volcanoes within the Geological Survey has shifted from the Office of Scientific Publications to the Office of Earthquakes, Volcanoes, and Engineering (OEVE). As a consequence of this reorganization, Henry Spall has stepped down as Science Editor for Earthquakes & Volcanoes (E&V).

  8. Sensing the earthquake (United States)

    Bichisao, Marta; Stallone, Angela


Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of scientific content by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what principles underlie that organization. The identified set of rules is then implemented as the basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" earthquakes. We try to implement these results in a choreographic model, with the aim of converting earthquake sound into a visual dance system that could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience in a multisensory exploration of the earthquake phenomenon, through stimulation of hearing, eyesight, and the perception of movement (the neuromotor system). In essence, the main goal of this work is to develop a method for the simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  9. Synchronization of atmospheric indicators at the last stage of earthquake preparation cycle

    Directory of Open Access Journals (Sweden)

    Sergey A. Pulinets


Full Text Available We consider the dynamics of different parameters in the atmospheric boundary layer and in low-level cloud structure around the time of three recent moderate and strong earthquakes: the M 5.8 Virginia earthquake of August 23, 2011, in the USA; the M 7.1 Van earthquake of October 23, 2011, in Turkey; and the M 6.4 Northwestern Iran earthquake of August 11, 2012. Using as indicators the water vapor chemical potential correction value, the aerosol optical thickness, and the appearance of linear cloud structures, we find that they become coherent in space and time within 3-5 days before the seismic shock. The results are interpreted as a synergistic effect of the lithosphere-atmosphere-ionosphere coupling process.

  10. Daily earthquake forecasts during the May-June 2012 Emilia earthquake sequence (northern Italy

    Directory of Open Access Journals (Sweden)

    Warner Marzocchi


Full Text Available On May 20, 2012, at 02:03 UTC, a magnitude Ml 5.9 earthquake hit part of the Po Plain area (latitude 44.89˚N, longitude 11.23˚E) close to the village of Finale-Emilia in the Emilia-Romagna region (northern Italy). This caused a number of human losses and significant economic damage to buildings, and to local farms and industry. This earthquake was preceded by an increase in seismicity the day before, with the largest shock of Ml 4.1 at 23:13 UTC (latitude 44.90˚N, longitude 11.26˚E). It was then followed by six other Ml 5.0 or greater events in the following weeks. The largest of these six earthquakes occurred on May 29, 2012, at 07:00 UTC (Ml 5.8), and was located 12 km southwest of the May 20, 2012, main event (latitude 44.85˚N, longitude 11.09˚E), resulting in the collapse of many buildings that had already been weakened, a greater number of victims, and most of the economic damage (see Figure 1). This sequence took place in one of the Italian regions considered to be at small-to-moderate seismic hazard [Gruppo di Lavoro MPS 2004]. Earthquakes of the M6 class have occurred in this zone in the past [Gruppo di Lavoro CPTI 2004], but with a much smaller time frequency than in the most seismically hazardous parts of Italy. […

  11. Gas and Dust Phenomena of Mega-earthquakes and the Cause (United States)

    Yue, Z.


A mega-earthquake suddenly releases a large to extremely large amount of kinetic energy within a few tens to two hundred seconds and over distances of tens to hundreds of kilometers in the Earth's crust and on the ground surface. It also generates seismic waves that can be received globally, and co-seismic ground damage such as co-seismic ruptures and landslides. However, such vast, dramatic and devastating kinetic actions in the Earth's crustal rocks and ground soils cannot be known or predicted weeks, days, hours, or minutes before they happen. Although seismologists can develop and use seismometers to report the locations and magnitudes of earthquakes within minutes of their occurrence, they cannot at present predict earthquakes. Therefore, damaging earthquakes have caused, and will continue to cause, huge disasters, fatalities and injuries. This problem may indicate that it is necessary to re-examine the cause of mega-earthquakes beyond the conventional explanation of elastic rebound on active faults. In the last ten years, many mega-earthquakes occurred in China and around the Pacific Ocean, causing many casualties and devastating damage to the environment. The author will give a brief review of the impacts of the mega-earthquakes of recent years. He will then present many gas- and dust-related phenomena associated with the sudden occurrence of these mega-earthquakes. They include the 2001 Kunlunshan Earthquake M8.1, the 2008 Wenchuan Earthquake M8.0 and the 2010 Yushu Earthquake M7.1 in China, as well as the 2010 Haiti Earthquake M7.0, the 2010 Mexicali Earthquake M7.2, the 2010 Chile Earthquake M8.8, the 2011 Christchurch Earthquake M6.3 and the 2011 Japan Earthquake M9.0 around the Pacific Ocean. He will discuss the cause of these gas- and dust-related phenomena.
He will use these phenomena and their common cause to show that the earthquakes were caused by the rapid migration and expansion of highly compressed and

  12. Photodynamic activities of silicon phthalocyanines against achromic M6 melanoma cells and healthy human melanocytes and keratinocytes. (United States)

    Decreau, R; Richard, M J; Verrando, P; Chanon, M; Julliard, M


    Dichlorosilicon phthalocyanine (Cl2SiPc) and bis(tri-n-hexylsiloxy) silicon phthalocyanine (HexSiPc) have been evaluated in vitro as potential photosensitizers for photodynamic therapy (PDT) against the human amelanotic melanoma cell line M6. Each photosensitizer is dissolved in a solvent-PBS mixture, or entrapped in egg-yolk lecithin liposomes or in Cremophor EL micelles. The cells are incubated for 1 h with the sensitizer and then irradiated for 20 min, 1 h or 2 h (lambda > 480 nm, 10 mW cm-2). The photocytotoxic effect is dependent on the photosensitizer concentration and the light dose. Higher phototoxicity is observed after an irradiation of 2 h: treatment with a solution of photosensitizer (2 x 10(-9) M) leads to 10% (HexSiPc in egg-yolk lecithin liposomes) or 20% (Cl2SiPc in DMF-PBS solution) cell viability. After 1 h incubation and 20 min of light exposure, the photodynamic effect is connected with the type of delivery system used. For HexSiPc, lower cell viability is found when this photosensitizer is entrapped in egg-yolk lecithin instead of solvent-PBS or for Cremophor EL micelles with Cl2SiPc. Liposome-delivered HexSiPc leads to lipid damage in M6 cells, illustrated by an increase of thiobarbituric acid-reacting substances (TBARs), but the change is not significant with Cremophor EL. The same is observed for the antioxidative defences after photodynamic stress. The cells irradiated with HexSiPc entrapped in liposomes display an increase of superoxide dismutase (SOD) activity and a decrease of glutathione (GSH) level, glutathione peroxidase (GSHPx) and catalase (Cat) activities.

  13. The 4 January 2016 Manipur earthquake in the Indo-Burmese wedge, an intra-slab event

    Directory of Open Access Journals (Sweden)

    V. K. Gahalaut


Full Text Available Earthquakes in the Indo-Burmese wedge occur due to India-Sunda plate motion. These earthquakes generally occur at depths between 25 and 150 km and define an eastward, gently dipping seismicity surface that coincides with the Indian slab. Although this feature mimics a subduction zone, the relative motion of the Indian plate is predominantly towards the north, and earthquake focal mechanisms suggest that these earthquakes are of intra-slab type, occurring on steep planes within the Indian plate. The relative motion between the India and Sunda plates is accommodated at the Churachandpur-Mao Fault (CMF) and the Sagaing Fault. The 4 January 2016 Manipur earthquake (M 6.7) is one such earthquake, which occurred 20 km west of the CMF at ∼60 km depth. Fortunately, this earthquake occurred in a very sparsely populated region with traditional wooden-frame houses, and hence the damage it caused in the source region was minimal. However, in the neighbouring Imphal valley it caused some damage to buildings and the loss of eight lives. The damage in the Imphal valley from this and historical earthquakes in the region emphasizes the role of local site effects in the valley.

  14. Indonesian Earthquake Decision Support System

    CERN Document Server

    Warnars, Spits


An earthquake DSS is an information technology environment that governments can use to make earthquake mitigation decisions sharper, faster and better. An earthquake DSS can be delivered as e-government, serving not only the government itself but also guaranteeing each citizen's right to education, training and information about earthquakes and how to cope with them. Knowledge can be managed for future use, and mined, by saving and maintaining all data and information about earthquakes and earthquake mitigation in Indonesia. Using Web technology will enhance global access and ease of use. A data warehouse, as a denormalized database for multidimensional analysis, will speed up queries and increase the variety of reports. Linking with other disaster DSSs in one national disaster DSS, and with other governmental and international information systems, will enhance the knowledge base and sharpen the reports.

  15. Episodic tremor triggers small earthquakes (United States)

    Balcerak, Ernie


    It has been suggested that episodic tremor and slip (ETS), the weak shaking not associated with measurable earthquakes, could trigger nearby earthquakes. However, this had not been confirmed until recently. Vidale et al. monitored seismicity in the 4-month period around a 16-day episode of episodic tremor and slip in March 2010 in the Cascadia region. They observed five small earthquakes within the subducting slab during the ETS episode. They found that the timing and locations of earthquakes near the tremor suggest that the tremor and earthquakes are related. Furthermore, they observed that the rate of earthquakes across the area was several times higher within 2 days of tremor activity than at other times, adding to evidence of a connection between tremor and earthquakes. (Geochemistry, Geophysics, Geosystems, doi:10.1029/2011GC003559, 2011)

  16. ALMA measures Calama earthquake (United States)

    Brito, R.; Shillue, B.


    On 4 March 2010, the ALMA system response to an extraordinarily large disturbance was measured when a magnitude 6.3 earthquake struck near Calama, Chile, relatively close to the ALMA site. Figures 1 through 4 demonstrate the remarkable performance of the ALMA system to a huge disturbance that was more than 100 times the specification for correction accuracy.

  17. Road Damage Following Earthquake (United States)


Ground shaking triggered liquefaction in a subsurface layer of water-saturated sand, producing differential lateral and vertical movement in an overlying carapace of unliquefied sand and silt, which moved from right to left towards the Pajaro River. This mode of ground failure, termed lateral spreading, was a principal cause of liquefaction-related damage in the Oct. 17, 1989, Loma Prieta earthquake. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions, such as earthquakes, or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grained materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: S.D. Ellen, U.S. Geological Survey

  18. Dynamic Source Inversion of Intermediate Depth Earthquakes in Mexico (United States)

Mirwald, Aron; Cruz-Atienza, Victor Manuel; Singh, Shri Krishna


The source mechanisms of earthquakes at intermediate depth (50-300 km) are still under debate. Because of the high confining pressure at depths below 50 km, rocks ought to deform by ductile flow rather than the brittle failure that originates most earthquakes. Several source mechanisms have been proposed, but conclusive evidence has been found for none of them. One of the two viable mechanisms is dehydration embrittlement, in which liberation of water lowers the effective pressure and enables brittle fracture. The other is thermal runaway, a highly localized ductile deformation (Prieto et al., Tecto., 2012). In the Mexican subduction zone, intermediate-depth earthquakes represent a real hazard in central Mexico because of their proximity to highly populated areas and the large accelerations they induce in ground motion (Iglesias et al., BSSA, 2002). To improve our understanding of these rupture processes, we use a recently introduced inversion method (Diaz-Mojica et al., JGR, 2014) to analyze several intermediate-depth earthquakes in Mexico. The method inverts strong-motion seismograms to determine the dynamic source parameters with a genetic algorithm. It has been successfully used for the M6.5 Zumpango earthquake, which occurred at a depth of 62 km in the state of Guerrero, Mexico. For this event, high radiated energy, low radiation efficiency and low rupture velocity were determined, indicating a highly dissipative rupture process and suggesting that thermal runaway could be the dominant source process. In this work we improved the inversion method by introducing a theoretical treatment of the nucleation process that minimizes the effects of rupture initiation and guarantees self-sustained rupture propagation (Galis et al., GJInt., 2014). Preliminary results indicate that intermediate-depth earthquakes in central Mexico may vary in their rupture process. For instance, for a M5.9 normal-faulting earthquake at 55 km depth that produced very

  19. The HayWired earthquake scenario—Earthquake hazards (United States)

    Detweiler, Shane T.; Wein, Anne M.


The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude-6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  20. Hazus® estimated annualized earthquake losses for the United States (United States)

    Jaiswal, Kishor; Bausch, Doug; Rozelle, Jesse; Holub, John; McGowan, Sean


Large earthquakes can cause social and economic disruption unprecedented for any given community, and full recovery from these impacts may not always be achievable. In the United States (U.S.), the 1994 M6.7 Northridge earthquake in California remains the third costliest disaster in U.S. history, and it was one of the most expensive disasters for the federal government. Internationally, earthquakes in the last decade alone have claimed tens of thousands of lives and caused hundreds of billions of dollars of economic impact throughout the globe (~90 billion U.S. dollars (USD) from the 2008 M7.9 Wenchuan, China, earthquake; ~20 billion USD from the 2010 M8.8 Maule earthquake in Chile; ~220 billion USD from the 2011 M9.0 Tohoku, Japan, earthquake; ~25 billion USD from the 2011 M6.3 Christchurch, New Zealand, earthquake; and ~22 billion USD from the 2016 M7.0 Kumamoto, Japan, earthquake). Recent earthquakes show a pattern of steadily increasing damages and losses that are primarily due to three key factors: (1) significant growth in earthquake-prone urban areas, (2) vulnerability of the older building stock, including poorly engineered non-ductile concrete buildings, and (3) increased interdependency in supply and demand for businesses that operate among different parts of the world. In the United States, earthquake risk continues to grow with increased exposure of population and development, even though the earthquake hazard has remained relatively stable except in regions of induced seismic activity. Understanding the seismic hazard requires studying earthquake characteristics and the locales in which they occur, while understanding the risk requires assessing the potential damage from earthquake shaking to the built environment and to the welfare of people, especially in high-risk areas. Estimating the varying degree of earthquake risk throughout the United States is critical for informed decision-making on mitigation policies, priorities, strategies, and funding levels in the

  1. Fault-based Earthquake Rupture Forecasts for Western Gulf of Corinth, Greece (United States)

    Ganas, A.; Parsons, T.; Segkou, M.


The western Gulf of Corinth has not experienced a strong earthquake since 1995 (the Ms=6.2 Aigion event of 15 June 1995; Bernard et al., 1997), although the Gulf is extending fast (over 12 mm/yr of N-S extension from continuous GPS data spanning 9+ years) and its seismic history since 1769 includes twelve (12) shallow events with M>6.0. We undertook an analysis of rupture forecasts along the active faults in this area of central Greece, using the most up-to-date datasets (active fault maps, fault geometry, fault slip rates, trenching data on past earthquakes, historical and instrumental seismicity, strain) and models of the earthquake budget extrapolated from observed seismicity, magnitude-frequency distributions, and calculated earthquake rates versus magnitude for individual faults. We present a unified rupture forecast model that comprises a time-independent (Poisson-process) earthquake rate model and a time-dependent earthquake-probability model based on recent earthquake rates and stress-renewal statistics conditioned on the date of the last event. The resulting rupture rate maps may be used to update building codes and promote mitigation efforts.
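The time-independent component of such a forecast reduces to a Poisson occurrence probability derived from an annual event rate. A minimal sketch, using an illustrative recurrence rate rather than any value from the study:

```python
import math

def poisson_probability(annual_rate, t_years):
    """Probability of at least one event in t_years under a Poisson model."""
    return 1.0 - math.exp(-annual_rate * t_years)

# Illustrative: a fault producing one M > 6.0 event per ~250 years on average
p_30yr = poisson_probability(1.0 / 250.0, 30.0)  # 30-year occurrence probability
```

A time-dependent (renewal) model would instead condition this probability on the elapsed time since the last event.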

  2. Seismomagnetic models for earthquakes in the eastern part of Izu Peninsula, Central Japan

    Directory of Open Access Journals (Sweden)

    Y. Ishikawa


Full Text Available Seismomagnetic changes accompanying four damaging earthquakes are explained by the piezomagnetic effect observed in the eastern part of Izu Peninsula, Central Japan. Most of the data were obtained by repeat surveys. Although these data suffered from electric railway noise, significant magnetic changes were detected at points close to earthquake faults. Coseismic changes are well interpreted by piezomagnetic models in the case of the 1978 Near Izu-Oshima (M 7.0) and the 1980 East Off Izu Peninsula (M 6.7) earthquakes. A large total-intensity change of up to 5 nT was observed at a survey point almost directly above the epicenter of the 1976 Kawazu (M 5.4) earthquake. This change is not explained by a single-fault model; a 2-segment fault is suggested. Remarkable precursory and coseismic changes in the total force intensity were observed at KWZ station along with the 1978 Higashi-Izu (M 4.9) earthquake. KWZ station is located very close to a buried subsidiary fault of the M 7.0 Near Izu-Oshima earthquake, which moved aseismically at the time of the M 7.0 quake. The precursory magnetic change before the M 4.9 quake is ascribed to aseismic faulting of this buried fault, while the coseismic rebound is ascribed to enlargement of the slipping surface at the time of the M 4.9 quake. This implies that we observed the formation process of the earthquake nucleation zone via the magnetic field.

  3. Lithospheric flexure under the Hawaiian volcanic load: Internal stresses and a broken plate revealed by earthquakes (United States)

    Klein, Fred W.


    Several lines of earthquake evidence indicate that the lithospheric plate is broken under the load of the island of Hawai`i, where the geometry of the lithosphere is circular with a central depression. The plate bends concave downward surrounding a stress-free hole, rather than bending concave upward as with past assumptions. Earthquake focal mechanisms show that the center of load stress and the weak hole is between the summits of Mauna Loa and Mauna Kea where the load is greatest. The earthquake gap at 21 km depth coincides with the predicted neutral plane of flexure where horizontal stress changes sign. Focal mechanism P axes below the neutral plane display a striking radial pattern pointing to the stress center. Earthquakes above the neutral plane in the north part of the island have opposite stress patterns; T axes tend to be radial. The M6.2 Honomu and M6.7 Kiholo main shocks (both at 39 km depth) are below the neutral plane and show radial compression, and the M6.0 Kiholo aftershock above the neutral plane has tangential compression. Earthquakes deeper than 20 km define a donut of seismicity around the stress center where flexural bending is a maximum. The hole is interpreted as the soft center where the lithospheric plate is broken. Kilauea's deep conduit is seismically active because it is in the ring of maximum bending. A simplified two-dimensional stress model for a bending slab with a load at one end yields stress orientations that agree with earthquake stress axes and radial P axes below the neutral plane. A previous inversion of deep Hawaiian focal mechanisms found a circular solution around the stress center that agrees with the model. For horizontal faults, the shear stress within the bending slab matches the slip in the deep Kilauea seismic zone and enhances outward slip of active flanks.

  4. Statistical Monitoring of the Seismic Activities before and after the Kumamoto Earthquakes (United States)

    Ogata, Y.; Kumazawa, T.; Tsuruoka, H.


It is expected that the probability gain of a large earthquake in an aftershock region or its vicinity is elevated by the presence of relative quiescence in the seismicity sequence (Ogata 2001). We first analyzed the seismicity in the Kumamoto region since 2010, before the occurrence of the M6.5 first foreshock. Although the ETAS model fits the seismicity well in most subregions of the Kumamoto District, anomalous swarm activity is observed in the subregions to the north of the bending part of the focal faults on which the M6.5 and M6.4 foreshocks and the M7.3 main shock successively occurred. This anomalous swarm activity is characterized by the nonstationary ETAS model. We then applied the ETAS model to the aftershock sequence of the M6.5 event and of a few other major earthquakes preceding the Kumamoto main shock, and revealed relative quiescence. It is also seen that the M6.5 aftershocks migrated deeper and closer to the M7.3 hypocenter. We further applied the ETAS model, the non-stationary ETAS model (Kumazawa and Ogata 2013), and a model estimating b-value changes to the sequence spanning the M6.5 foreshock sequence and the M7.3 aftershocks. Moreover, we examined regionally different aftershock activities between the main and off-fault zones. In particular, the aftershock productivity parameter K0(t) is high during the foreshocks and decreased after the M7.3 main shock, whereas the background seismicity rate μ(t) stays constant through the entire period. The b-values show stepwise increases at the major M6.5, M6.4, and M7.3 events.
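The ETAS conditional intensity fitted in studies like this is the background rate plus Omori-type contributions from all past events. A sketch with illustrative parameter values (not the fitted Kumamoto values), and a schematic event list standing in for the M6.5/M6.4 foreshocks and M7.3 main shock:

```python
import math

def etas_rate(t, events, mu=0.2, K0=0.05, alpha=1.5, c=0.01, p=1.1, m_ref=3.0):
    """ETAS conditional intensity lambda(t):
    mu + sum over past events of K0 * exp(alpha*(M_i - m_ref)) / (t - t_i + c)**p.
    """
    rate = mu
    for t_i, m_i in events:
        if t_i < t:  # only events before time t contribute
            rate += K0 * math.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
    return rate

# (time in days, magnitude): schematic foreshock-main shock sequence
events = [(0.0, 6.5), (1.2, 6.4), (2.0, 7.3)]
r = etas_rate(3.0, events)  # intensity one day after the main shock
```

Relative quiescence is diagnosed when observed rates fall significantly below this predicted intensity; the non-stationary variant lets mu and K0 vary with time.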

  5. Italian Case Studies Modelling Complex Earthquake Sources In PSHA (United States)

    Gee, Robin; Peruzza, Laura; Pagani, Marco


This study presents two examples of modelling complex seismic sources in Italy, carried out in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred on Collalto Stoccaggio, a natural gas storage facility in Northern Italy located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motion prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study in which we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of October 26th and 30th (M6.1 and M6.6 respectively, no deaths). The aftershock hazard is modelled using a fault source with complex geometry, based on literature data and field evidence associated with the August mainshock. Earthquake activity rates during the very first weeks after the deadly earthquake were used to calibrate an Omori-Utsu decay curve, and the magnitude distribution of aftershocks is assumed to follow a Gutenberg-Richter distribution. We apply uniform and non-uniform spatial distributions of the seismicity across the fault source, modulating the rates as a decreasing function of distance from the mainshock. The hazard results are computed for short exposure periods (1 month, before the occurrence of the October earthquakes) and compared to the background hazard given by law (MPS04), and to observations at some reference sites.
We also show the results of
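An Omori-Utsu decay curve combined with a Gutenberg-Richter magnitude distribution yields expected aftershock counts above a target magnitude, the basic ingredient of an APSHA rate model. The parameter values below are illustrative, not the calibrated Amatrice values:

```python
def omori_utsu_rate(t, K=100.0, c=0.05, p=1.1):
    """Aftershock rate (events/day) t days after the mainshock."""
    return K / (t + c) ** p

def gr_fraction_above(m, b=1.0, m_min=3.0):
    """Gutenberg-Richter: fraction of catalogued aftershocks with magnitude >= m."""
    return 10.0 ** (-b * (m - m_min))

# Expected number of M >= 5 aftershocks in the first 30 days (rectangle sum,
# 0.1-day steps), for the illustrative parameters above
n_total = sum(omori_utsu_rate(d / 10.0) * 0.1 for d in range(1, 301))
n_m5 = n_total * gr_fraction_above(5.0)
```

Spatial non-uniformity would be added by multiplying these rates by a kernel decreasing with distance from the mainshock.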

  6. Listening to Earthquakes with Infrasound (United States)

    Mucek, A. E.; Langston, C. A.


A tripartite infrasound array was installed to listen to earthquakes occurring along the Guy-Greenbrier fault in Arkansas. The active earthquake swarm is believed to be caused by deep wastewater injection, and it allows us to investigate the mechanisms behind the earthquake "booms" that have been heard during an earthquake. The array has an aperture of 50 meters and is installed next to the X301 seismograph station run by the Center for Earthquake Research and Information (CERI). This arrangement allows simultaneous recording of the seismic and acoustic signals from an arriving earthquake. Other acoustic and seismic sources that have been recorded include thunder from thunderstorms, gunshots, quarry explosions, and hydraulic fracturing activity at local gas wells. The experiment runs from the last week of June to the last week of September 2011. During the first month and a half, seven local earthquakes were recorded, along with numerous occurrences of the other infrasound sources. Phase arrival times of the recorded waves allow us to estimate the slowness and azimuth of infrasound events. Using these two properties, we can determine whether earthquake "booms" originate at a site from the arrival of the P-wave or whether they occur elsewhere and travel through the atmosphere. Preliminary results show that the infrasound correlates well with the ground motion during an earthquake for frequencies below 15 Hertz.
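Slowness and azimuth estimation from the arrival-time differences across a small array can be sketched as a least-squares plane-wave fit. The function name, geometry, and wave speed below are illustrative assumptions, not the actual array layout or processing code:

```python
import math

def plane_wave_slowness(coords, arrivals):
    """Least-squares horizontal slowness vector (sx, sy) for a plane wave
    crossing a small array. coords are sensor positions (m) and arrivals
    are arrival times (s), both relative to a reference sensor.
    Solves arrivals ~ sx*x + sy*y via the normal equations."""
    sxx = sum(x * x for x, y in coords)
    syy = sum(y * y for x, y in coords)
    sxy = sum(x * y for x, y in coords)
    bx = sum(x * t for (x, y), t in zip(coords, arrivals))
    by = sum(y * t for (x, y), t in zip(coords, arrivals))
    det = sxx * syy - sxy * sxy
    sx = (syy * bx - sxy * by) / det
    sy = (sxx * by - sxy * bx) / det
    speed = 1.0 / math.hypot(sx, sy)                 # apparent velocity (m/s)
    azimuth = math.degrees(math.atan2(sx, sy)) % 360.0  # propagation azimuth
    return sx, sy, speed, azimuth

# Illustrative ~50 m tripartite geometry: two sensors relative to sensor 0,
# with an acoustic plane wave travelling westward at ~340 m/s
coords = [(50.0, 0.0), (-25.0, 43.3)]
arrivals = [-50.0 / 340.0, 25.0 / 340.0]
sx, sy, speed, az = plane_wave_slowness(coords, arrivals)
```

An apparent velocity near the speed of sound points to an atmospheric arrival, while seismic apparent velocities are far higher, which is one way to separate the two "boom" scenarios.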

  7. The Magnitude Distribution of Earthquakes Near Southern California Faults (United States)


    Lindh, 1985; Jackson and Kagan, 2006]. We do not consider time dependence in this study, but focus instead on the magnitude distribution for this fault...90032-7. Bakun, W. H., and A. G. Lindh (1985), The Parkfield, California, earthquake prediction experiment, Science, 229(4714), 619–624, doi:10.1126

  8. Development of volcano monitoring technique using repeating earthquakes observed by the Volcano Observation Network of NIED (United States)

    Kohno, Y.; Ueda, H.; Kimura, H.; Nagai, M.; Miyagi, Y.; Fujita, E.; Kozono, T.; Tanada, T.


    After the Great East Japan Earthquake (M9.0) on March 11, 2011, an M6.4 earthquake occurred beneath Mt. Fuji on March 15, 2011. Although the hypocenter appears to be very close to an assumed magma chamber of Fuji volcano, no anomalies in volcanic activity had been observed as of August 2012. By contrast, after the M6.1 earthquake that occurred in 1998 southwest of Iwate volcano, a change in seismic velocity structure (e.g., Nishimura et al., 2000) was observed, along with active seismicity and crustal deformation. That change affected the waveforms of repeating earthquakes occurring at the plate subduction zone; that is, the waveform similarity was reduced just after the earthquake due to upwelling of magma. In this study, we first analyzed Mt. Fuji, where such changes might be expected following the March 15 earthquake, to develop a tool for monitoring active volcanoes using Volcano Observation Network (V-net) data. We used seismic waveform data of repeating earthquakes recorded after 2007 by short-period seismometers of V-net and of High Sensitivity Seismograph Network (Hi-net) stations near Fuji volcano. The seismic data were recorded at a sampling rate of 100 Hz, and we applied a 4-8 Hz band-pass filter to reduce noise. The repeating earthquakes occur at the plate subduction zone, and their catalog is compiled from Hi-net data (Kimura et al., 2006). We extracted repeating-earthquake groups that include events both before and after the M6.4 earthquake of March 15, 2011. The waveform of the first event of each group was compared with the waveforms of the other events, and cross-correlation coefficients were calculated. We aligned the P-wave arrivals of each event and computed the coefficients and lag times of the later part of the seismic waves in 1.25 s time windows, searching for the best fit that maximizes the cross-correlation coefficient with 0.1 s shifts at each window. As a result, we have so far found three remarkable points. 
[1] Comparing lag times
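
The windowed correlation procedure (P-aligned traces, 1.25 s windows, lag search) can be sketched as follows on synthetic 100 Hz traces. A minimal illustration, not the authors' code: the lag search here is per sample over ±0.1 s, and the signal and all names are invented.

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def best_lag(ref, trc, start, win, max_shift):
    """Slide a win-sample window of trc by up to +/- max_shift samples
    and return the (lag, coefficient) maximizing the normalized
    cross-correlation with the fixed window ref[start:start+win]."""
    w_ref = ref[start:start + win]
    best = (0, -2.0)
    for lag in range(-max_shift, max_shift + 1):
        c = ncc(w_ref, trc[start + lag:start + lag + win])
        if c > best[1]:
            best = (lag, c)
    return best

# Synthetic 100 Hz traces in the 4-8 Hz band: the second event is the
# first delayed by 3 samples (0.03 s), mimicking a travel-time change.
fs = 100.0
ref = [math.sin(2 * math.pi * 6.0 * i / fs) * math.exp(-i / 200.0) for i in range(600)]
trc = [0.0] * 3 + ref[:-3]
lag, coeff = best_lag(ref, trc, start=200, win=125, max_shift=10)   # 1.25 s window
print(lag, round(coeff, 3))
```

A drop in the maximum coefficient between pre- and post-earthquake events of the same repeating group is the monitoring signal of interest; the lag times track path (velocity) changes.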

  9. Long-term predictability of regions and dates of strong earthquakes (United States)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey


    Results on the long-term predictability of strong earthquakes are discussed. It is shown that the dates of earthquakes with M>5.5 can be determined several months in advance of the event. The magnitude and region of an approaching earthquake can be specified within a month before the event. The number of M6+ earthquakes expected to occur during the analyzed year is determined using a special sequence diagram of seismic activity for the century time frame; this date analysis can be performed 15-20 years in advance. The data are verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods with different prediction horizons. The days of potential earthquakes with M5.5+ are determined using astronomical data: earthquakes occur on days of oppositions of Solar System planets (arranged in a single line), and the strongest earthquakes occur when the vector from the Sun to the Solar System barycenter lies in the ecliptic plane. Details of this multivariate astronomical indicator still require further research, but its practical significance is confirmed in practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease in minimum daily temperature, an increase in relative humidity, and an abrupt change in atmospheric pressure (the RAMES method). The difference between the predicted and actual date is no more than one day. This indicator is registered 104 days before the earthquake, so it was called Harmonic 104, or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical fact. Also, 104 days is a quarter of a Chandler period, which gives insight into the correlation between the anomalies of Earth orientation

  10. Investigation of the Three-Dimensional Hinge Moment Characteristics Generated by the ONERA-M6 Wing with an Aileron

    Directory of Open Access Journals (Sweden)

    G. Q. Zhang


    Full Text Available The hinge moment characteristics of the ONERA-M6 wing with an aileron have been investigated numerically for different gaps and deflection angles. The results show that the effects of the deflecting aileron on the wing are notable. Compared with the no-aileron case, the chordwise pressure coefficient distribution for the wing with aileron shows totally different trends. A small gap can force the air to flow through and form an extremely strong spraying flow, which can directly destroy the previously formed leading-edge vortex (LEV). Due to the presence of a positive deflection angle, a trailing-edge vortex (TEV) begins to form at the trailing edge of the aileron. The induced secondary LEV mixes with the developing TEVs and forms stronger TEVs downstream. Compared with the subsonic case, the hinge-moment curve for supersonic flow shows good linearity; the hinge moments are also extremely sensitive to the changing angle of attack, and the slopes of the curves are steeper than in subsonic flow. A larger gap and deflection angle can cause the hinge-moment curve to bend upward at high angles of attack. The corresponding pressure contours and streamlines have also been obtained computationally and analyzed in detail.

  11. Sequence of deep-focus earthquakes beneath the Bonin Islands identified by the NIED nationwide dense seismic networks Hi-net and F-net (United States)

    Takemura, Shunsuke; Saito, Tatsuhiko; Shiomi, Katsuhiko


    An M 6.8 (Mw 6.5) deep-focus earthquake occurred beneath the Bonin Islands at 21:18 (JST) on June 23, 2015. Observed high-frequency (>1 Hz) seismograms across Japan, which contain several sets of P- and S-wave arrivals in the 10 min after the origin time, indicate that moderate-to-large earthquakes occurred in sequence around Japan. Snapshots of the seismic energy propagation illustrate that, after one deep-focus earthquake occurred beneath the Sea of Japan, two further deep-focus earthquakes occurred beneath the Bonin Islands within 4 min of the first (Mw 6.5) event. The United States Geological Survey catalog includes three Bonin deep-focus earthquakes with similar hypocenter locations, but their estimated magnitudes are inconsistent with seismograms from across Japan. The maximum-amplitude patterns of the latter two earthquakes were similar to that of the first Bonin earthquake, which indicates similar locations and mechanisms. Furthermore, based on the ratios of their S-wave amplitudes to that of the first event, the magnitudes of the latter events are estimated as M 6.5 ± 0.02 and M 5.8 ± 0.02, respectively. Three magnitude-6-class earthquakes thus occurred sequentially within 4 min in the Pacific slab at 480 km depth, where complex heterogeneities exist within the slab.
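
The amplitude-ratio magnitude estimate used above follows from the definition of magnitude as the base-10 logarithm of amplitude: one magnitude unit per factor of 10. A minimal sketch with invented numbers (the amplitude ratios below are illustrative, not the measured ones):

```python
import math

def relative_magnitude(m_ref, amp_ref, amp):
    """Magnitude estimated from the ratio of an event's S-wave amplitude
    to that of a reference event of known magnitude, assuming one
    magnitude unit per factor of 10 in amplitude."""
    return m_ref + math.log10(amp / amp_ref)

# Illustrative only: the amplitude ratios are not the station-averaged
# ratios measured in the study.
m_first = 6.5                                       # first Bonin event
m_second = relative_magnitude(m_first, 1.0, 1.0)    # equal amplitude
m_third = relative_magnitude(m_first, 1.0, 0.2)     # five times smaller
print(round(m_second, 1), round(m_third, 1))
```

In practice such ratios would be averaged over many stations, which is what yields the small ±0.02 formal uncertainties quoted in the abstract.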

  12. Solar activity and earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, J.


    Prolonged astronomical observations have shown that the Sun, the nearest star to the Earth, is not calm and serene. On the solar surface there are often storms, flashes of light, and sometimes large eruptions of flame; there are also regularly occurring dark patches (sunspots), which are likewise active. The Sun not only radiates light and heat, but also throws out large quantities of charged particles that scatter through space and reach the Earth, a stream sometimes called the solar wind. These solar activities can induce many physical phenomena on Earth, including magnetic storms, auroras, sudden disruption or attenuation of medium- and short-wave radio, and many atmospheric changes. Some scientists believe they are perhaps also related to the occurrence of earthquakes. This paper explains these solar activities and their possible relationship to earthquakes.

  13. Do Earthquakes Shake Stock Markets? (United States)

    Ferreira, Susana; Karali, Berna


    This paper examines how major earthquakes affected the returns and volatility of aggregate stock market indices in thirty-five financial markets over the last twenty years. Results show that global financial markets are resilient to shocks caused by earthquakes, even when these are domestic. Our analysis reveals that, in a few instances, some macroeconomic variables and earthquake characteristics (gross domestic product per capita, trade openness, bilateral trade flows, earthquake magnitude, a tsunami indicator, distance to the epicenter, and number of fatalities) mediate the impact of earthquakes on stock market returns, resulting in a zero net effect. However, the influence of these variables is market-specific, indicating no systematic pattern across global capital markets. Results also demonstrate that stock market volatility is unaffected by earthquakes, except in Japan.

  14. Earthquake engineering for nuclear facilities

    CERN Document Server

    Kuno, Michiya


    This book is a comprehensive compilation of earthquake- and tsunami-related technologies and knowledge for the design and construction of nuclear facilities. As such, it covers a wide range of fields, including civil engineering, architecture, geotechnical engineering, mechanical engineering, and nuclear engineering, for the development of new technologies providing greater resistance against earthquakes and tsunamis. It is crucial both for students of nuclear energy courses and for young engineers in the nuclear power generation industry to understand the basics and principles of earthquake- and tsunami-resistant design of nuclear facilities. In Part I, "Seismic Design of Nuclear Power Plants", the design of nuclear power plants to withstand earthquakes and tsunamis is explained, focusing on buildings, equipment, and civil engineering structures. In Part II, "Basics of Earthquake Engineering", fundamental knowledge of earthquakes and tsunamis as well as the dynamic response of structures and foundation ground...

  15. Pain after earthquake

    Directory of Open Access Journals (Sweden)

    Angeletti Chiara


    Full Text Available Abstract Introduction On 6 April 2009, at 03:32 local time, an Mw 6.3 earthquake hit the Abruzzi region of central Italy, causing widespread damage in the city of L'Aquila and its nearby villages. The earthquake caused 308 casualties and over 1,500 injuries, displaced more than 25,000 people, and induced significant damage to more than 10,000 buildings in the L'Aquila region. Objectives This observational retrospective study evaluated the prevalence and drug treatment of pain in the five weeks following the L'Aquila earthquake (April 6, 2009). Methods 958 triage documents were analysed for patients' pain severity, pain type, and treatment efficacy. Results A third of patients reported pain (prevalence 34.6%). More than half of the pain patients reported severe pain (58.8%). Analgesic agents were limited to available drugs: anti-inflammatory agents, paracetamol, and weak opioids. Reduction in verbal numerical pain scores within the first 24 hours after treatment was achieved with the medications at hand. Pain prevalence and characterization exhibited a biphasic pattern, with acute pain syndromes owing to trauma occurring in the first 15 days after the earthquake; traumatic pain then decreased and re-surged at around week five, owing to rebuilding efforts. In the second through fourth weeks, reports of pain occurred mainly owing to relapses of chronic conditions. Conclusions This study indicates that pain is prevalent during natural disasters, may exhibit a discernible pattern over the weeks following the event, and current drug treatments in this region may be adequate for emergency situations.

  16. April 25, 2015, Gorkha Earthquake, Nepal and Sequence of Aftershocks: Key Lessons (United States)

    Guragain, R.; Dixit, A. M.; Shrestha, S. N.


    The Gorkha earthquake of M7.8 hit Nepal on April 25, 2015 at 11:56 am local time. The epicenter of this earthquake was Barpak, Gorkha, 80 km northwest of Kathmandu Valley. The mainshock was followed by hundreds of aftershocks, including M6.6 and M6.7 events within 48 hours and an M7.3 event on May 12, 2015. According to the Government of Nepal, a total of 8,686 people lost their lives, 16,808 people were injured, over 500,000 buildings completely collapsed, and more than 250,000 buildings were partially damaged. The National Society for Earthquake Technology - Nepal (NSET), a not-for-profit civil society organization that has focused on earthquake risk reduction in Nepal for the past 21 years, conducted various activities to support people and the government in responding to the earthquake disaster. The activities included: i) assisting people and critical-facility institutions in conducting rapid visual building damage assessment, including training; ii) an information campaign to provide proper information regarding earthquake safety; iii) supporting rescue organizations in search and rescue operations; and iv) providing technical support to common people on repair and retrofit of damaged houses. NSET is also involved in carrying out studies related to earthquake damage, geotechnical problems, and causes of building damage. Additionally, NSET has done post-earthquake detailed damage assessment of buildings throughout the affected areas. Prior to the earthquake, NSET had been working with several institutions to improve the seismic performance of school buildings, private residential houses, and other critical structures. Such activities implemented during the past decade have shown the effectiveness of risk reduction: retrofitted school buildings performed very well during the earthquake, and preparedness activities implemented at community levels helped communities respond immediately and save lives. A higher level of earthquake awareness was achieved, including safe behavior, better understanding of

  17. Liquefaction-induced lateral spreading in Oceano, California, during the 2003 San Simeon Earthquake (United States)

    Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.; Di Alessandro, Carola; Boatwright, John; Tinsley, John C.; Sell, Russell W.; Rosenberg, Lewis I.


    The December 22, 2003, San Simeon, California (M6.5) earthquake caused damage to houses, road surfaces, and underground utilities in Oceano, California. The community of Oceano is approximately 50 miles (80 km) from the earthquake epicenter; damage at this distance from an M6.5 earthquake is unusual. To understand the causes of this damage, the U.S. Geological Survey conducted extensive subsurface exploration and monitoring of aftershocks in the months after the earthquake. The investigation included 37 seismic cone penetration tests, 5 soil borings, and aftershock monitoring from January 28 to March 7, 2004. The USGS investigation identified two earthquake hazards in Oceano that explain the San Simeon earthquake damage: site amplification and liquefaction. Site amplification is a phenomenon observed in many earthquakes where the strength of shaking increases abnormally in areas where the seismic-wave velocity of shallow geologic layers is low; as a result, earthquake shaking is felt more strongly than in surrounding areas without similar geologic conditions. Site amplification in Oceano is indicated by the physical properties of the geologic layers beneath Oceano and was confirmed by monitoring aftershocks. Liquefaction, also commonly observed during earthquakes, is a phenomenon where saturated sands lose their strength during an earthquake and become fluid-like and mobile; as a result, the ground may undergo large permanent displacements that can damage underground utilities and well-built surface structures. The type of displacement of major concern associated with liquefaction is lateral spreading, because it involves displacement of large blocks of ground down gentle slopes or towards stream channels. The USGS investigation indicates that the shallow geologic units beneath Oceano are very susceptible to liquefaction. They include young sand dunes and clean sandy artificial fill that was used to bury and convert marshes into developable lots. 
Most of

  18. LIDAR Helps Identify Source of 1872 Earthquake Near Chelan, Washington (United States)

    Sherrod, B. L.; Blakely, R. J.; Weaver, C. S.


    One of the largest historic earthquakes in the Pacific Northwest occurred on 15 December 1872 (M6.5-7) near the south end of Lake Chelan in north-central Washington State. Lack of recognized surface deformation suggested that the earthquake occurred on a blind, perhaps deep, fault. New LiDAR data show landslides and a ~6 km long, NW-side-up scarp in Spencer Canyon, ~30 km south of Lake Chelan. Two landslides in Spencer Canyon impounded small ponds. An historical account indicated that dead trees were visible in one pond in AD1884. Wood from a snag in the pond yielded a calibrated age of AD1670-1940. Tree ring counts show that the oldest living trees on each landslide are 130 and 128 years old. The larger of the two landslides obliterated the scarp and thus post-dates the last scarp-forming event. Two trenches across the scarp exposed a NW-dipping thrust fault. One trench exposed alluvial fan deposits, Mazama ash, and scarp colluvium cut by a single thrust fault. Three charcoal samples from a colluvium buried during the last fault displacement had calibrated ages between AD1680 and AD1940. The second trench exposed gneiss thrust over colluvium during at least two, and possibly three, fault displacements. The younger of two charcoal samples collected from a colluvium below gneiss had a calibrated age of AD1665-AD1905. For an historical constraint, we assume that the lack of felt reports for large earthquakes in the period between 1872 and today indicates that no large earthquakes capable of rupturing the ground surface occurred in the region after the 1872 earthquake; thus the last displacement on the Spencer Canyon scarp cannot post-date the 1872 earthquake. Modeling of the age data suggests that the last displacement occurred between AD1840 and AD1890. These data, combined with the historical record, indicate that this fault is the source of the 1872 earthquake. Analyses of aeromagnetic data reveal lithologic contacts beneath the scarp that form an ENE

  19. Foreshocks of strong earthquakes (United States)

    Guglielmi, A. V.; Sobisevich, L. E.; Sobisevich, A. L.; Lavrov, I. P.


    The specific enhancement of ultra-low-frequency (ULF) electromagnetic oscillations a few hours prior to strong earthquakes, previously noted in the literature, motivated us to search for distinctive features of the mechanical (foreshock) activity of the Earth's crust in the epicentral zones of future earthquakes. An activation of foreshocks three hours before the main shock is revealed, roughly similar to the enhancement of the specific electromagnetic ULF emission. It is hypothesized that round-the-world seismic echo signals from earthquakes, which form a peak of energy release 2 h 50 min before the main events, act as triggers of the main shocks due to the cumulative action of surface waves converging on the epicenter. It is established that the frequency of fluctuations in foreshock activity decreases at the final stages of preparation of the main shocks, which probably testifies to the so-called mode softening at the approach to the failure point according to catastrophe theory.

  20. Earthquake forecasting: Statistics and Information

    CERN Document Server

    Gertsik, V; Krichevets, A


    We present an axiomatic approach to earthquake forecasting in terms of multi-component random fields on a lattice. This approach provides a method for constructing point estimates and confidence intervals for the conditional probabilities of strong earthquakes given the levels of precursors. It also provides an approach for setting up a multilevel alarm system and for hypothesis testing with binary alarms. We compare different earthquake forecasts in terms of the increase of Shannon information. 'Forecasting' and 'prediction' of earthquakes are equivalent in this approach.
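
One standard way to measure an increase of Shannon information is the log-likelihood gain per target earthquake of a forecast over a reference model. The sketch below assumes independent per-cell Bernoulli occurrence probabilities and toy numbers; it is not the paper's axiomatic construction, and all names are ours.

```python
import math

def bernoulli_loglik(probs, outcomes):
    """Log-likelihood (in bits) of binary outcomes under per-cell
    occurrence probabilities."""
    ll = 0.0
    for p, o in zip(probs, outcomes):
        ll += math.log2(p) if o else math.log2(1.0 - p)
    return ll

def info_gain(probs_model, probs_ref, outcomes):
    """Shannon information gain of a forecast over a reference model,
    in bits per target earthquake."""
    n_eq = sum(outcomes)
    return (bernoulli_loglik(probs_model, outcomes)
            - bernoulli_loglik(probs_ref, outcomes)) / n_eq

# Toy 5-cell grid: the model concentrates probability in the two cells
# where events actually occurred; the reference is spatially uniform.
model = [0.40, 0.40, 0.05, 0.05, 0.05]
ref = [0.19, 0.19, 0.19, 0.19, 0.19]
hits = [1, 1, 0, 0, 0]
g = info_gain(model, ref, hits)
print(round(g, 3))
```

A positive gain means the forecast places more probability where events occur than the reference does; a gain of one bit per event corresponds to halving the forecast's surprise.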

  1. Earthquake forecasting and its verification

    Directory of Open Access Journals (Sweden)

    J. R. Holliday


    Full Text Available No proven method is currently available for the reliable short-term prediction of earthquakes (minutes to months). However, it is possible to make probabilistic hazard assessments for earthquake risk. In this paper we discuss a new approach to earthquake forecasting based on a pattern informatics (PI) method which quantifies temporal variations in seismicity. The output, which is based on an association of small earthquakes with future large earthquakes, is a map of areas in a seismogenic region ('hotspots') where earthquakes are forecast to occur in a future 10-year time span. This approach has been successfully applied to California, to Japan, and on a worldwide basis. Because a sharp decision threshold is used, these forecasts are binary: an earthquake is forecast either to occur or not to occur. The standard approach to the evaluation of a binary forecast is the relative (or receiver) operating characteristic (ROC) diagram, which is a more restrictive test and less subject to bias than maximum likelihood tests. To test our PI method, we made two types of retrospective forecasts for California. The first is the PI method and the second is a relative intensity (RI) forecast based on the hypothesis that future large earthquakes will occur where most smaller earthquakes have occurred in the recent past. While both retrospective forecasts are for the ten-year period 1 January 2000 to 31 December 2009, we performed an interim analysis 5 years into the forecast. The PI method outperforms the RI method under most circumstances.
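
The ROC evaluation of a gridded binary forecast can be sketched as follows. The scores and outcomes below are toy values, not the PI or RI maps themselves, and the function name is ours.

```python
def roc_points(scores, outcomes):
    """Relative (receiver) operating characteristic of a gridded binary
    forecast: sweep the alarm threshold over the forecast scores and
    record (false-alarm rate, hit rate) at each threshold."""
    pos = sum(outcomes)
    neg = len(outcomes) - pos
    pts = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, o in zip(scores, outcomes) if s >= thr and o)
        fp = sum(1 for s, o in zip(scores, outcomes) if s >= thr and not o)
        pts.append((fp / neg, tp / pos))
    return pts

# Toy example: PI-style hotspot scores on 8 cells; outcome 1 marks a
# cell where a large earthquake later occurred.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
outcomes = [1, 1, 0, 1, 0, 0, 0, 0]
pts = roc_points(scores, outcomes)
skillful = all(hit >= fa for fa, hit in pts)   # above the no-skill diagonal
print(skillful, pts[-1])
```

Plotting hit rate against false-alarm rate gives the ROC curve; a skillful forecast stays above the diagonal, and two methods (here, PI versus RI) can be compared by which curve dominates.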

  2. Changing M3G/M6G ratios and pharmacodynamics in a cancer patient during long-term morphine treatment

    DEFF Research Database (Denmark)

    Andersen, Gertrud; Christrup, Lona Louring; Sjøgren, Per;


    A cancer patient receiving long-term oral sustained-release morphine treatment and periodically presenting with unusually high plasma M3G/M6G ratios is described. We found the patient's formation of M6G more unstable and perhaps delayed compared to the formation of M3G. There is no apparent...... explanation for this phenomenon and the high M3G/M6G ratios had no implications for the patient's pain experience or side effects from the morphine treatment....

  3. Evolution of Photospheric Flow and Magnetic Fields Associated with the 2015 June 22 M6.5 Flare (United States)

    Wang, Jiasheng; Liu, Chang; Deng, Na; Wang, Haimin


    The evolution of photospheric flow and magnetic fields before and after flares can provide important information regarding the flare triggering and back reaction processes. However, such studies on the flow field are rare due to the paucity of high-resolution observations covering the entire flaring period. Here we study the structural evolution of penumbra and shear flows associated with the 2015 June 22 M6.5 flare in NOAA AR 12371, using high-resolution imaging observations in the TiO band taken by the 1.6 m New Solar Telescope at Big Bear Solar Observatory, with the aid of the differential affine velocity estimator (DAVE) method for flow tracking. The accompanying photospheric vector magnetic field changes are also analyzed using data from the Helioseismic and Magnetic Imager. As a result, we found, for a penumbral segment in the negative field adjacent to the magnetic polarity inversion line (PIL), an enhancement of penumbral flows (up to ~2 km s^-1) and extension of penumbral fibrils after the first peak of the flare hard X-ray (HXR) emission. We also found a shear flow region at the PIL, which is co-spatial with a precursor brightening kernel and exhibits a gradual increase of shear flow velocity (up to ~0.9 km s^-1) after the flare. The enhancing penumbral and shear flow regions are also accompanied by an increase of horizontal field and a decrease of magnetic inclination angle. These results are discussed in the context of the theory of back reaction of coronal restructuring on the photosphere as a result of flare energy release.

  4. Are segment boundaries of subduction earthquakes structurally controlled? Evidences from the Ecuador-Colombia 20th century earthquake sequence (United States)

    Collot, J.-Y.; Marcaillou, B.; Sage, F.; Gutscher, M.-A.; Charvis, P.; Michaud, F.


    , and across a highly deformed outer ridge, indicating a relatively weak margin/strong plate interface. In addition to separating areas of differing long-term deformation, the MF appears to serve as a location of high stress concentration during the earthquake cycle, as indicated by the clustering of M ≈ 6.0 aftershocks of the 1958 event near the fault. These data show that the MF controls the rupture zone of subduction earthquakes of Mw 7.7 to 8.2 by decoupling the tectonic blocks of the margin from the underlying plate interface.

  5. Earthquakes in Oita triggered by the 2016 M7.3 Kumamoto earthquake (United States)

    Yoshida, Shingo


    During the passage of the seismic waves from the M7.3 Kumamoto, Kyushu, earthquake of April 16, 2016, an M5.7 event [semiofficial value estimated by the Japan Meteorological Agency (JMA)] occurred in the central part of Oita prefecture, approximately 80 km away from the mainshock. Although a number of remotely triggered events have been reported, triggered events of M > 5 are rare. In this paper, we first confirm that this is an M6-class event by re-estimating the magnitude using the strong-motion records of K-NET and KiK-net and crustal deformation data at the Yufuin station observed by the Geospatial Information Authority of Japan. Next, by investigating the aftershocks of 45 mainshocks which occurred over the past 20 years based on the JMA earthquake catalog (JMAEC), we found that the delay time of the 2016 M5.7 event in Oita was the shortest. Therefore, the M5.7 event can be regarded as an exceptional M > 5 event triggered by passing seismic waves, unlike the usual triggered events and aftershocks. Moreover, a search of the JMAEC shows that in the 2016 Oita aftershock area, swarm earthquake activity was low over the past 30 years compared with neighboring areas. We also found that, in the past, probably or possibly triggered events frequently occurred in the 2016 Oita aftershock area. The Oita area readily responds to remote triggering because of the high geothermal activity and young volcanism in the area. The M5.7 Oita event was triggered by passing seismic waves, probably because a large dynamic stress change was generated by the mainshock at short distance and because the Oita area was already loaded to a critical stress state without recent energy release, as suggested by the past low swarm activity.

  6. The Bay Area Earthquake Cycle:A Paleoseismic Perspective (United States)

    Schwartz, D. P.; Seitz, G.; Lienkaemper, J. J.; Dawson, T. E.; Hecker, S.; William, L.; Kelson, K.


    /SH/RC/NC/(SG?) sequence likely occurred subsequent to the penultimate San Andreas event. Although offset data, which reflect M, are limited, observations indicate that the penultimate SA event ruptured essentially the same fault length as in 1906 (Schwartz et al., 1998). In addition, measured point-specific slip (RC, 1.8-2.3 m; SG, 3.5-5 m) and modeled average slip (SH, 1.9 m) for the MREs indicate large-magnitude earthquakes on the other regional faults. The major observation from the new paleoseismic data is that during a maximum interval of 176 years (1600 to 1776), significant seismic moment was released in the SFBR by large (M≥6.7) surface-faulting earthquakes on the SA, RC, SH, NH, NC, and possibly SG faults. This places an upper limit on the duration of San Andreas interaction effects (stress shadow) on the regional fault system. In fact, the interval between the penultimate San Andreas rupture and large earthquakes on other SFBR faults could have been considerably shorter. We are now 95 years out from 1906, and the SFBR Working Group 99 probability time window extends to 2030, an interval of 124 years. The paleoearthquake data allow that within this amount of time following the penultimate San Andreas event, one or more large earthquakes may have occurred on Bay Area faults. Longer paleoearthquake chronologies with more precise event dating in the SFBR and other locales provide the exciting potential for defining regional earthquake cycles and modeling long-term fault interactions.

  7. Hard X-ray and Microwave Simulation of 2015-06-22 M6.6 flare (United States)

    Kuroda, Natsuha; Wang, Haimin; Gary, Dale E.; Fleishman, Gregory D.; Nita, Gelu M.; Chen, Bin; Xu, Yan; Jing, Ju


    It is well known that the time profiles of the hard X-ray (HXR) and microwave (MW) emissions during the impulsive phase of a solar flare are well correlated, which has led to the expectation that these emissions come from a common population of flare-accelerated electrons. However, the energy ranges of the electrons producing the two emissions are believed to be different (below and above several hundred keV for HXR-producing and MW-producing electrons, respectively), and some studies have shown that the indices of their energy spectra may differ as well. To better understand the energy distributions of the electrons producing these emissions, we present realistic forward-fit simulations of the HXR and MW emissions of the 2015 June 22 M6.6 flare using the newly developed, IDL-based platform GX Simulator. We use the 3D magnetic field model extrapolated from magnetogram data from the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO), the images and electron energy distribution parameters deduced from the photon spectrum from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), and the spatially integrated MW spectrum and cross-correlated amplitude data from the Expanded Owens Valley Solar Array (EOVSA) to guide the modeling. We have observed a possible above-the-loop-top HXR source in the 20-25 keV image, well separated from the source seen at 6-12 keV that is typically interpreted as a thermal loop-top source. We therefore simulate the HXR emissions by combining two flux tubes at different heights: the lower loop dominated by thermal electrons and the higher loop dominated by nonthermal electrons. The MW and HXR emissions produced from the forward-fit model are compared with observations to investigate possible differences in the energy spectra of the HXR-producing and MW-producing electrons and what they can tell us about particle acceleration.

  8. Study of white-rot fungus m-6 lignin-degrading enzyme activity in a water-in-oil (W/O) microemulsion

    Institute of Scientific and Technical Information of China (English)

    李娜; 冯贵颖; 刘晓风; 袁月祥; 闫志英; 贺蓉娜; 廖银章


    [Objective] To determine the optimal reaction conditions for the extracellular lignin-degrading enzymes laccase (Lac), lignin peroxidase (Lip) and manganese peroxidase (Mnp), produced by solid-state fermentation of strain m-6, in a W/O cetyltrimethylammonium bromide (CTAB) microemulsion system and in an HAc-NaAc buffer system. [Methods] The activities of the three lignin-degrading enzymes, Lip (with 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) as substrate), Mnp (with MnSO4 as substrate) and Lac (with veratryl alcohol as substrate), were measured in both the W/O CTAB microemulsion and the HAc-NaAc buffer; the effects of temperature, pH and substrate concentration on the enzymatic reactions were examined to establish the optimal conditions and to compare the activities in the two systems. [Results] In the W/O CTAB microemulsion, the optimal conditions for Lip, Mnp and Lac were a temperature of 37 °C with pH 4.5, 4.5 and 3.5, respectively; in the HAc-NaAc buffer they were 37 °C with pH 5.0, 5.0 and 4.0. The optimal substrate concentrations were the same in both systems: 0.053 mmol/L veratryl alcohol, 0.116 mmol/L MnSO4 and 0.492 mmol/L ABTS. In the W/O CTAB microemulsion, Lip and Mnp activities were 81.45% and 36.75% higher, respectively, than in the HAc-NaAc buffer, whereas Lac activity was about 2.9 times lower. [Conclusion] The optimal reaction conditions for the three lignin-degrading enzymes Lac, Lip and Mnp were obtained in both the W/O CTAB microemulsion and the HAc-NaAc buffer systems, laying a foundation for the biodegradation of lignocellulose.

  9. Imbricated slip rate processes during slow slip transients imaged by low-frequency earthquakes (United States)

    Lengliné, O.; Frank, W. B.; Marsan, D.; Ampuero, J.-P.


    Low Frequency Earthquakes (LFEs) often occur in conjunction with transient strain episodes, or Slow Slip Events (SSEs), in subduction zones. Their focal mechanisms and locations, consistent with shear failure on the plate interface, argue for a model in which LFEs are discrete dynamic ruptures on an otherwise slowly slipping interface. SSEs are mostly observed by surface geodetic instruments with limited resolution, and it is likely that only the largest ones are detected. The time synchronization of LFEs and SSEs suggests that we could use the recorded LFEs to constrain the evolution of SSEs, and notably of the geodetically undetected small ones. However, inferring slow slip rate from the temporal evolution of LFE activity is complicated by the strong temporal clustering of LFEs. Here we apply dedicated statistical tools to retrieve the temporal evolution of SSE slip rates from the time history of LFE occurrences in two subduction zones, Mexico and Cascadia, and in the deep portion of the San Andreas fault at Parkfield. We find temporal characteristics of LFEs that are similar across these three different regions. The longer-term episodic slip transients present in these datasets show a slip rate decaying with time after the passage of the SSE front, possibly as t^(-1/4). They are composed of multiple short-term transients with a steeper slip-rate decay, as t^(-α) with α between 1.4 and 2. We also find that the maximum slip rate of SSEs has a continuous distribution. Our results indicate that creeping faults host intermittent deformation at various scales, resulting from the imbricated occurrence of numerous slow slip events of various amplitudes.
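
    The two power-law decay regimes described above can be sketched numerically. The snippet below simply evaluates v(t) = v0 * (t/t0)^(-alpha) for the long-term decay (alpha = 1/4) and a representative short-term decay (alpha = 1.7, mid-range of the reported 1.4-2 interval); the reference rate v0 and time t0 are arbitrary normalizations, not values from the study.

```python
def slip_rate(t, alpha, v0=1.0, t0=1.0):
    """Power-law slip-rate decay: v(t) = v0 * (t / t0) ** (-alpha)."""
    return v0 * (t / t0) ** (-alpha)

# Long-term transient (alpha = 1/4) vs. short-term transient (alpha = 1.7).
for t in (1.0, 10.0, 100.0):
    slow = slip_rate(t, 0.25)
    fast = slip_rate(t, 1.7)
    print(f"t={t:6.1f}  slow-decay v={slow:.4f}  steep-decay v={fast:.6f}")
```

The contrast illustrates why the short-term transients die out quickly relative to the slowly fading envelope of the longer-term transient.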

  10. Earthquakes Threaten Many American Schools (United States)

    Bailey, Nancy E.


    Millions of U.S. children attend schools that are not safe from earthquakes, even though they are in earthquake-prone zones. Several cities and states have worked to identify and repair unsafe buildings, but many others have done little or nothing to fix the problem. The reasons for ignoring the problem include political and financial ones, but…

  11. Make an Earthquake: Ground Shaking! (United States)

    Savasci, Funda


    The main purposes of this activity are to help students explore possible factors affecting the extent of the damage of earthquakes and learn the ways to reduce earthquake damages. In these inquiry-based activities, students have opportunities to develop science process skills and to build an understanding of the relationship among science,…

  13. Anthropogenic triggering of large earthquakes. (United States)

    Mulargia, Francesco; Bizzarri, Andrea


    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, are consistent with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures, so that oil and gas production and storage may trigger destructive earthquakes on active faults at distances of a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to attempt to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay of up to several years.
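
    The delayed, distance-dependent propagation of fluid overpressure along a drained path can be illustrated with a minimal one-dimensional diffusion sketch. The injection overpressure, hydraulic diffusivity, and distance below are hypothetical round numbers for illustration only, not values taken from the study.

```python
import math

def overpressure(r_km, t_yr, p0_mpa, diffusivity_m2s=1.0):
    """Pore-pressure rise at distance r and time t for a sustained step
    injection, using the 1-D diffusion kernel p = p0 * erfc(r / sqrt(4ct))."""
    r = r_km * 1e3
    t = t_yr * 365.25 * 86400.0
    return p0_mpa * math.erfc(r / math.sqrt(4.0 * diffusivity_m2s * t))

# Hypothetical numbers: 10 MPa sustained overpressure at the well,
# diffusivity 1 m^2/s, a fault 20 km away.
for years in (1, 5, 20):
    print(f"after {years:2d} yr: {overpressure(20.0, years, 10.0):.3f} MPa")
```

The growth of the pressure front over years, rather than days, is consistent with the abstract's point that induced events may lag the causative injection by a long time.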

  14. Heavy tails and earthquake probabilities (United States)

    Ellsworth, William L.


    The 21st century has already seen its share of devastating earthquakes, some of which have been labeled as “unexpected,” at least in the eyes of some seismologists and more than a few journalists. A list of seismological surprises could include the 2004 Sumatra-Andaman Islands; 2008 Wenchuan, China; 2009 Haiti; 2011 Christchurch, New Zealand; and 2011 Tohoku, Japan, earthquakes.

  16. Can Satellites Aid Earthquake Predictions?

    Institute of Scientific and Technical Information of China (English)

    John Roach; 李晓辉


    Earthquake prediction is an imprecise science, and to illustrate the point, many experts point to the story of Tangshan, China. On July 28, 1976, a magnitude 7.6 earthquake struck the city of Tangshan, China, without warning. None of the signs of the successful prediction from a year and a half earlier were present. An estimated 250,000 people died.

  17. Earthquake Loss Estimation Uncertainties (United States)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander


    The paper addresses the reliability of loss assessments produced in emergency mode by worldwide systems following strong earthquakes. Timely and correct action just after an event can yield significant benefits in saving lives; in this case, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of the disaster. Uncertainties in the parameters used in the estimation process are numerous and large: knowledge of the physical phenomena and uncertainties in the parameters used to describe them; the overall adequacy of the modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of the shaking; and so on. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws that are poorly known (if not out of the reach of engineers for a large portion of the building stock); while a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. How the population inside the buildings at the time of shaking is affected by the physical damage caused to the buildings is, by far, not precisely known.
The paper analyzes the influence of uncertainties in the determination of strong-event parameters by alert seismological surveys, and of the simulation models used at all stages, from estimating shaking intensity

  18. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.


    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, the assumed fault geometry and velocity structure, and the chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  19. Earthquake forecasting: statistics and information

    Directory of Open Access Journals (Sweden)

    Vladimir Gertsik


    Full Text Available The paper presents a decision rule forming a mathematical basis of the earthquake forecasting problem. We develop an axiomatic approach to earthquake forecasting in terms of multicomponent random fields on a lattice. This approach provides a method for constructing point estimates and confidence intervals for conditional probabilities of strong earthquakes under conditions on the levels of precursors. It also provides an approach for setting up a multilevel alarm system and hypothesis testing for binary alarms. We use a method of comparison of different earthquake forecast algorithms in terms of the increase of Shannon information. ‘Forecasting’ (the calculation of probabilities) and ‘prediction’ (the declaration of alarms) are equivalent in this approach.
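
    The idea of scoring forecasts by their increase of Shannon information can be illustrated with a toy calculation. The probabilities and outcomes below are invented, and the score shown (mean log2 likelihood gain over a constant base-rate forecast, in bits per event) is one common way to operationalize such a comparison, not necessarily the authors' exact statistic.

```python
import math

def shannon_gain(forecast_probs, outcomes, base_rate):
    """Average gain in Shannon information (bits per event) of a
    probabilistic forecast over a constant base-rate forecast."""
    total = 0.0
    for p, hit in zip(forecast_probs, outcomes):
        if hit:
            total += math.log2(p / base_rate)
        else:
            total += math.log2((1.0 - p) / (1.0 - base_rate))
    return total / len(outcomes)

# Hypothetical alarm-style forecast: raised probability inside alarms,
# lowered probability outside; one strong event occurs inside an alarm.
probs    = [0.30, 0.30, 0.05, 0.05, 0.05]
observed = [1,    0,    0,    0,    0]
print(f"gain = {shannon_gain(probs, observed, base_rate=0.10):.3f} bits/event")
```

A positive gain means the forecast, on average, concentrates probability on what actually happened better than the climatological base rate does.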

  20. Are Earthquakes a Critical Phenomenon? (United States)

    Ramos, O.


    Earthquakes, granular avalanches, superconducting vortices, solar flares, and even stock markets are known to evolve through power-law distributed events. For decades, the formalism of equilibrium phase transitions has labeled these phenomena as critical, which implies that they are also unpredictable. This work revisits these ideas and uses earthquakes as the paradigm to demonstrate that slowly driven systems evolving through uncorrelated and power-law distributed avalanches (UPLA) are not necessarily critical systems, and therefore not necessarily unpredictable. By linking the correlation length to the pdf of the distribution, and comparing it with the one obtained at a critical point, a condition of criticality is introduced. Simulations in the classical Olami-Feder-Christensen (OFC) earthquake model confirm the findings, showing that earthquakes are not a critical phenomenon. However, a single catastrophic earthquake may show critical properties and, paradoxically, the emergence of this temporal critical behaviour may eventually carry precursory signs of catastrophic events.
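
    A minimal version of the Olami-Feder-Christensen model mentioned above can be written in a few lines. This sketch uses open boundaries, a uniform drive, and a dissipative transfer coefficient alpha = 0.2; these are standard but arbitrary choices, not necessarily those used in the study.

```python
import random

def ofc_step(grid, n, alpha=0.2, threshold=1.0):
    """One drive-and-relax cycle of the Olami-Feder-Christensen model.
    Returns the avalanche size (number of topplings)."""
    # Uniform drive: raise every site so the current maximum reaches threshold.
    bump = threshold - max(max(row) for row in grid)
    for row in grid:
        for j in range(n):
            row[j] += bump
    # Relax: any site at/above threshold topples, passing alpha * stress
    # to each of its neighbours (open boundaries dissipate the rest).
    size = 0
    unstable = [(i, j) for i in range(n) for j in range(n)
                if grid[i][j] >= threshold]
    while unstable:
        i, j = unstable.pop()
        s = grid[i][j]
        if s < threshold:
            continue  # already relaxed via an earlier topple
        grid[i][j] = 0.0
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n:
                grid[a][b] += alpha * s
                if grid[a][b] >= threshold:
                    unstable.append((a, b))
    return size

random.seed(0)
n = 16
grid = [[random.random() for _ in range(n)] for _ in range(n)]
sizes = [ofc_step(grid, n) for _ in range(2000)]
print("largest avalanche:", max(sizes))
```

Because alpha < 0.25, the model is non-conservative; collecting the avalanche sizes over many cycles yields the power-law-like event statistics that make the OFC model a standard earthquake toy model.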

  1. The CATDAT damaging earthquakes database

    Directory of Open Access Journals (Sweden)

    J. E. Daniell


    Full Text Available The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and expand greatly upon existing global databases, and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes.

    In the authors' view, the lack of consistency and the errors in other earthquake loss databases frequently cited and used in analyses were major shortcomings that needed to be improved upon.

    Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured).

    Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto earthquake ($214 billion USD damage in 2011 HNDECI-adjusted dollars), compared with the 2011 Tohoku (>$300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes, shows the increasing concern for economic loss in urban areas, a trend that should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons.

    This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global

  2. The 2015 Gorkha Nepal Earthquake: Insights from Earthquake Damage Survey

    Directory of Open Access Journals (Sweden)

    Katsuichiro Goda


    Full Text Available The 2015 Gorkha Nepal earthquake caused tremendous damage and loss. To gain valuable lessons from this tragic event, an earthquake damage investigation team was dispatched to Nepal from 1 May 2015 to 7 May 2015. A unique aspect of the earthquake damage investigation is that first-hand earthquake damage data were obtained 6 to 11 days after the mainshock. To gain a deeper understanding of the observed earthquake damage in Nepal, the paper reviews the seismotectonic setting and regional seismicity in Nepal and analyzes available aftershock data and ground motion data. The earthquake damage observations indicate that the majority of the damaged buildings were stone/brick masonry structures with no seismic detailing, whereas most of the RC buildings were undamaged. This indicates that adequate structural design is the key to reducing earthquake risk in Nepal. To share the gathered damage data widely, the collected damage data (geo-tagged photos and observation comments) are organized using Google Earth and the kmz file is made publicly available.

  3. Biological Anomalies around the 2009 L'Aquila Earthquake. (United States)

    Fidani, Cristiano


    The April 6, 2009 L'Aquila earthquake, with a magnitude of M = 6.3, was the strongest seismic event to occur in Italy over the last thirty years. Around the time of the seismic swarm many instruments were operating in Central Italy, even if not dedicated to biological effects associated with stress field variations, including seismicity. Testimonies were collected using a specific questionnaire immediately after the main shock, including data on earthquake lights, gas leaks, human diseases, and irregular animal behavior. The questionnaire was made up of a sequence of arguments based upon past historical earthquake observations, and was compiled over seven months after the main shock. Data on animal behavior before, during and after the main shocks were analyzed in space/time distributions with respect to the epicentral area, evidencing the specific responses of different animals. Several instances of strange animal behavior were observed which could support the hypothesis that they were induced by the physical presence of gas, electric charges and electromagnetic waves in the atmosphere. The aim of this study was to organize the biological observations and thereby allow future work to determine whether these observations were influenced by geophysical parameters.

  4. Subdiffusion of volcanic earthquakes

    CERN Document Server

    Abe, Sumiyoshi


    A comparative study is performed of the volcanic seismicities at Mt. Eyjafjallajokull in Iceland and Mt. Etna in Sicily, Italy, from the viewpoint of the science of complex systems, and the discovery of remarkable similarities between them regarding their exotic spatio-temporal properties is reported. In both volcanic seismicities, treated as point processes, the jump probability distributions of earthquakes are found to obey the exponential law, whereas the waiting-time distributions follow the power law. In particular, a careful analysis is made of the finite-size effects on the waiting-time distributions, and accordingly, the previously reported results for Mt. Etna [S. Abe and N. Suzuki, EPL 110, 59001 (2015)] are reinterpreted. It is shown that spreads of the volcanic earthquakes are subdiffusive at both volcanoes. The aging phenomenon is observed in the "event-time-averaged" mean-squared displacements of the hypocenters. A comment is also made on the presence/absence of long-term memories in the context of t...

  5. A Multi-parametric Climatological Approach to Study the 2016 Amatrice-Norcia (Central Italy) Earthquake Preparatory Phase (United States)

    Piscini, Alessandro; De Santis, Angelo; Marchetti, Dedalo; Cianchini, Gianfranco


    Based on observations prior to earthquakes, recent theoretical considerations suggest that some geophysical quantities reveal abnormal changes that anticipate moderate and strong earthquakes within a defined spatial area (the so-called Dobrovolsky area), according to a lithosphere-atmosphere-ionosphere coupling model. One possible pre-earthquake effect could be the appearance of climatological anomalies in the epicentral region, weeks to months before the major earthquakes. In this paper, the two months preceding the Amatrice-Norcia (Central Italy) earthquake sequence, which started on 24 August 2016 with an M6 earthquake and a few months later produced two other major shocks (an M5.9 on 26 October and then an M6.5 on 30 October), were analyzed in terms of skin temperature, total column water vapour and total column ozone, compared with the trend of the past 37 years. The novelty of the method lies in the way the complete time series is reduced, in which the possible effect of global warming is also properly removed. The simultaneous analysis showed the presence of persistent contemporary anomalies in all of the analysed parameters. To validate the technique, a confutation/confirmation analysis was undertaken in which these parameters were analyzed for the same months of a seismically "calm" year, when significant seismicity was not present. We also extended the analysis to all available years to construct a confusion matrix comparing the occurrence of climatological data anomalies with real seismicity. This work confirms the potential of multiple parameters for anticipating the occurrence of large earthquakes in Central Italy, reinforcing the idea of considering such behaviour an effective tool for an integrated system of future earthquake prediction.
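
    The confusion-matrix validation step described above can be sketched as follows. The years and the anomaly/seismicity sets below are invented placeholders for illustration, not the paper's actual 37-year dataset.

```python
def confusion_matrix(anomaly_years, quake_years, all_years):
    """2x2 confusion matrix comparing years flagged by climatological
    anomalies against years with significant seismicity."""
    tp = fp = fn = tn = 0
    for y in all_years:
        a, q = y in anomaly_years, y in quake_years
        if a and q:
            tp += 1          # anomaly and earthquake
        elif a:
            fp += 1          # anomaly, no earthquake (false alarm)
        elif q:
            fn += 1          # earthquake missed by the anomalies
        else:
            tn += 1          # quiet year, no anomaly
    return tp, fp, fn, tn

# Hypothetical 37-year span with three flagged years and two quake years.
years = range(1980, 2017)
anomalies = {1997, 2009, 2016}
quakes = {1997, 2016}
tp, fp, fn, tn = confusion_matrix(anomalies, quakes, years)
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
```

From such a table one can compute hit rates and false-alarm rates to judge whether the anomalies carry genuine precursory information.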

  6. Determination of Design Basis Earthquake ground motion

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Muneaki [Japan Atomic Power Co., Tokyo (Japan)


    This paper describes the principle of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, earthquake response spectra and simulated seismic waves. In the appendix, furthermore, the seismic safety review of N.P.P.s designed before publication of the Examination Guide is summarized, with the Check Basis Earthquake. (J.P.N.)

  7. Earthquakes: Risk, Monitoring, Notification, and Research (United States)


    ...far away as Bangladesh, Taiwan, Thailand, and Vietnam. Several large aftershocks have occurred since the main seismic event. The May 12 earthquake... motion of tectonic plates; earthquake geology and paleoseismology: studies of the history, effects, and mechanics of earthquakes; earthquake hazards...

  8. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide (United States)

    Pollitz, Fred F.; Stein, Ross S.; Sevilgen, Volkan; Burgmann, Roland


    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days [refs. 1-10], but so far remote aftershocks of moment magnitude M≥5.5 have not been identified [11], with the lone exception of an M=6.9 quake remotely triggered by the surface waves from an M=6.6 quake 4,800 kilometres away [12]. The 2012 east Indian Ocean earthquake, with a moment magnitude of 8.6, is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M≥5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M≥7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10^-7 for at least 100 seconds during dynamic-wave passage. The other M≥8.5 mainshocks of the past decade are thrusts; after these events, the global rate of occurrence of remote M≥5.5 events increased by about one-third of the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure.
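
    Whether a rate increase like the one reported above is statistically meaningful can be checked against a Poisson background. The background mean and the event count in the sketch below are illustrative round numbers, not the study's actual values.

```python
import math

def poisson_tail(k, lam):
    """P(N >= k) for a Poisson-distributed count with mean lam."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

# Hypothetical check: if the background predicts 2 remote M>=5.5 events
# in a 6-day window, how surprising would 10 (a ~fivefold increase) be?
print(f"P(N >= 10 | mean 2) = {poisson_tail(10, 2.0):.2e}")
```

A tail probability this small would argue that the post-mainshock elevation is a real signal rather than a background fluctuation.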

  9. Spatial Verification of Earthquake Simulators Using Self-Consistent Metrics for Off-Fault Seismicity (United States)

    Wilson, J. M.; Yoder, M. R.; Rundle, J. B.


    We address the problem of verifying the self-consistency of earthquake simulators with the data from which their parameters are drawn. Earthquake simulators are a class of computational simulations that attempt to mirror the topological complexity of the earthquake fault system on which the earthquakes occur. In addition, the physics of friction and of elastic interactions between fault elements can be included in these simulations. In general, the parameters are adjusted so that natural earthquake sequences are matched in their scaling properties in an optimal way; these parameter choices are generally based on paleoseismic data extending over many hundreds and thousands of years. However, one of the problems encountered is the verification of the simulations against current earthquake seismicity. It is this problem, for which no currently accepted solution has been proposed, that is the objective of the present paper. Physically based earthquake simulators allow the generation of many thousands of years of simulated seismicity, allowing robust capture of the statistical properties of large, damaging earthquakes that have recurrence times too long for direct observation. Following past simulator and forecast-model verification efforts, we approach the challenges in spatial forecast verification for simulators; namely, that simulator output events are confined to the modeled faults, while observed earthquakes often occur off of known faults. We present two methods for overcoming this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a variation of the epidemic-type aftershock sequence (ETAS) model, which smears the simulator catalog seismicity over the entire test region. To test these methods, a Receiver Operating Characteristic (ROC) plot was produced by comparing the rate maps to observed M>6.0 earthquakes since 1980.
We found that the nearest-neighbor mapping produced poor forecasts, while the modified ETAS
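
    The ROC-style comparison of a gridded rate forecast against observed events can be sketched as follows. The toy rate map and event counts are invented, and a real implementation would work with full spatial grids and alarm thresholds rather than this simplified cell ranking.

```python
def roc_points(cell_rates, cell_hits):
    """ROC-style curve for a gridded rate forecast: cells are alarmed in
    decreasing forecast-rate order; each point is (fraction of cells
    alarmed, fraction of observed events captured)."""
    order = sorted(range(len(cell_rates)), key=lambda i: -cell_rates[i])
    total_hits = sum(cell_hits) or 1   # avoid division by zero
    pts, caught = [], 0
    for rank, i in enumerate(order, start=1):
        caught += cell_hits[i]
        pts.append((rank / len(order), caught / total_hits))
    return pts

# Toy 8-cell rate map with 3 observed M>6 events.
rates = [9.0, 5.0, 4.0, 2.0, 1.0, 0.5, 0.2, 0.1]
hits  = [1,   1,   0,   1,   0,   0,   0,   0]
for frac_area, frac_events in roc_points(rates, hits):
    print(f"area={frac_area:.3f}  events={frac_events:.3f}")
```

A skillful forecast captures most events while alarming only a small fraction of the cells, so its curve rises well above the diagonal of a random forecast.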

  10. The 2012 MW5.6 earthquake in the vicinity of the city of Sofia (United States)

    Simeonova, Stela; Solakov, Dimcho; Aleksandrova, Irena; Dimitrova, Liliya; Popova, Iliana; Raykova, Plamena


    The territory of Bulgaria represents a typical example of a high-seismic-risk area in the eastern part of the Balkan Peninsula. The neotectonic movements on the Balkan Peninsula were controlled by extensional collapse of the Late Alpine orogen, and were influenced by extension behind the Aegean arc and by the complicated vertical and horizontal movements in the Pannonian region. The city of Sofia is the capital of Bulgaria. It is situated in the centre of the Sofia seismic zone, the most populated (more than 1.2 million inhabitants), industrial and cultural region of Bulgaria, which faces considerable earthquake risk. Seismicity in the zone is related mainly to the marginal neotectonic faults of the Sofia graben. The available historical documents prove the occurrence of destructive earthquakes during the 15th-18th centuries in the Sofia zone. In the 19th century the city of Sofia experienced two strong earthquakes: the 1818 earthquake with epicentral intensity I0=8-9 MSK and the 1858 earthquake with I0=IX-X MSK64. The 1858 earthquake caused heavy destruction in the town of Sofia and the appearance of thermal springs in the western part of the town. After a quiescence of about 50 years, a strong event with M=6.5 occurred in 1905 near the western marginal part of the Sofia zone. The strongest event to occur in the vicinity of the city of Sofia during the 20th century was the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The earthquake caused considerable damage in the town and changed the capacity of the thermal mineral springs in Sofia and the surrounding villages. It was felt over an area of 50000 km2 and was followed by aftershocks that lasted more than one year. Almost a century later (95 years), an earthquake of moment magnitude 5.6 hit the Sofia seismic zone on May 22nd, 2012, 25 km south-west of the city of Sofia. This shallow earthquake was widely felt in the region and as far as Greece, FYROM, Serbia and Romania.
No severe injuries have been reported so far, though

  11. Scaling and Stress Release in the Darfield-Christchurch, New Zealand Earthquake Sequence (United States)

    Abercrombie, R. E.; Fry, B.; Doser, D. I.


    The Canterbury earthquake sequence began with the M7.1 Darfield earthquake in 2010 and includes the devastating M6.2 Christchurch earthquake in 2011. The high ground accelerations and damage in Christchurch suggested that the larger earthquakes may have been high-stress-drop events. This is consistent with the hypothesis that faults in low-strain-rate regions with long inter-event times rupture in higher-stress-drop earthquakes. The wide magnitude range of this prolific sequence and the high-quality recording enable us to test this. The spatial migration of the sequence, from Darfield through Christchurch and then offshore, enables us to investigate whether we can resolve any spatial or temporal variation in earthquake stress drop. An independent study of 500 aftershocks (Oth & Kaiser, 2014) found no magnitude dependence and identified spatially varying stress drops. Such patterns can be interpreted more confidently if observed by independent studies using different approaches. We use a direct-wave, empirical Green's function (EGF) approach that includes measurement uncertainties and objective criteria for assessing the quality of each spectral ratio (Abercrombie, 2013). The large number of earthquakes in the sequence enables us to apply the same approach to a wide range of magnitudes (M~2-6) recorded at the same stations, and so minimize the effects of any systematic biases in results. In our preliminary study, we include 2500 earthquakes recorded at a number of strong-motion and broadband stations. We use multiple EGFs for each event, and find 300 earthquakes with well-resolved ratios at 5 or more stations. The stress drops are magnitude independent, and there is broad correlation with the results of Oth & Kaiser. We apply the same approach to a much larger dataset and compare our results to those of Oth & Kaiser, and also to other regions studied using our EGF method.
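
    In EGF studies of this kind, stress drops are commonly derived from corner frequencies via a circular source model. The sketch below uses the standard Brune-type relations with illustrative values (shear-wave speed, corner frequency, magnitude), not results from this study.

```python
def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0, k=0.37):
    """Brune-model stress drop (Pa): delta_sigma = 7*M0 / (16*r^3),
    with source radius r = k * beta / fc (k ~ 0.37 for the Brune model)."""
    r = k * beta_ms / fc_hz
    return 7.0 * m0_nm / (16.0 * r ** 3)

def moment_from_mw(mw):
    """Seismic moment (N m) from moment magnitude (Hanks-Kanamori scale)."""
    return 10.0 ** (1.5 * mw + 9.1)

# Hypothetical M4 aftershock with a 4 Hz corner frequency.
m0 = moment_from_mw(4.0)
print(f"M0 = {m0:.2e} N m, stress drop = {brune_stress_drop(m0, 4.0)/1e6:.1f} MPa")
```

Because the radius enters cubed, modest uncertainties in the measured corner frequency translate into large uncertainties in stress drop, which is why careful spectral-ratio quality criteria matter.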

  12. 2010 Chile Earthquake Aftershock Response (United States)

    Barrientos, Sergio


    The Mw=8.8 earthquake off the coast of Chile on 27 February 2010 is the 5th largest megathrust earthquake ever recorded and provides an unprecedented opportunity to advance our understanding of megathrust earthquakes and associated phenomena. The 2010 Chile earthquake ruptured the Concepcion-Constitucion segment of the Nazca/South America plate boundary, south of the Central Chile region, and triggered a tsunami along the coast. Following the 2010 earthquake, a very energetic aftershock sequence is being observed in an area extending 600 km along strike, from Valparaiso to 150 km south of Concepcion. Within the first three weeks there were over 260 aftershocks with magnitude 5.0 or greater and 18 with magnitude 6.0 or greater (NEIC, USGS). The Concepcion-Constitucion segment lies immediately north of the rupture zone of the great 1960 magnitude 9.5 Chile earthquake, and south of the 1906 and 1985 Valparaiso earthquakes. The last great subduction earthquake in the region dates back to the February 1835 event described by Darwin (1871). Since 1835, part of the region was affected in the north by the Talca earthquake in December 1928, interpreted as a shallow-dipping thrust event, and by the Chillan earthquake (Mw 7.9, January 1939), a slab-pull intermediate-depth earthquake. For the last 30 years, geodetic studies in this area were consistent with fully coupled elastic loading of the subduction interface at depth; this led to identifying the area as a mature seismic gap with potential for an earthquake of magnitude of the order of 8.5, or several earthquakes of lesser magnitude. Less expected was the partial rupturing of the 1985 segment toward the north. Today, the 2010 earthquake raises some disturbing questions: Why and how did the rupture terminate where it did at the northern end? How did the 2010 earthquake load the adjacent segment to the north, and did the 1985 earthquake only partially rupture the plate interface, leaving loaded asperities since

  13. Coulomb stress evolution along Xianshuihe-Xiaojiang Fault System since 1713 and its interaction with Wenchuan earthquake, May 12, 2008 (United States)

    Shan, Bin; Xiong, Xiong; Wang, Rongjiang; Zheng, Yong; Yang, Song


    The curved left-lateral strike-slip Xianshuihe-Xiaojiang Fault System (XXFS) in southwestern China extends at least 1400 km along the eastern margin of the Tibetan Plateau. Field investigations confirm that the XXFS is one of the longest and most seismically active fault systems in China. The strain released by slip on the XXFS is related to the convergence between the Indian and Eurasian plates. The entire fault system has experienced at least 35 earthquakes of M>6 in the past 300 years, and almost all segments of the system have been the locus of major historical earthquakes. Since the XXFS region is heavily populated (over 50 million people), understanding the migration of large earthquakes in space and time is of crucial importance for seismic hazard assessment in this region. We analyze a sequence of 25 earthquakes (M⩾6.5) that occurred along the XXFS since 1713, and investigate their influence on the 2008 Mw7.9 Wenchuan earthquake that occurred on the adjacent Longmenshan fault. In our analysis, the relevant parameters of the Earth's crust are constrained by seismic studies. The locations and geometries of the earthquake faults, as well as the rupture distributions, are taken from field observations and seismological studies. Results from the Coulomb failure stress modeling indicate significant interactions among the earthquakes. After the 1713 earthquake, 19 out of 24 earthquakes occurred in the positive-stress zones of the preceding earthquakes; the other 5 earthquakes were located in areas without significant stress changes induced by the preceding events. In particular, we identify 4 visible earthquake gaps with increasing seismic hazard along the XXFS, consistent with the findings of paleoseismological studies. The seismic activity and tectonic motion on the XXFS reduced the Coulomb stress accumulation at the hypocenter of the 2008 Mw7.9 Wenchuan earthquake, implying that the Wenchuan earthquake might not have been triggered directly by the seismic activities on
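
    The Coulomb failure stress change underlying this kind of analysis reduces, in its simplest form, to a one-line formula. The stress values and effective friction coefficient below are illustrative only; real studies resolve the full stress tensor onto receiver-fault geometries.

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, friction=0.4):
    """Change in Coulomb failure stress on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with d_sigma_n positive for
    unclamping and mu' an effective friction coefficient."""
    return d_shear_mpa + friction * d_normal_mpa

# Sign convention check: positive dCFS moves a fault toward failure,
# negative dCFS corresponds to a stress shadow.
print(coulomb_stress_change(0.05, 0.02))    # loaded receiver fault
print(coulomb_stress_change(-0.03, -0.05))  # receiver in a stress shadow
```

The pattern in the abstract (19 of 24 later events in positive-stress zones) corresponds to most subsequent hypocenters falling where this quantity, computed from the preceding ruptures, is positive.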

  14. The physics of an earthquake (United States)

    McCloskey, John


    The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges of how we might use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region. In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future earthquake are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.
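    The Monte Carlo idea described here (sampling many possible, individually unpredictable future events and aggregating their modeled consequences) can be caricatured in a few lines. This is a toy sketch, not the simulations from the article: the truncated Gutenberg-Richter magnitude distribution, the wave-height scaling, and every numerical value are illustrative assumptions.

```python
import math
import random

random.seed(42)

def sample_magnitude(m_min=7.0, m_max=9.5, b=1.0):
    """Draw a magnitude from a truncated Gutenberg-Richter distribution
    by inverse-CDF sampling (b-value and bounds are illustrative)."""
    u = random.random()
    c = 1 - 10 ** (-b * (m_max - m_min))  # normalization of the truncated CDF
    return m_min - (1.0 / b) * math.log10(1 - u * c)

def toy_wave_height(m):
    """Purely illustrative proxy: coastal wave height grows with magnitude."""
    return 0.5 * 10 ** (0.5 * (m - 7.0))

# Aggregate many individually unpredictable scenarios into one hazard number:
runs = 20000
heights = [toy_wave_height(sample_magnitude()) for _ in range(runs)]
p_exceed = sum(h > 5.0 for h in heights) / runs
print(f"P(height > 5 m | an event occurs) ~ {p_exceed:.3f}")
```

    The point of the ensemble is exactly this shift: no single scenario is predictable, but the distribution of consequences across scenarios is.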

  15. Fracking, wastewater disposal, and earthquakes (United States)

    McGarr, Arthur


    In the modern oil and gas industry, fracking of low-permeability reservoirs has resulted in a considerable increase in the production of oil and natural gas, but these fluid-injection activities also can induce earthquakes. Earthquakes induced by fracking are an inevitable consequence of the injection of fluid at high pressure, where the intent is to enhance permeability by creating a system of cracks and fissures that allow hydrocarbons to flow to the borehole. The micro-earthquakes induced during these highly controlled procedures are generally much too small to be felt at the surface; indeed, the creation or reactivation of a large fault would be contrary to the goal of enhancing permeability evenly throughout the formation. Accordingly, the few case histories in which fracking has resulted in felt earthquakes have been due to unintended fault reactivation. Of greater consequence for inducing earthquakes, modern techniques for producing hydrocarbons, including fracking, have resulted in considerable quantities of coproduced wastewater, primarily formation brines. This wastewater is commonly disposed of by injection into deep aquifers having high permeability and porosity. As reported in many case histories, pore pressure increases due to wastewater injection were channeled from the target aquifers into fault zones that were, in effect, lubricated, resulting in earthquake slip. These fault zones are often located in the brittle crystalline rocks of the basement. Magnitudes of earthquakes induced by wastewater disposal often exceed 4, the threshold for structural damage. Even though only a small fraction of disposal wells induce earthquakes large enough to be of concern to the public, there are so many of these wells that this source of seismicity contributes significantly to the seismic hazard in the United States, especially east of the Rocky Mountains, where building construction is generally not designed to resist shaking from large earthquakes.

  16. Ionospheric phenomena before strong earthquakes

    Directory of Open Access Journals (Sweden)

    A. S. Silina


    A statistical analysis of several ionospheric parameters before earthquakes with magnitude M > 5.5 located less than 500 km from an ionospheric vertical sounding station is performed. Ionospheric effects preceding "deep" (depth h > 33 km) and "crust" (h < 33 km) earthquakes were analysed separately. Data of nighttime measurements of the critical frequencies foF2 and foEs, the frequency fbEs and Es-spread at the middle-latitude station Dushanbe were used. The frequencies foF2 and fbEs are proportional to the square root of the ionization density at heights of 300 km and 100 km, respectively. It is shown that two days before the earthquakes the values of foF2 averaged over the morning hours (00:00 LT–06:00 LT) and of fbEs averaged over the nighttime hours (18:00 LT–06:00 LT) decrease; the effect is stronger for the "deep" earthquakes. Analysing the coefficient of semitransparency, which characterizes the degree of small-scale turbulence, it was shown that this value increases 1–4 days before "crust" earthquakes, and it does not change before "deep" earthquakes. Studying Es-spread, which manifests itself as a diffuse Es trace on ionograms and characterizes the degree of large-scale turbulence, it was found that the number of Es-spread observations increases 1–3 days before the earthquakes; for "deep" earthquakes the effect is more intensive. Thus it may be concluded that different mechanisms of energy transfer from the region of earthquake preparation to the ionosphere operate for "deep" and "crust" events.

  17. Interactions between the recombinant human CREG protein and cathepsins and M6P/IGFIIR

    Institute of Scientific and Technical Information of China (English)

    孙鸣宇; 闫承慧; 田孝祥; 李洋; 韩雅玲


    BACKGROUND: It has been found that cellular repressor of E1A-stimulated genes (CREG) is a lysosomal protein that binds directly to the mannose-6-phosphate (M6P)/insulin-like growth factor II receptor (IGFIIR) and depends on the interaction with M6P receptors for efficient delivery to lysosomes. OBJECTIVE: To study the interactions between the exogenous CREG protein and cathepsins and M6P/IGFIIR, and to confirm the effect of the CREG protein on the expression and distribution of M6P/IGFIIR. METHODS: Double-stained immunofluorescence and coimmunoprecipitation were applied to observe the interactions between the exogenous CREG protein and cathepsin B, cathepsin L and M6P/IGFIIR. Using gain-of-function and loss-of-function approaches, the effect of CREG on the expression and distribution of M6P/IGFIIR was studied by western blot assay and immunofluorescence staining. RESULTS AND CONCLUSION: Double-stained immunofluorescence and coimmunoprecipitation analyses confirmed the direct interactions between the exogenous CREG protein and cathepsin B, cathepsin L and M6P/IGFIIR. Using the gain-of-function and loss-of-function approaches, it was verified that CREG plays a critical role not in the expression but in the distribution of M6P/IGFIIR. These findings provide evidence that the exogenous CREG protein is located in lysosomes, interacts with cathepsins and M6P/IGFIIR, and plays a critical role in the distribution of M6P/IGFIIR.

  18. Study of the 11th July, 1915, Portuguese offshore earthquake, in the Atlantic from contemporary seismograms and bulletins (United States)

    Batllo-Ortiz, J.; Custodio, S.; Macia, R.; Teves-Costa, P.


    The seismicity rate at the contact of the Nubian and Eurasian plates along the Azores-Gibraltar region can be considered moderate. Nevertheless, large earthquakes do occur, as is well known from historical records. The sensitivity of seismographic networks to earthquakes of oceanic origin was extremely low until recent times. Oceanic M5 earthquakes were not consistently recorded until the second third of the 20th century, precluding proper knowledge of seismicity rates and other parameters of interest for earthquake hazard. Nevertheless, information for some events does exist, most of which remains to be properly studied. In this paper we analyze historical data for the 11 July 1915 earthquake, which occurred offshore and was felt over the whole of mainland Portugal. This event is one of the largest to have occurred during the instrumental period in the region of diffuse seismicity around the Gorringe Bank. However, it has been little studied, probably because it did not cause serious damage. The 11 July 1915 earthquake is of great interest due to its size, estimated to be on the order of M6, and its unique location with respect to the regions of large earthquakes in the Atlantic. In this paper, we present source parameters for this earthquake based on the analysis of the available contemporary seismograms and related documents. After thorough collection and selection, 23 seismograms obtained at 11 different European stations were digitized and processed. The event was relocated and its magnitude recalculated. Its focal mechanism has also been studied through waveform modeling and first-motion polarities. We present the results of this analysis, compare the source of the 1915 earthquake with that of present-day earthquakes in the same region, and interpret the new results in light of the regional seismicity and seismotectonics.

  19. The threat of silent earthquakes (United States)

    Cervelli, Peter


    Not all earthquakes shake the ground. The so-called silent types are forcing scientists to rethink their understanding of the way quake-prone faults behave. In rare instances, silent earthquakes that occur along the flanks of seaside volcanoes may cascade into monstrous landslides that crash into the sea and trigger towering tsunamis. Silent earthquakes that take place within fault zones created by one tectonic plate diving under another may increase the chance of ground-shaking shocks. In other locations, however, silent slip may decrease the likelihood of destructive quakes by releasing stress along faults that might otherwise be ready to snap.

  20. Earthquakes: Thinking about the unpredictable (United States)

    Geller, Robert J.

    The possibility of predicting earthquakes has been investigated by professionals and amateurs, seismologists and nonseismologists, for over 100 years. More than once, hopes of a workable earthquake prediction scheme have been raised only to be dashed. Such schemes—on some occasions accompanied by claims of an established track record—continue to be proposed, not only by Earth scientists, but also by workers in other fields. The assessment of these claims is not just a scientific or technical question. Public administrators and policy makers must make decisions regarding appropriate action in response to claims that some scheme has a predictive capability, or to specific predictions of imminent earthquakes.

  1. Fractal Models of Earthquake Dynamics

    CERN Document Server

    Bhattacharya, Pathikrit; Kamal; Samanta, Debashis


    Our understanding of earthquakes is based on the theory of plate tectonics. Earthquake dynamics is the study of the interactions of plates (solid disjoint parts of the lithosphere) that produce seismic activity. Over roughly the last fifty years, many models have been proposed that try to simulate seismic activity by mimicking plate-plate interactions. The validity of a given model rests on the compliance of the synthetic seismic activity it produces with the well-known empirical laws that describe the statistical features of observed seismic activity. Here we present a review of two such models of earthquake dynamics, with the main focus on a relatively new model, the Two Fractal Overlap Model.
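    The Two Fractal Overlap Model idealizes the two fault surfaces as fractals, commonly two Cantor sets with one sliding over the other; the overlap at each shift plays the role of the energy released in an "event". A minimal sketch under that idealization (the generation number and the cyclic-shift simplification are assumptions made here for illustration):

```python
def cantor_mask(generation):
    """Middle-third Cantor set at a given generation, encoded as a 0/1
    list of length 3**generation (1 = segment kept)."""
    mask = [1]
    for _ in range(generation):
        # each kept segment splits into kept / removed / kept thirds
        mask = [bit for m in mask for bit in (m, 0, m)]
    return mask

def overlap_series(generation):
    """Overlap of the Cantor set with a cyclically shifted copy of
    itself, for every shift -- the model's proxy for per-event energy."""
    mask = cantor_mask(generation)
    n = len(mask)
    return [sum(mask[i] * mask[(i + s) % n] for i in range(n))
            for s in range(n)]

series = overlap_series(4)          # 3**4 = 81 shifts
print(max(series), len(series))     # zero shift gives full overlap 2**4
```

    The statistics of this overlap series (how often each overlap value occurs) is what the model compares against the Gutenberg-Richter frequency-magnitude law.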


    Directory of Open Access Journals (Sweden)

    Savaş TOPAL


    Earthquake-induced deformation structures, called seismites, may be helpful for clarifying the paleoseismic history of a location and for estimating the magnitudes of potential future earthquakes. In this paper, seismites are examined according to the types formed in deep and shallow lake sediments. Seismites are observed in the form of sand dikes, intruded and fractured gravels, and pillow structures in shallow lakes, and of pseudonodules, mushroom-like silts protruding into laminites, mixed layers, disturbed varved lamination, and loop bedding in deep lake sediments. Drawing on previous studies, these earthquake-induced deformation structures were ordered according to their mode of formation and the corresponding earthquake magnitudes. In this ordering, the lowest earthquake magnitudes are recorded by loop bedding and the highest by intruded and fractured gravels in lacustrine deposits.

  3. Microearthquake detection at 2012 M4.9 Qiaojia earthquake source area , the north of the Xiaojiang Fault in Yunnan, China (United States)

    Li, Y.; Yang, H.; Zhou, S.; Yan, C.


    We perform a comprehensive analysis in the Yunnan area based on continuous seismic data from 38 stations of the Qiaojia Network on the Xiaojiang Fault, from March 2012 to February 2015. We use an effective method, Match and Locate (M&L; Zhang & Wen, 2015), to detect and locate microearthquakes. We first study dynamic triggering around the Xiaojiang Fault in Yunnan. Triggered earthquakes are identified as two impulsive seismic arrivals in 2-Hz high-pass-filtered velocity seismograms during the passage of surface waves of large teleseismic earthquakes. Inspecting the spectrograms, we find only two earthquakes that may have triggered regional events: the Mw7.4 Mexico earthquake of 03/20/2012 and the Mw7.3 El Salvador earthquake of 10/14/2014. To test whether these events were truly triggered rather than coincidental, we use M&L to search for repeating earthquakes. The correlation coefficients indicate that the activity during the surface waves of the El Salvador earthquake was coincidental, while whether the 2012 Mexico earthquake triggered local events remains under discussion. We then visually inspect the 2-8 Hz band-pass-filtered velocity envelopes over these years to search for non-volcanic tremor; we have not yet detected any signals resembling non-volcanic tremor. In the following months, we will study the 2012 M4.9 Qiaojia earthquake, which occurred only 30 km west of the epicenter of the 2014 M6.5 Ludian earthquake. We use the M&L technique to detect and relocate microearthquakes that occurred 2 days before and 3 days after the mainshock. In this way, we can obtain several times more events than listed in the catalogs provided by NEIC and reduce the magnitude of completeness Mc. We will also detect microearthquakes along the Xiaojiang Fault using template earthquakes listed in the catalogs to learn more about the fault geometry and other properties of the Xiaojiang Fault. 
Analyzing seismicity near the Xiaojiang Fault systematically may cast insight on our understanding of the features of
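    The heart of matched-filter detection as used in Match and Locate is sliding a template waveform through continuous data, computing a normalized cross-correlation (CC) at each lag, and declaring detections where the CC exceeds some multiple of the CC series' median absolute deviation. A single-channel sketch on synthetic data (M&L additionally stacks CC traces over stations with moveout shifts to locate events; the 9x MAD threshold and all signal parameters here are illustrative assumptions):

```python
import numpy as np

def normalized_cc(template, data):
    """Normalized cross-correlation of a short template against
    continuous data; one CC value per candidate start sample."""
    nt = len(template)
    t = (template - template.mean()) / (template.std() * nt)
    cc = np.empty(len(data) - nt + 1)
    for i in range(len(cc)):
        w = data[i:i + nt]
        s = w.std()
        cc[i] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s)
    return cc

rng = np.random.default_rng(0)
template = rng.standard_normal(100)         # stand-in for a catalog event
data = 0.1 * rng.standard_normal(5000)      # continuous noise record
data[3000:3100] += template                 # bury a repeat of the template

cc = normalized_cc(template, data)
mad = np.median(np.abs(cc - np.median(cc)))
detections = np.flatnonzero(cc > 9 * mad)   # threshold: ~9x MAD (assumed)
print(detections)                           # cluster at/near sample 3000
```

    Because the template correlates with a buried repeat even when that repeat is far below the visual noise level, this is how catalogs gain "several times more events" than routine picking.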

  4. Twitter earthquake detection: Earthquake monitoring in a social world (United States)

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.


    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
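    A short-term-average / long-term-average detector of the kind described can be sketched on synthetic per-minute tweet counts as follows (the window lengths, trigger threshold, and Poisson background rate are illustrative assumptions, not the USGS settings):

```python
import numpy as np

def sta_lta(x, n_sta=5, n_lta=60):
    """Short-term-average / long-term-average ratio of an activity
    series (tweets per minute here); both windows trail sample i."""
    x = np.asarray(x, dtype=float)
    c = np.concatenate(([0.0], np.cumsum(x)))   # prefix sums for fast means
    r = np.zeros(len(x))
    for i in range(n_lta, len(x)):
        sta = (c[i + 1] - c[i + 1 - n_sta]) / n_sta
        lta = (c[i + 1] - c[i + 1 - n_lta]) / n_lta
        r[i] = sta / lta if lta > 0 else 0.0
    return r

# Synthetic counts: quiet Poisson background, then a burst after a felt event.
rng = np.random.default_rng(1)
counts = rng.poisson(2.0, 240).astype(float)    # 4 h of per-minute counts
counts[180:186] += 40                           # burst of "earthquake" tweets

ratio = sta_lta(counts)
trigger = int(np.argmax(ratio >= 5.0))          # first sample above threshold
print(trigger)                                  # fires shortly after minute 180
```

    The LTA adapts to slowly varying background chatter while the STA reacts to the burst, which is what keeps the false-trigger count low.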

  5. Discussion on Earthquake Forecasting and Early Warning

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiaodong; Jiang Haikun; Li Mingxiao


    Through analysis of the natural and social attributes of earthquake forecasting, the relationship between the natural and social attributes of earthquake forecasting (early warning) is discussed. In terms of its natural attributes, earthquake forecasting only attempts to forecast the magnitude, location and occurrence time of a future earthquake based on the analysis of observational data and relevant theories, taking into consideration the present understanding of seismogenesis and earthquake generation. It need not consider the consequences an earthquake forecast involves, and its purpose is to test the level of scientific understanding of earthquakes. In respect of the social aspect of earthquake forecasting, people also focus on the consequences that the forecasting involves, in addition to its natural aspect, such as the uncertainty of earthquake prediction itself, the impact of earthquake prediction, and the earthquake-resistant capability of structures (buildings), lifeline works, etc. In a word, it highlights the risk of earthquake forecasting and tries to mitigate the earthquake hazard as much as possible. In this paper, the authors also discuss the scientific and social challenges faced in earthquake prediction and give a preliminary analysis of the meaning and content of earthquake early warning.

  6. Earthquakes in cities revisited

    CERN Document Server

    Wirgin, Armand


    During the last twenty years, a number of publications of a theoretical-numerical nature have appeared which come to the apparently reassuring conclusion that seismic motion on the ground in cities is smaller than this motion would be in the absence of the buildings (but for the same underground structure and seismic load). Apart from the fact that this finding says nothing about the motion within the buildings, it must be confronted with the overwhelming empirical evidence (e.g., the earthquakes in Sendai (2011), Kathmandu (2015), and Tainan City (2016)) that shaking within the buildings of a city is often large enough to damage or even destroy these structures. I show, through several examples, that theory can be reconciled with the empirical evidence, and suggest that the crucial subject of seismic response in cities is in need of more thorough research.

  7. Earthquake Breccias (Invited) (United States)

    Rowe, C. D.; Melosh, B. L.; Lamothe, K.; Schnitzer, V.; Bate, C.


    Fault breccias are one of the fundamental classes of fault rocks and are observed in many exhumed faults. Some breccias have long been assumed to form co-seismically, but textural or mechanistic evidence for the association with earthquakes has never been documented. For example, at dilational jogs in brittle faults, it is common to find small bodies of chaotic breccia in lenticular or rhombohedral voids bounded by main slip surfaces and linking segments. Sibson interpreted these 'implosion breccias' as evidence of wall rock fracturing during sudden unloading when the dilational jogs open during earthquake slip (Sibson 1985, PAGEOPH v. 124, n. 1, 159-175). However, the role of dynamic fracturing in forming these breccias has not been tested. Moreover, the criteria for identifying implosion breccia have not been defined - do all breccias in dilational jogs or step-overs represent earthquake slip? We are building a database of breccia and microbreccia textures to develop a strictly observational set of criteria for distinction of breccia texture classes. Here, we present observations from the right-lateral Pofadder Shear Zone, South Africa, and use our textural criteria to identify the relative roles of dynamic and quasi-static fracture patterns, comminution/grinding and attrition, hydrothermal alteration, dissolution, and cementation. Nearly 100% exposure in the hyper-arid region south of the Orange River allowed very detailed mapping of frictional fault traces associated with rupture events, containing one or more right-steps in each rupture trace. Fracture patterns characteristic of on- and off-fault damage associated with propagation of dynamic rupture are observed along straight segments of the faults. The wall rock fractures are regularly spaced, begin at the fault trace and propagate at a high angle to the fault, and locally branch into subsidiary fractures before terminating a few cm away. This pattern of fractures has been previously linked to dynamic

  8. Sichuan Earthquake in China (United States)


    The Sichuan earthquake in China occurred on May 12, 2008, along faults within the mountains, but near and almost parallel the mountain front, northwest of the city of Chengdu. This major quake caused immediate and severe damage to many villages and cities in the area. Aftershocks pose a continuing danger, but another continuing hazard is the widespread occurrence of landslides that have formed new natural dams and consequently new lakes. These lakes are submerging roads and flooding previously developed lands. But an even greater concern is the possible rapid release of water as the lakes eventually overflow the new dams. The dams are generally composed of disintegrated rock debris that may easily erode, leading to greater release of water, which may then cause faster erosion and an even greater release of water. This possible 'positive feedback' between increasing erosion and increasing water release could result in catastrophic debris flows and/or flooding. The danger is well known to the Chinese earthquake response teams, which have been building spillways over some of the new natural dams. This ASTER image, acquired on June 1, 2008, shows two of the new large landslide dams and lakes upstream from the town of Chi-Kua-Kan at 32o12'N latitude and 104o50'E longitude. Vegetation is green, water is blue, and soil is grayish brown in this enhanced color view. New landslides appear bright off-white. The northern (top) lake is upstream from the southern lake. Close inspection shows a series of much smaller lakes in an elongated 'S' pattern along the original stream path. Note especially the large landslides that created the dams. Some other landslides in this area, such as the large one in the northeast corner of the image, occur only on the mountain slopes, so do not block streams, and do not form lakes.


  10. Extreme value statistics and thermodynamics of earthquakes. Large earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics


    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Frechet distribution, while the sample distributions of aftershocks are fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into non-scaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Frechet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high-energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Frechet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
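    Fitting sample exceedances to a Pareto tail, the starting point of the analysis above, is often done with the Hill estimator applied to the largest order statistics. A sketch on synthetic Pareto-tailed "energies" (the tail index, sample size, and choice of k are illustrative assumptions, not values from the paper):

```python
import math
import random

random.seed(7)

def hill_estimator(sample, k):
    """Hill estimate of the Pareto tail index alpha from the k largest
    order statistics (a standard tool for power-law tails)."""
    xs = sorted(sample, reverse=True)
    logs = [math.log(x) for x in xs[:k + 1]]
    return k / sum(logs[i] - logs[k] for i in range(k))

# Synthetic 'energies' with an exact Pareto law, alpha = 1.5:
# if U ~ Uniform(0,1), then U**(-1/alpha) is Pareto(alpha) on [1, inf).
alpha_true = 1.5
energies = [random.random() ** (-1.0 / alpha_true) for _ in range(20000)]

est = hill_estimator(energies, 1000)
print(f"Hill tail-index estimate: {est:.2f}")   # close to the true 1.5
```

    In real catalogs the choice of k is delicate, since too-small thresholds mix in the non-power-law bulk that this abstract argues replaces the self-similar tail.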

  11. Video data files to accompany USGS OFR 2015-1142--Assessment of existing and potential landslide hazards resulting from the April 25, 2015 Gorkha, Nepal earthquake sequence (United States)

    U.S. Geological Survey, Department of the Interior — On April 25, 2015, a large (M7.8) earthquake shook much of central Nepal and was followed by a series of M>6 aftershocks, including an M7.3 event on May 12, 2015....

  12. Comparison between different earthquake magnitudes determined by China Seismograph Network

    Institute of Scientific and Technical Information of China (English)

    LIU Rui-feng; CHEN Yun-tai; REN Xiao; XU Zhi-guo; SUN Li; YANG Hui; LIANG Jian-hong; REN Ke-xin


    By linear regression and orthogonal regression methods, comparisons are made between different magnitudes (local magnitude ML, surface wave magnitudes MS and MS7, long-period body wave magnitude mB and short-period body wave magnitude mb) determined by the Institute of Geophysics, China Earthquake Administration, on the basis of observation data collected by the China Seismograph Network between 1983 and 2004. Empirical relations between the different magnitudes have been obtained. The results show that: ① As different magnitude scales reflect the energy radiated by seismic waves within different period bands, earthquake magnitudes can be described more objectively by using different scales for earthquakes of different sizes. When the epicentral distance is less than 1 000 km, local magnitude ML can be a preferable scale; in case M<6.0, mB>MS, i.e., MS underestimates magnitudes of such events, therefore mB can be a better choice; in case M>6.0, MS>mB>mb, i.e., both mB and mb underestimate the magnitudes, so MS is a preferable scale for determining magnitudes of such events (6.0<MS<8.5); when MS>8.5, a saturation phenomenon appears in MS, which cannot give an accurate reflection of the magnitudes of such large events; ② In China, when the epicentral distance is less than 1 000 km, there is almost no difference between ML and MS, and thus there is no need to convert between the two magnitudes in practice; ③ Although MS and MS7 are both surface wave magnitudes, MS is in general greater than MS7 by 0.2~0.3 magnitude units, because different instruments and calculation formulae are used; ④ mB is almost equal to mb for earthquakes around mB4.0, but mB is larger than mb for those of mB≥4.5, because the periods of the seismic waves used for measuring mB and mb are different, though the calculation formulae are the same.
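    Orthogonal regression differs from ordinary least squares in minimizing perpendicular rather than vertical distances, which is appropriate when both magnitude scales carry comparable measurement error. A self-contained sketch on synthetic magnitude pairs (the offset, slope, and noise levels are invented for illustration, and equal error variances on the two scales are assumed):

```python
import numpy as np

def orthogonal_fit(x, y):
    """Orthogonal (total least squares) regression y ~ a + b*x,
    assuming equal error variance on x and y; b is the positive
    root of the TLS quadratic (requires correlated data, sxy != 0)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.mean((x - xm) ** 2)
    syy = np.mean((y - ym) ** 2)
    sxy = np.mean((x - xm) * (y - ym))
    b = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return ym - b * xm, b

# Two noisy readings of the same latent magnitude, related by an
# illustrative offset/slope (not an empirical relation from the paper):
rng = np.random.default_rng(3)
m_true = rng.uniform(4.0, 8.0, 500)
ml = m_true + rng.normal(0.0, 0.15, 500)
ms = 0.2 + 0.97 * m_true + rng.normal(0.0, 0.15, 500)

a, b = orthogonal_fit(ml, ms)
print(f"MS ~ {a:.2f} + {b:.2f} * ML")
```

    Ordinary least squares of `ms` on `ml` would bias the slope toward zero here, because the noise on `ml` is ignored; the orthogonal fit recovers the underlying relation.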

  13. Behavior of Columns During Earthquakes (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The behavior of columns during earthquakes is very important since column failures may lead to additional structural failures and result in total building collapses....

  14. Medical complications associated with earthquakes. (United States)

    Bartels, Susan A; VanRooyen, Michael J


    Major earthquakes are some of the most devastating natural disasters. The epidemiology of earthquake-related injuries and mortality is unique for these disasters. Because earthquakes frequently affect populous urban areas with poor structural standards, they often result in high death rates and mass casualties with many traumatic injuries. These injuries are highly mechanical and often multisystem, requiring intensive curative medical and surgical care at a time when the local and regional medical response capacities have been at least partly disrupted. Many patients surviving blunt and penetrating trauma and crush injuries have subsequent complications that lead to additional morbidity and mortality. Here, we review and summarise earthquake-induced injuries and medical complications affecting major organ systems.

  15. Statistical earthquake focal mechanism forecasts

    CERN Document Server

    Kagan, Yan Y


    Forecasts of the focal mechanisms of future earthquakes are important for seismic hazard estimates and for Coulomb stress and other models of earthquake occurrence. Here we report on a high-resolution global forecast of earthquake rate density as a function of location, magnitude, and focal mechanism. In previous publications we reported forecasts of 0.5 degree spatial resolution, covering the latitude range from -75 to +75 degrees, based on the Global Central Moment Tensor earthquake catalog. In the new forecasts we have improved the spatial resolution to 0.1 degree and extended the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each grid point. Simultaneously we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms, using the method ...

  16. Earthquake scenario and probabilistic ground-shaking hazard maps for the Albuquerque-Belen-Santa Fe, New Mexico, corridor (United States)

    Wong, I.; Olig, S.; Dober, M.; Silva, W.; Wright, D.; Thomas, P.; Gregor, N.; Sanford, A.; Lin, K.-W.; Love, D.


    New Mexico's population is concentrated along the corridor that extends from Belen in the south to Española in the north and includes Albuquerque and Santa Fe. The Rio Grande rift, which encompasses the corridor, is a major tectonically, volcanically, and seismically active continental rift in the western U.S. Although only one large earthquake (moment magnitude (M) ≥ 6) has possibly occurred in the New Mexico portion of the rift since 1849, paleoseismic data indicate that prehistoric surface-faulting earthquakes of M 6.5 and greater have occurred on average every 400 yrs on many faults throughout the Rio Grande rift.
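    An average recurrence of roughly 400 yr translates into a probability over an exposure window under the simplest possible model, a time-independent Poisson process (an assumption of this sketch, not of the paper; real surface-faulting earthquakes may be quasi-periodic, which changes the conditional probability):

```python
import math

def poisson_prob(recurrence_yr, window_yr):
    """Probability of at least one event in a time window, assuming a
    time-independent Poisson process with the given mean recurrence."""
    return 1.0 - math.exp(-window_yr / recurrence_yr)

# A 400-yr average recurrence over a 50-yr exposure window:
print(f"{poisson_prob(400, 50):.1%}")   # ~11.8%
```

    Probabilistic hazard maps like those described aggregate such per-fault rates (with magnitude- and distance-dependent ground-motion models) rather than using this single-fault number directly.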

  17. The GIS and analysis of earthquake damage distribution of the 1303 Hongtong M=8 earthquake

    Institute of Scientific and Technical Information of China (English)

    高孟潭; 金学申; 安卫平; 吕晓健


A geographic information system (GIS) for the 1303 Hongtong M=8 earthquake has been established. Using the spatial analysis functions of GIS, the spatial distribution characteristics of damage and the isoseismals of the earthquake are studied. Comparison with a standard earthquake intensity attenuation relationship reveals an anomalous damage distribution, and the relationship of this anomalous distribution to tectonics, site conditions, and basins is analyzed. The influence on ground motion of the earthquake source and of the underground structures near the source is also studied. The implications of the anomalous damage distribution for seismic zonation, earthquake-resistant design, earthquake prediction, and earthquake emergency response are discussed.

  18. Multi-Parameter Observation and Detection of Pre-Earthquake Signals in Seismically Active Areas (United States)

    Ouzounov, D.; Pulinets, S.; Parrot, M.; Liu, J. Y.; Hattori, K.; Kafatos, M.; Taylor, P.


The recent large earthquakes (M9.0 Tohoku, 03/2011; M7.0 Haiti, 01/2010; M6.3 L'Aquila, 04/2009; and M7.9 Wenchuan, 05/2008) have renewed interest in pre-anomalous seismic signals associated with them. Recent workshops (DEMETER 2006, 2011 and VESTO 2009) have shown that there were precursory atmospheric/ionospheric signals observed in space prior to these events. Our initial results indicate that no single pre-earthquake observation (seismic, magnetic field, electric field, thermal infrared [TIR], or GPS/TEC) can provide a consistent and successful global-scale early warning. This is most likely due to the complexity and chaotic nature of earthquakes and the limitations of existing ground (temporal/spatial) and global satellite observations. In this study we analyze preseismic temporal and spatial variations (gas/radon counting rate, atmospheric temperature and humidity change, long-wave radiation transitions, and ionospheric electron density/plasma variations) which we propose occur before the onset of major earthquakes. We propose an Integrated Space-Terrestrial Framework (ISTF) as a different approach for revealing pre-earthquake phenomena in seismically active areas. ISTF is a sensor web of a coordinated observation infrastructure employing multiple sensors distributed on one or more platforms; data come from satellite sensors (Terra, Aqua, POES, DEMETER and others) and ground observations, e.g., Global Positioning System Total Electron Content (GPS/TEC). As a theoretical guide we use the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model to explain the generation of multiple earthquake precursors. Using our methodology, we retrospectively evaluated the signals preceding the most devastating earthquakes during 2005-2011. We observed a correlation between both atmospheric and ionospheric anomalies preceding most of these earthquakes. The second phase of our validation includes systematic retrospective analysis for more than 100 major earthquakes (M>5

  19. Earthquakes - Volcanoes (Causes and Forecast) (United States)

    Tsiapas, E.


Earthquakes are caused by large quantities of liquids (e.g., H2O, H2S, SO2, etc.) moving through the lithosphere and pyrosphere (MOHO discontinuity) until they meet projections (mountains, negative projections, or projections coming from sinking lithosphere). The liquids are moved from west to east, carried along by the pyrosphere because of the differential speed of rotation of the pyrosphere relative to the lithosphere. Taking as a starting point an earthquake noticed in one area, and using statistical studies, we can estimate when, where, and of what magnitude an earthquake caused by the same quantity of liquids may occur in the next region to the east. The forecast of an earthquake ceases to be valid if these components meet a crack in the lithosphere (e.g., the boundaries of lithospheric plates) or a volcano crater. In this case the liquids escape into the atmosphere in the form of gases, carrying small quantities of lava with them (volcanic explosion).

  20. Two models for earthquake forerunners (United States)

    Mjachkin, V.I.; Brace, W.F.; Sobolev, G.A.; Dieterich, J.H.


Similar precursory phenomena have been observed before earthquakes in the United States, the Soviet Union, Japan, and China. Two quite different physical models are used to explain these phenomena. According to a model developed by US seismologists, the so-called dilatancy diffusion model, the earthquake occurs near maximum stress, following a period of dilatant crack expansion. Diffusion of water in and out of the dilatant volume is required to explain the recovery of seismic velocity before the earthquake. According to a model developed by Soviet scientists, growth of cracks is also involved, but diffusion of water in and out of the focal region is not required. With this model, the earthquake is assumed to occur during a period of falling stress, and the recovery of velocity here is due to crack closure as stress relaxes. In general, the dilatancy diffusion model gives a peaked precursor form, whereas the dry model gives a bay form, in which recovery is well under way before the earthquake. A number of field observations should help to distinguish between the two models: study of post-earthquake recovery, time variation of stress and pore pressure in the focal region, the occurrence of pre-existing faults, and any changes in direction of precursory phenomena during the anomalous period. © 1975 Birkhäuser Verlag.

  1. Earthquake damage to underground facilities

    Energy Technology Data Exchange (ETDEWEB)

Pratt, H.R.; Hustrulid, W.A.; Stephenson, D.E.


The potential seismic risk for an underground nuclear waste repository will be one of the considerations in evaluating its ultimate location. However, the risk to subsurface facilities cannot be judged by applying intensity ratings derived from the surface effects of an earthquake. A literature review and analysis were performed to document the damage and non-damage due to earthquakes to underground facilities. Damage from earthquakes to tunnels, shafts, and wells and damage (rock bursts) from mining operations were investigated. Damage from documented nuclear events was also included in the study where applicable. There are very few data on damage in the subsurface due to earthquakes. This fact itself attests to the lessened effect of earthquakes in the subsurface, because mines exist in areas where strong earthquakes have done extensive surface damage. More damage is reported in shallow tunnels near the surface than in deep mines. In mines and tunnels, large displacements occur primarily along pre-existing faults and fractures or at the surface entrances to these facilities. Data indicate vertical structures such as wells and shafts are less susceptible to damage than surface facilities. More analysis is required before seismic criteria can be formulated for the siting of a nuclear waste repository.

  2. Large earthquakes and creeping faults (United States)

    Harris, Ruth A.


    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  3. Intracontinental basins and strong earthquakes

    Institute of Scientific and Technical Information of China (English)

    邓起东; 高孟潭; 赵新平; 吴建春


The September 17, 1303 Hongtong M=8 earthquake occurred in the Linfen basin of the Shanxi down-faulted basin zone. It is the first M=8 earthquake recorded since Chinese historical seismic records began, and a great earthquake occurring within an active intracontinental basin. We held a Meeting of the 700th Anniversary of the 1303 Hongtong M=8 Earthquake in Shanxi and a Symposium on Intracontinental Basins and Strong Earthquakes in Taiyuan City of Shanxi Province on September 17~18, 2003. The articles presented at the symposium discussed the relationships between strong earthquakes in China and active intracontinental basins of different properties developed in different regions, including tensional graben and semi-graben basins in tensile tectonic regions, compression-depression basins and foreland basins in compressive tectonic regions, and pull-apart basins in strike-slip tectonic zones. In this article we give a brief summary of some problems. The articles published in this special issue are a part of those presented at the symposium.

  4. Triggering of volcanic eruptions by large earthquakes (United States)

    Nishimura, Takeshi


When a large earthquake occurs near an active volcano, there is often concern that volcanic eruptions may be triggered by the earthquake. In this study, recently accumulated, reliable data were analyzed to quantitatively evaluate the probability of the occurrence of new eruptions at volcanoes located near the epicenters of large earthquakes. For volcanoes located within 200 km of large earthquakes of magnitude 7.5 or greater, the eruption occurrence probability increases by approximately 50% for 5 years after the earthquake origin time. However, no significant increase in the occurrence probability of new eruptions was observed at more distant volcanoes or for smaller earthquakes. The present results strongly suggest that new eruptions are likely triggered by static stress changes and/or strong ground motions caused by nearby large earthquakes. This contrasts with previously presented evidence that volcanic earthquakes at distant volcanoes are remotely triggered by surface waves generated by large earthquakes.
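The quoted ~50% rate increase can be turned into an eruption probability under a simple Poisson occurrence model. This is an illustrative sketch only: the Poisson assumption, the function name, and the baseline rate are not from the paper; only the 5-year window and the 1.5x rate factor echo the abstract.

```python
import math

def eruption_probability(rate_per_year, years=5.0, rate_factor=1.5):
    """P(at least one eruption in `years`) under a homogeneous Poisson
    model whose baseline rate is scaled by `rate_factor` (e.g. 1.5 for
    the ~50% post-earthquake increase quoted in the abstract)."""
    return 1.0 - math.exp(-rate_factor * rate_per_year * years)
```

For example, a volcano with a baseline rate of 0.1 eruptions/year would have roughly a 53% chance of at least one eruption in the 5 years after a nearby M >= 7.5 earthquake under these assumptions.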

  5. Study pre-earthquake features in the Earth atmosphere-ionosphere environment associated with 2016 Amatrice-Norcia (Central Italy) seismic sequence (United States)

    Ouzounov, Dimitar; Pulinets, Sergey; Giuliani, Gioacchino; Hernández-Pajares, Manuel; García-Rigo, Alberto


The 2016 Amatrice-Norcia (Central Italy) seismic sequence (M6.3, M6.1 and M6.5) became one of the most unusual and important modern earthquake events. Recent studies (including of the April 6th 2009 Abruzzo earthquake) indicate an enhanced coupling between the atmospheric boundary layer and the ionosphere, which has been proposed to be related to large (>M6) earthquakes. This relationship has been studied for the 2016 Central Italy sequence using an integrated set of observations of five physical and environmental parameters. We present observational data from January to November 2016 for five physical parameters (radon, seismicity, temperature of the atmospheric boundary layer, outgoing earth infrared radiation, and GPS/TEC) and their temporal and spatial variations several days before the onset of the Amatrice-Norcia earthquake sequence. The Aug 24 M6.2 foreshock was situated about 70 kilometers from the two radon stations near L'Aquila. These data show an increase prior to the main earthquake beginning in July-August; this enhancement of radon coincides (with some delay) with an increase in the atmospheric chemical potential (Aug 11) measured near the epicentral area from satellite, and subsequently, from Aug 12, with an acceleration of outgoing infrared radiation observed at the top of the atmosphere from the EOS satellite (Aug 16). The GPS/Total Electron Content data indicate an increase of electron concentration in the ionosphere on August 22 and October 26, 1-2 days before the M6.2 foreshock and the M6.5 main shock of Oct 30, 2016. The ground and satellite anomalies have in common that they were evident in about the last ten days before the M6.2 foreshock of Aug 24 and continuously up to the main shock of Oct 30, although the radon variations started two months earlier. We examined the possible correlation between the different pre-earthquake signals in the frame of a multidisciplinary investigation of the Lithosphere-Atmosphere-Ionosphere Coupling concept.

  6. Stress drops and radiated energies of aftershocks of the 1994 Northridge, California, earthquake (United States)

    Mori, Jim; Abercrombie, Rachel E.; Kanamori, Hiroo


    We study stress levels and radiated energy to infer the rupture characteristics and scaling relationships of aftershocks and other southern California earthquakes. We use empirical Green functions to obtain source time functions for 47 of the larger (M ≥ 4.0) aftershocks of the 1994 Northridge, California earthquake (M6.7). We estimate static and dynamic stress drops from the source time functions and compare them to well-calibrated estimates of the radiated energy. Our measurements of radiated energy are relatively low compared to the static stress drops, indicating that the static and dynamic stress drops are of similar magnitude. This is confirmed by our direct estimates of the dynamic stress drops. Combining our results for the Northridge aftershocks with data from other southern California earthquakes appears to show an increase in the ratio of radiated energy to moment, with increasing moment. There is no corresponding increase in the static stress drop. This systematic change in earthquake scaling from smaller to larger (M3 to M7) earthquakes suggests differences in rupture properties that may be attributed to differences of dynamic friction or stress levels on the faults.
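The stress measures compared in this abstract follow standard definitions. A minimal sketch, assuming the usual Eshelby circular-crack static stress drop and the apparent-stress definition (the function names and the default crustal rigidity of 30 GPa are illustrative assumptions, not values from the paper):

```python
def static_stress_drop(m0_nm, radius_m):
    """Eshelby circular-crack estimate of static stress drop in Pa:
    delta_sigma = (7/16) * M0 / r^3, with M0 in N*m and r in m."""
    return 7.0 / 16.0 * m0_nm / radius_m ** 3

def apparent_stress(e_r_j, m0_nm, mu_pa=3.0e10):
    """Apparent stress in Pa: sigma_a = mu * E_R / M0.
    The radiated-energy-to-moment ratio E_R/M0 is the scaled energy
    whose increase with moment the abstract discusses."""
    return mu_pa * e_r_j / m0_nm
```

For a moment of 1e18 N*m (roughly M6) and a 1 km source radius, the static stress drop is about 0.44 GPa times 1e-9, i.e. ~4.4 MPa, a typical tectonic value.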

  7. Anomalous variation in the wireless signals propagation associated with earthquake preparation processes (United States)

    Ouzounov, Dimitar; Velichkova-Yotsova, Sylvia; Pulinets, Sergey


We report anomalous variations in wireless signal propagation correlated with earthquake preparation processes. Our observations revealed a phenomenon of artificial enhancement of the intensity of 3.5 GHz signals using WiMax technology (with no change in the transmitting level) as a result of electric and electrochemical processes in the atmosphere over regions of ongoing earthquake preparation. To illustrate the nature of such variations in the 3.5 GHz range in relation to earthquake processes we present two case studies: 1/ the M5.8 of May 22, 2012 in Bulgaria and 2/ the M6.9 of May 24, 2014 in the Aegean Sea. Concerning the M5.8 of May 22, 2012, the abnormal intensity modulation started on 05.17.2012 (five days in advance) and reached a 200% increase. The epicenter of the M5.8 was 15 km from the wireless receiver. Concerning the M6.9 of May 24, 2014 in the Aegean Sea, an abnormal signal was observed on May 22 (two days in advance) with a 30% intensity increase. The epicenter of the M6.9 was 260 km from the wireless receiver. Most likely the observed increase in intensity is a direct result of changes in the atmospheric properties of the atmospheric boundary layer (ABL) triggered by intensification of radon and other gas releases, which leads to a change in lower-atmosphere conductivity, as already suggested by the Lithosphere-Atmosphere-Ionosphere Coupling concept (Pulinets and Ouzounov, 2011). Another possible cause is forward scattering of the WiMax signal (similar to scattering from meteor wakes) on aerosol layers formed over the earthquake preparation zone. We register an effect of systematic increase (at different rates) at 3.5 GHz associated with regional seismicity, and no significant intensity modulation in the absence of major seismicity in the region.

  8. The RNA m(6)A Reader YTHDF2 Is Essential for the Post-transcriptional Regulation of the Maternal Transcriptome and Oocyte Competence. (United States)

    Ivanova, Ivayla; Much, Christian; Di Giacomo, Monica; Azzi, Chiara; Morgan, Marcos; Moreira, Pedro N; Monahan, Jack; Carrieri, Claudia; Enright, Anton J; O'Carroll, Dónal


    YTHDF2 binds and destabilizes N(6)-methyladenosine (m(6)A)-modified mRNA. The extent to which this branch of m(6)A RNA-regulatory pathway functions in vivo and contributes to mammalian development remains unknown. Here we find that YTHDF2 deficiency is partially permissive in mice and results in female-specific infertility. Using conditional mutagenesis, we demonstrate that YTHDF2 is autonomously required within the germline to produce MII oocytes that are competent to sustain early zygotic development. Oocyte maturation is associated with a wave of maternal RNA degradation, and the resulting relative changes to the MII transcriptome are integral to oocyte quality. The loss of YTHDF2 results in the failure to regulate transcript dosage of a cohort of genes during oocyte maturation, with enrichment observed for the YTHDF2-binding consensus and evidence of m(6)A in these upregulated genes. In summary, the m(6)A-reader YTHDF2 is an intrinsic determinant of mammalian oocyte competence and early zygotic development. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Twitter earthquake detection: earthquake monitoring in a social world

    Directory of Open Access Journals (Sweden)

    Daniel C. Bowden


Full Text Available The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word “earthquake” clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provide very short first-impression narratives from people who experienced the shaking.
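The short-term-average/long-term-average (STA/LTA) detector described above can be sketched as follows. Window lengths and the trigger threshold here are illustrative assumptions, not the tuned values used by the USGS study.

```python
import numpy as np

def sta_lta_detect(counts, n_sta=2, n_lta=30, threshold=5.0):
    """Flag indices of a tweet-frequency time series (e.g. tweets per
    minute containing the word "earthquake") where the short-term
    average exceeds `threshold` times the preceding long-term average.
    """
    counts = np.asarray(counts, dtype=float)
    triggers = []
    for i in range(n_lta, len(counts)):
        sta = counts[i - n_sta + 1 : i + 1].mean()  # recent activity
        lta = counts[i - n_lta : i].mean()          # background level
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers
```

A flat background of a couple of "earthquake" tweets per minute never triggers; a sudden jump to tens of tweets per minute does, which is why such detections can land within minutes of the origin time.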

  10. Mouse Maternal High-Fat Intake Dynamically Programmed mRNA m6A Modifications in Adipose and Skeletal Muscle Tissues in Offspring (United States)

    Li, Xiao; Yang, Jing; Zhu, Youbo; Liu, Yuan; Shi, Xin’e; Yang, Gongshe


    Epigenetic mechanisms have an important role in the pre- and peri-conceptional programming by maternal nutrition. Yet, whether or not RNA m6A methylation—an old epigenetic marker receiving increased attention recently—is involved remains an unknown question. In this study, mouse high-fat feeding prior to conception was shown to induce overweight and glucose intolerant dams, which then continued to be exposed to a high-fat diet during gestation and lactation. The dams on a standard diet throughout the whole experiment were used as a control. Results showed that maternal high-fat intake impaired postnatal growth in male offspring, indicated by decreased body weight and Lee’s index at 3, 8 and 15 weeks old, but the percentages of visceral fat and tibialis anterior relative to the whole body weights were significantly increased at eight weeks of age. The maternal high-fat exposure significantly increased mRNA N6-methyladenosine (m6A) levels in visceral fat at three weeks old, combined with downregulated Fat mass and obesity-associated gene (FTO) and upregulated Methyltransferase like 3 (METTL3) transcription, and these changes were reversed at eight weeks of age. In the tibialis anterior muscle, the maternal high-fat diet significantly enhanced m6A modifications at three weeks, and lowered m6A levels at 15 weeks of age. Accordingly, FTO transcription was significantly inhibited at three weeks and stimulated at 15 weeks of age, and METTL3 transcripts were significantly improved at three weeks. Interestingly, both FTO and METTL3 transcription was significantly elevated at eight weeks of age, and yet the m6A modifications remained unchanged. Our study showed that maternal high-fat intake could affect mRNA m6A modifications and its related genes in offspring in a tissue-specific and development-dependent way, and provided an interesting indication of the working of the m6A system during the transmission from maternal nutrition to subsequent generations. PMID:27548155

  11. Mouse Maternal High-Fat Intake Dynamically Programmed mRNA m6A Modifications in Adipose and Skeletal Muscle Tissues in Offspring

    Directory of Open Access Journals (Sweden)

    Xiao Li


Full Text Available Epigenetic mechanisms have an important role in the pre- and peri-conceptional programming by maternal nutrition. Yet, whether or not RNA m6A methylation—an old epigenetic marker receiving increased attention recently—is involved remains an unknown question. In this study, mouse high-fat feeding prior to conception was shown to induce overweight and glucose intolerant dams, which then continued to be exposed to a high-fat diet during gestation and lactation. The dams on a standard diet throughout the whole experiment were used as a control. Results showed that maternal high-fat intake impaired postnatal growth in male offspring, indicated by decreased body weight and Lee’s index at 3, 8 and 15 weeks old, but the percentages of visceral fat and tibialis anterior relative to the whole body weights were significantly increased at eight weeks of age. The maternal high-fat exposure significantly increased mRNA N6-methyladenosine (m6A) levels in visceral fat at three weeks old, combined with downregulated Fat mass and obesity-associated gene (FTO) and upregulated Methyltransferase like 3 (METTL3) transcription, and these changes were reversed at eight weeks of age. In the tibialis anterior muscle, the maternal high-fat diet significantly enhanced m6A modifications at three weeks, and lowered m6A levels at 15 weeks of age. Accordingly, FTO transcription was significantly inhibited at three weeks and stimulated at 15 weeks of age, and METTL3 transcripts were significantly improved at three weeks. Interestingly, both FTO and METTL3 transcription was significantly elevated at eight weeks of age, and yet the m6A modifications remained unchanged. Our study showed that maternal high-fat intake could affect mRNA m6A modifications and its related genes in offspring in a tissue-specific and development-dependent way, and provided an interesting indication of the working of the m6A system during the transmission from maternal nutrition to subsequent generations.

  12. The Electronic Encyclopedia of Earthquakes (United States)

    Benthien, M.; Marquis, J.; Jordan, T.


    The Electronic Encyclopedia of Earthquakes is a collaborative project of the Southern California Earthquake Center (SCEC), the Consortia of Universities for Research in Earthquake Engineering (CUREE) and the Incorporated Research Institutions for Seismology (IRIS). This digital library organizes earthquake information online as a partner with the NSF-funded National Science, Technology, Engineering and Mathematics (STEM) Digital Library (NSDL) and the Digital Library for Earth System Education (DLESE). When complete, information and resources for over 500 Earth science and engineering topics will be included, with connections to curricular materials useful for teaching Earth Science, engineering, physics and mathematics. Although conceived primarily as an educational resource, the Encyclopedia is also a valuable portal to anyone seeking up-to-date earthquake information and authoritative technical sources. "E3" is a unique collaboration among earthquake scientists and engineers to articulate and document a common knowledge base with a shared terminology and conceptual framework. It is a platform for cross-training scientists and engineers in these complementary fields and will provide a basis for sustained communication and resource-building between major education and outreach activities. For example, the E3 collaborating organizations have leadership roles in the two largest earthquake engineering and earth science projects ever sponsored by NSF: the George E. Brown Network for Earthquake Engineering Simulation (CUREE) and the EarthScope Project (IRIS and SCEC). The E3 vocabulary and definitions are also being connected to a formal ontology under development by the SCEC/ITR project for knowledge management within the SCEC Collaboratory. The E3 development system is now fully operational, 165 entries are in the pipeline, and the development teams are capable of producing 20 new, fully reviewed encyclopedia entries each month. Over the next two years teams will

  13. Evidence for Ancient Mesoamerican Earthquakes (United States)

    Kovach, R. L.; Garcia, B.


Evidence for past earthquake damage at Mesoamerican ruins is often overlooked because of the invasive effects of tropical vegetation and is usually not considered as a causal factor when restoration and reconstruction of many archaeological sites are undertaken. Yet the proximity of many ruins to zones of seismic activity would argue otherwise. Clues as to the types of damage which should be sought were offered in September 1999 when the M = 7.5 Oaxaca earthquake struck the ruins of Monte Alban, Mexico, where archaeological renovations were underway. More than 20 structures were damaged, 5 of them seriously. Damage features noted were walls out of plumb; fractures in walls, floors, basal platforms, and tableros; toppling of columns; and deformation, settling, and tumbling of walls. A Modified Mercalli Intensity of VII (ground accelerations 18-34 %g) occurred at the site. Within the diffuse landward extension of the Caribbean plate boundary zone, M = 7+ earthquakes occur with repeat times of hundreds of years, arguing that many Maya sites were subjected to earthquakes. Damage to re-erected and reinforced stelae, walls, and buildings was witnessed at Quirigua, Guatemala, during an expedition underway when the 1976 M = 7.5 Guatemala earthquake on the Motagua fault struck. Excavations also revealed evidence (domestic pottery vessels and the skeleton of a child crushed under fallen walls) of an ancient earthquake occurring about the time of the demise and abandonment of Quirigua in the late 9th century. Striking evidence for sudden earthquake building collapse at the end of the Maya Classic Period ~A.D. 889 was found at Benque Viejo (Xunantunich), Belize, located 210 km north of Quirigua. It is argued that a M = 7.5 to 7.9 earthquake at the end of the Maya Classic period centered in the vicinity of the Chixoy-Polochic and Motagua fault zones could have produced the contemporaneous earthquake damage at the above sites. As a consequence, this earthquake may have accelerated the

  14. A Prospect of Earthquake Prediction Research

    CERN Document Server

    Ogata, Yosihiko


    Earthquakes occur because of abrupt slips on faults due to accumulated stress in the Earth's crust. Because most of these faults and their mechanisms are not readily apparent, deterministic earthquake prediction is difficult. For effective prediction, complex conditions and uncertain elements must be considered, which necessitates stochastic prediction. In particular, a large amount of uncertainty lies in identifying whether abnormal phenomena are precursors to large earthquakes, as well as in assigning urgency to the earthquake. Any discovery of potentially useful information for earthquake prediction is incomplete unless quantitative modeling of risk is considered. Therefore, this manuscript describes the prospect of earthquake predictability research to realize practical operational forecasting in the near future.

  15. Bounding Ground Motions for Hayward Fault Scenario Earthquakes Using Suites of Stochastic Rupture Models (United States)

    Rodgers, A. J.; Xie, X.; Petersson, A.


    The next major earthquake in the San Francisco Bay area is likely to occur on the Hayward-Rodgers Creek Fault system. Attention on the southern Hayward section is appropriate given the upcoming 140th anniversary of the 1868 M 7 rupture coinciding with the estimated recurrence interval. This presentation will describe ground motion simulations for large (M > 6.5) earthquakes on the Hayward Fault using a recently developed elastic finite difference code and high-performance computers at Lawrence Livermore National Laboratory. Our code easily reads the recent USGS 3D seismic velocity model of the Bay Area developed in 2005 and used for simulations of the 1906 San Francisco and 1989 Loma Prieta earthquakes. Previous work has shown that the USGS model performs very well when used to model intermediate period (4-33 seconds) ground motions from moderate (M ~ 4-5) earthquakes (Rodgers et al., 2008). Ground motions for large earthquakes are strongly controlled by the hypocenter location, spatial distribution of slip, rise time and directivity effects. These are factors that are impossible to predict in advance of a large earthquake and lead to large epistemic uncertainties in ground motion estimates for scenario earthquakes. To bound this uncertainty, we are performing suites of simulations of scenario events on the Hayward Fault using stochastic rupture models following the method of Liu et al. (Bull. Seism. Soc. Am., 96, 2118-2130, 2006). These rupture models have spatially variable slip, rupture velocity, rise time and rake constrained by characterization of inferred finite fault ruptures and expert opinion. Computed ground motions show variability due to the variability in rupture models and can be used to estimate the average and spread of ground motion measures at any particular site. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No.W-7405-Eng-48. This is
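One way to picture "suites of stochastic rupture models" is to generate many random slip realizations with a fixed seismic moment and look at the spread of a summary statistic across the suite. The lognormal slip model and every parameter below are illustrative assumptions for the sake of a sketch, not the Liu et al. (2006) parameterization used in the study, which also randomizes rupture velocity, rise time, and rake.

```python
import numpy as np

def random_slip_model(rng, nx=20, nz=10, mean_slip_m=1.0):
    """One stochastic rupture realization: lognormal spatially variable
    slip on an nx-by-nz fault grid, rescaled so that the average slip
    (and hence the seismic moment) is the same for every realization."""
    slip = rng.lognormal(mean=0.0, sigma=0.7, size=(nz, nx))
    return slip * (mean_slip_m / slip.mean())

def suite_statistics(n_models=100, seed=0):
    """Mean and spread of peak slip across a suite of realizations,
    a stand-in for the average and spread of ground-motion measures
    one would compute at a site from the simulated suite."""
    rng = np.random.default_rng(seed)
    peaks = [random_slip_model(rng).max() for _ in range(n_models)]
    return float(np.mean(peaks)), float(np.std(peaks))
```

The nonzero spread across realizations is exactly the epistemic uncertainty the abstract aims to bound: the moment is fixed, but slip heterogeneity alone changes the extremes.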

  16. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake. (United States)

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi


    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  17. Earthquake Source and Ground Motion Characteristics of Great Kanto Earthquakes (United States)

    Somerville, P. G.; Sato, T.; Wald, D. J.; Graves, R. W.; Dan, K.


    This paper describes the derivation of a rupture model of the 1923 Kanto earthquake, and the estimation of ground motions that occurred during that earthquake and that might occur during future great Kanto earthquakes. The rupture model was derived from the joint inversion of geodetic and teleseismic data. The leveling and triangulation data place strong constraints on the distribution and orientation of slip on the fault. The most concentrated slip is in the shallow central and western part of the fault. The location of the hypocenter on the western part of the fault gives rise to strong near-fault rupture directivity effects, which are largest toward the east in the Boso Peninsula. To estimate the ground motions caused by this earthquake, we first calibrated 1D and 3D wave propagation path effects using the Odawara earthquake of 5 August 1990 (M 5.1), the first earthquake larger than M 5 in the last 60 years near the hypocenter of the 1923 Kanto earthquake. The simulation of the moderate-sized Odawara earthquake demonstrates that the 3D velocity model works quite well at reproducing the recorded long-period (T > 3.33 sec) strong motions, including basin-generated surface waves, for a number of sites located throughout the Kanto basin region. Using this validated 3D model along with the rupture model described above, we simulated the long-period (T > 4 sec) ground motions in this region for the 1923 Kanto earthquake. The largest ground motions occur east of the epicenter along the central and southern part of the Boso Peninsula. These large motions arise from strong rupture directivity effects and consist of relatively simple, source-controlled pulses with a dominant period of about 10 sec. Other rupture models and hypocenter locations generally produce smaller long-period ground motion levels in this region than those of the 1923 event. North of the epicentral region, in the Tokyo area, 3D basin-generated phases are quite significant, and these phases

  18. Earthquake cycles on rate-state faults: how does recurrence interval and its variability depend on fault length? (United States)

    Cattania, C.; Segall, P.


    The concept of earthquake cycles is often invoked when discussing seismic risk. However, large faults exhibit more complex behavior than periodic stick-slip cycles. Some events, such as the 2004 Parkfield earthquake, are delayed relative to the mean recurrence interval; in other cases, ruptures are larger or smaller than expected. In contrast, small earthquakes can be very predictable: locked patches surrounded by aseismic creep can rupture periodically in events with similar waveforms. We use numerical tools and ideas from fracture mechanics to study the factors determining recurrence interval (T), rupture size and their variability at different scales. T has been estimated by assuming a constant stress drop and a stressing rate inversely proportional to fault length (D). However, Werner & Rubin (2013) found that an energy criterion better explains the scaling of T vs. D in numerical models: on faults loaded from below, full ruptures occur when the elastic energy release rate at the top of the fault reaches the fracture energy. We run simulations of seismic cycles on rate-state faults including dynamic weakening from thermal pressurization. A fault composed of a velocity-weakening part over a velocity-strengthening one is loaded from below at a constant slip rate. We find that T increases with thermal pressurization, and verify that the energy argument, modified to account for the fracture energy from thermal pressurization, provides a good estimate of T and its scaling with D. We suggest that the recurrence interval is determined by two timescales: the time required to accumulate sufficient elastic energy for full rupture (tf), and the nucleation time, controlled by the propagation of a creep front into the velocity-weakening region (tn). Both timescales depend on fault length: tf increases with D, and tn decreases. The latter is due to faster afterslip in the velocity-strengthening region on larger faults. If tn < tf, partial ruptures occur; for large faults, tn

  19. Extreme value statistics and thermodynamics of earthquakes: large earthquakes

    Directory of Open Access Journals (Sweden)

    B. H. Lavenda


    Full Text Available A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high-energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.

  20. Earthquake, GIS and multimedia. The 1883 Casamicciola earthquake

    Directory of Open Access Journals (Sweden)

    M. Rebuffat


    Full Text Available A series of multimedia monographs concerning the main seismic events that have affected the Italian territory are in the process of being produced for the Documental Integrated Multimedia Project (DIMP) started by the Italian National Seismic Survey (NSS). The purpose of the project is to reconstruct the historical record of earthquakes and promote earthquake public education. Producing the monographs, developed in ARC INFO and working in UNIX, involved designing a special filing and management methodology to integrate heterogeneous information (images, papers, cartographies, etc.). This paper describes the possibilities of a GIS (Geographic Information System) in the filing and management of documental information. As an example we present the first monograph, on the 1883 Casamicciola earthquake on the island of Ischia (Campania, Italy). This earthquake is particularly interesting for the following reasons: 1) its historical-cultural context (first destructive seismic event after the unification of Italy); 2) its features (volcanic earthquake); 3) the socioeconomic consequences it caused at such an important seaside resort.

  1. A new algorithm to find earthquake clusters using neighboring cell connection and rate analysis. (United States)

    Peng, W.; Toda, S.


    To study earthquake interaction, it is important to objectively find groups of earthquakes that occurred closely in space and time. Earthquake clusters are chosen with previous techniques that characterize them as mainshock-aftershock sequences or swarm sequences by empirical laws (e.g., Omori-Utsu; ETAS) or direct assumptions about physical processes such as stress transfer, transient stress loading, and fluid migration. Recent papers instead proposed non-parameterized techniques such as a kernel-based smoothing method. The cumulative rate clustering method (CURATE, Jacobs et al., 2013) is one of the approaches without any direct assumptions. The CURATE method was applied in New Zealand and gave good results for selecting swarm sequences compared with the ETAS model. However, it is still difficult to choose a proper confined area and time interval for extracting sequences from the catalog. To avoid arbitrariness in the space and time parameters, here we propose a new method modifying the CURATE approach. We first identify spatial clusters by examining the spatial distribution in a 2-D cell-gridded map. The spatial clusters are defined as multiple neighboring cells, each of which contains at least one earthquake in a time period T. From the selected spatial clusters, we then evaluate temporal clustering, which is defined as a transient increase of seismicity rate compared to the rate before the target event. We tested this method on shallow crustal seismicity in northern Honshu, Japan. We chose the parameter range from T = 1 to 100 days and cell size = 0.01° to 0.1°. As a result, the number of clusters increases with longer T and larger cell size. By choosing T = 30 days and cell size = 0.05°, we successfully selected the long-lasting aftershock sequences associated with the 2004 M6.8 Chuetsu and 2007 M6.8 Chuetsu-oki earthquakes, while other empirical models and the CURATE method failed to decluster.
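
    The neighboring-cell connection step can be sketched as a connected-components search over occupied grid cells. The following Python sketch is illustrative only (the function name, the 8-neighbor connectivity, and the omission of the time window T are assumptions, not the authors' implementation):

```python
from collections import deque

def spatial_clusters(events, cell_size=0.05):
    """Group (lon, lat) events into clusters of occupied neighboring cells.

    A cluster is a maximal set of grid cells, each containing at least one
    event, connected through their 8 neighbors. Hypothetical simplification
    of the neighboring-cell connection described in the abstract.
    """
    # Map each occupied cell to the events falling inside it.
    cells = {}
    for lon, lat in events:
        key = (int(lon // cell_size), int(lat // cell_size))
        cells.setdefault(key, []).append((lon, lat))

    seen, clusters = set(), []
    for start in cells:
        if start in seen:
            continue
        # Breadth-first search over occupied neighboring cells.
        members, queue = [], deque([start])
        seen.add(start)
        while queue:
            cx, cy = queue.popleft()
            members.extend(cells[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in cells and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(members)
    return clusters
```

Temporal clustering would then be evaluated separately within each returned cluster, by comparing seismicity rates before and after a target event.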

  2. Tsunami Waveform Inversion without Assuming Fault Models- Application to Recent Three Earthquakes around Japan (United States)

    Namegaya, Y.; Ueno, T.; Satake, K.; Tanioka, Y.


    Tsunami waveform inversion is often used to study the source of tsunamigenic earthquakes. In this method, subsurface fault planes are divided into small subfaults, and the slip distribution, then the seafloor deformation, are estimated. However, it is sometimes difficult to judge the actual fault plane for offshore earthquakes such as those along the eastern margin of the Japan Sea. We developed an inversion method to estimate vertical seafloor deformation directly from observed tsunami waveforms. The tsunami source area is divided into many nodes, and the vertical seafloor deformation around each node is calculated by using B-spline functions. The tsunami waveforms are calculated from each node and used as the Green’s functions for inversion. To stabilize the inversion and avoid overestimation of data errors, we introduce smoothing equations like Laplace’s equations. The optimum smoothing strength is estimated from Akaike’s Bayesian Information Criterion (ABIC). The advantage of this method is that the vertical seafloor deformation can be estimated without assuming a fault plane. We applied the method to three recent earthquakes around Japan: the 2007 Chuetsu-oki, 2007 Noto Hanto, and 2003 Tokachi-oki earthquakes. The Chuetsu-oki earthquake (M6.8) occurred off the Japan Sea coast of central Japan on 16 July 2007. For this earthquake, the complicated aftershock distribution makes it difficult to judge which of the southeast-dipping fault or the northwest-dipping fault was the actual fault plane. The tsunami inversion result indicates that the uplifted area extends about 10 km from the coastline, and there are two peaks of uplift: about 40 cm in the south and about 20 cm in the north. The Noto Hanto earthquake (M6.9) occurred off the Noto peninsula, also along the Japan Sea coast of central Japan, on 25 March 2007. The inversion result indicates that the uplifted area extends about 10 km off the coast, and the largest uplift amount is more than 40 cm. Location of
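
    Stripped to essentials, such an inversion is regularized least squares: smoothing equations are stacked beneath the data equations. The 1-D Python sketch below is a toy stand-in (the function name and fixed smoothing weight are assumptions; the actual method works on 2-D B-spline nodes and selects the weight by ABIC):

```python
import numpy as np

def smoothed_inversion(G, d, alpha):
    """Solve min ||G m - d||^2 + alpha^2 ||L m||^2 for model vector m,
    where L is a discrete 1-D Laplacian: a toy version of the smoothing
    equations described in the abstract (hypothetical, not the authors' code)."""
    n = G.shape[1]
    # Second-difference (Laplacian) operator over interior model nodes.
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Stack data equations and weighted smoothing equations into one system.
    A = np.vstack([G, alpha * L])
    b = np.concatenate([d, np.zeros(n - 2)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m
```

In the real method, `G` would hold the tsunami-waveform Green’s functions computed from each node, and `alpha` would be scanned to minimize ABIC.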

  3. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students (United States)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.


    In addition to daily lecture and lab exercises, COSMOS students also conduct a mini-research project of their choice that uses data ranging from the 2004 Parkfield Earthquake, to Southern California seismicity, to global seismicity. Students collect seismic data from the Internet and evaluate earthquake locations, magnitudes, temporal sequence of seismic activity, active fault planes, and plate tectonic boundaries using research-quality techniques. Students are given the opportunity to build 3-D visualizations of their research data sets and archive these at the SIO Visualization Center's online library, which is globally accessible to students, teachers, researchers, and the general public. These student-generated visualizations have become a practical resource for not only students and teachers, but also geophysical researchers who use the visual objects as research tools to better explore and understand their data. Through Earthquakes in Action, we offer both the tools for scientific exploration and the thrills of scientific discovery, providing students with valuable knowledge, novel research experience, and a unique sense of scientific contribution.

  4. Earthquake fault superhighways (United States)

    Robinson, D. P.; Das, S.; Searle, M. P.


    Motivated by the observation that the rare earthquakes which propagated for significant distances at supershear speeds occurred on very long straight segments of faults, we examine every known major active strike-slip fault system on land worldwide and identify those with long (> 100 km) straight portions capable not only of sustained supershear rupture speeds but having the potential to reach compressional wave speeds over significant distances, and call them "fault superhighways". The criteria used for identifying these are discussed. These superhighways include portions of the 1000 km long Red River fault in China and Vietnam passing through Hanoi, the 1050 km long San Andreas fault in California passing close to Los Angeles, Santa Barbara and San Francisco, the 1100 km long Chaman fault system in Pakistan north of Karachi, the 700 km long Sagaing fault connecting the first and second cities of Burma, Rangoon and Mandalay, the 1600 km Great Sumatra fault, and the 1000 km Dead Sea fault. Of the 11 faults so classified, nine are in Asia and two in North America, with seven located near areas of very dense populations. Based on the current population distribution within 50 km of each fault superhighway, we find that more than 60 million people today have increased seismic hazards due to them.

  5. The music of earthquakes and Earthquake Quartet #1 (United States)

    Michael, Andrew J.


    Earthquake Quartet #1, my composition for voice, trombone, cello, and seismograms, is the intersection of listening to earthquakes as a seismologist and performing music as a trombonist. Along the way, I realized there is a close relationship between what I do as a scientist and what I do as a musician. A musician controls the source of the sound and the path it travels through their instrument in order to make sound waves that we hear as music. An earthquake is the source of waves that travel along a path through the earth until reaching us as shaking. It is almost as if the earth is a musician and people, including seismologists, are metaphorically listening and trying to understand what the music means.

  6. Using earthquake intensities to forecast earthquake occurrence times

    Directory of Open Access Journals (Sweden)

    J. R. Holliday


    Full Text Available It is well known that earthquakes do not occur randomly in space and time. Foreshocks, aftershocks, precursory activation, and quiescence are just some of the patterns recognized by seismologists. Using the Pattern Informatics technique along with relative intensity analysis, we create a scoring method based on time-dependent relative operating characteristic diagrams and show that the occurrences of large earthquakes in California correlate with time intervals where fluctuations in small earthquakes are suppressed relative to the long-term average. We estimate a probability of less than 1% that this coincidence is due to random clustering. Furthermore, we show that the methods used to obtain these results may be applicable to other parts of the world.
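
    A relative operating characteristic diagram plots hit rate against false-alarm rate as a decision threshold sweeps over forecast scores. A minimal Python sketch of that ingredient (names are illustrative; this is not the authors' scoring method, and it assumes both alarm classes are present):

```python
def roc_points(scores, outcomes):
    """Return (false-alarm rate, hit rate) points as the alarm threshold
    sweeps from high to low over forecast scores. `outcomes` holds 1 where
    a large earthquake occurred and 0 where it did not (illustrative sketch)."""
    pairs = sorted(zip(scores, outcomes), reverse=True)
    pos = sum(outcomes)            # number of occurrences
    neg = len(outcomes) - pos      # number of non-occurrences
    hits = falses = 0
    points = [(0.0, 0.0)]
    for _, occurred in pairs:
        if occurred:
            hits += 1
        else:
            falses += 1
        points.append((falses / neg, hits / pos))
    return points
```

A forecast that concentrates high scores on intervals that do produce large events traces a curve well above the diagonal.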

  7. Stress drop Scaling and Stress Release in the Darfield-Christchurch, New Zealand Earthquake Sequence (United States)

    Abercrombie, R. E.; Fry, B.; Gerstenberger, M. C.; Doser, D. I.; Bannister, S. C.


    To investigate earthquake rupture dynamics, and which factors (e.g. normal stress, strain rate, fluids, rheology) govern the earthquake source and consequent ground motions, we need to study earthquakes over a wide range of magnitudes, from a diverse range of tectonic environments. The uncertainties and discrepancies between studies of earthquake stress drop are a frustration to all those who are interested in earthquake source and fault dynamics. There is controversy over whether the earthquake rupture process is self-similar and whether it varies with tectonic setting; different studies give different results. It is unclear whether this is due to differences between the earthquakes, or the analysis methods. We are developing a direct wave, spectral ratio analysis approach that includes realistic estimates of uncertainties and has strict objective criteria for assessing the quality of an EGF-derived spectral ratio (Abercrombie, 2012, submitted). Comparing this approach to other methods reveals significant random and systematic biases, enabling us to improve our understanding of the real uncertainties. The Canterbury earthquake sequence that began with the M7.1 Darfield earthquake in September 2010, and includes the devastating M6.2 Christchurch earthquake in February 2011, is a very active sequence within a low-strain-rate tectonic setting. To date there have been 15 earthquakes with M>5.5. High-quality recording and accurate relocations make this an ideal sequence to investigate any spatial, temporal, or magnitude dependence of stress drop. The largest earthquakes appear to have relatively high stress drops (and apparent stress), consistent with the high ground accelerations and damage in Christchurch. This observation is also consistent with the hypothesis that faults in low-strain-rate regions with long inter-event times rupture in higher stress drop earthquakes. We use recordings from the various GeoNet broadband stations deployed to record the ongoing

  8. Retrospection on the Conclusions of Earthquake Tendency Forecast before the Wenchuan Ms8.0 Earthquake

    Institute of Scientific and Technical Information of China (English)

    Liu Jie; Guo Tieshuan; Yang Liming; Su Youjin; Li Gang


    The reason for the failure to forecast the Wenchuan Ms8.0 earthquake is under study, based on the systematic collection of seismicity anomalies and their analysis results from annual earthquake tendency forecasts between the 2001 Western Kunlun Mountains Pass Ms8.1 earthquake and the 2008 Wenchuan Ms8.0 earthquake. The results show that the earthquake tendency estimation for the Chinese Mainland was for strong earthquakes to occur in the active stage, and that there was still potential for the occurrence of a Ms8.0 large earthquake in the Chinese Mainland after the 2001 Western Kunlun Mountains Pass earthquake. However, the occurrence of many large earthquakes around the Chinese Mainland, the 6-year-long quietude of Ms7.0 earthquakes, and an obvious quietude of Ms5.0 and Ms6.0 earthquakes during 2002~2007 led to a distinctly lower forecast estimation of earthquake tendency in the Chinese Mainland after 2006. The middle part of the north-south seismic belt has been designated a seismic risk area for strong earthquakes in recent years, but the estimation of the risk degree in Southwestern China was insufficient after the Ning'er Ms6.4 earthquake in Yunnan in 2007. There are no records of earthquakes with Ms≥7.0 on the Longmenshan fault, which is one of the reasons that this fault was not considered a seismic risk area for strong earthquakes in recent years.

  9. Earthquake forecast via neutrino tomography

    Institute of Scientific and Technical Information of China (English)

    WANG Bin; CHEN Ya-Zheng; LI Xue-Qian


    We discuss the possibility of forecasting earthquakes by means of (anti)neutrino tomography. Antineutrinos emitted from reactors are used as a probe. As the antineutrinos traverse a region prone to earthquakes, observable variations in the matter effect on the antineutrino oscillation would provide a tomography of the vicinity of the region. In this preliminary work, we adopt a simplified model for the geometrical profile and matter density in a fault zone. We calculate the survival probability of electron antineutrinos for cases without and with an anomalous accumulation of electrons, which can be considered a clear signal of a coming earthquake at a geological region with a fault zone, and find that the variation may reach as much as 3% for antineutrinos emitted from a reactor. The case of a neutrino beam from a neutrino factory is also investigated, and it is noted that, because of the typically high energy associated with such neutrinos, the oscillation length is too large and the resultant variation is not practically observable. Our conclusion is that with present reactor facilities and detection techniques, it is still a difficult task to make an earthquake forecast using such a scheme, though it seems possible from a theoretical point of view while ignoring some uncertainties. However, with developments in geology, especially knowledge about fault zones, and with improvements in detection techniques, there is hope that a medium-term earthquake forecast would be feasible.
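
    The baseline quantity in such a scheme is the oscillation survival probability. The Python sketch below evaluates only the standard two-flavor vacuum formula, P = 1 - sin²(2θ) sin²(1.27 Δm² L / E); the matter effect that carries the abstract's signal is omitted, and the parameter values are illustrative solar-sector numbers, not the paper's:

```python
import math

def survival_probability(L_km, E_GeV, sin2_2theta=0.85, dm2_eV2=7.5e-5):
    """Two-flavor vacuum survival probability of an electron antineutrino.

    P = 1 - sin^2(2θ) · sin^2(1.27 Δm² L / E), with Δm² in eV², baseline L
    in km, and energy E in GeV (the usual units that make the 1.27 factor
    valid). Vacuum-only sketch; the MSW matter potential is not included.
    """
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2
```

For reactor antineutrinos (E of a few MeV, i.e. ~0.004 GeV), an anomalous electron density along the baseline would perturb this probability via the matter term, which is the observable the abstract proposes to monitor.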

  10. Extreme value distribution of earthquake magnitude (United States)

    Zi, Jun Gan; Tung, C. C.


    Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Adler-Lomnitz and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
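
    The starting point of any such derivation is that the maximum of n independent magnitudes with common CDF F(m) has CDF F(m)^n. A minimal Python illustration, using an unbounded Gutenberg-Richter exponential law for F rather than the paper's energy-release model (the parameter values are illustrative, not fitted to the 1904-1980 catalog):

```python
import math

def max_magnitude_cdf(m, n, beta=math.log(10), m_min=4.0):
    """CDF of the largest of n magnitudes drawn independently from a
    Gutenberg-Richter exponential law F(m) = 1 - exp(-beta (m - m_min)).

    The maximum of n independent draws has CDF F(m)**n. Illustrative
    sketch only; the abstract's model for large-event energy release
    modifies F itself.
    """
    if m < m_min:
        return 0.0
    F = 1.0 - math.exp(-beta * (m - m_min))
    return F ** n
```

As n grows, the distribution of the maximum shifts toward larger magnitudes, which is why catalog length matters when estimating such parameters by least squares.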

  11. Earthquakes in Central California, 1980-1984 (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — There have been many earthquake occurrences in central California. This set of slides shows earthquake damage from the following events: Livermore, 1980, Coalinga,...

  12. Inside a Crustal Earthquake - the Rock Evidence (United States)

    Sibson, R. H.


    Exhumed fault rock assemblages provide insights into fault zone structure, rupture processes and physical conditions of seismogenesis which can be melded with high-resolution geophysical information on modern earthquakes. The transition from dominantly cataclasite-series to mylonite-series fault rocks at greenschist and greater grades of metamorphism is the basis of fault zone models and rheological strength profiles defining the FR-VS (frictional-viscous) transition which governs the base of the microseismically defined seismogenic zone, within which larger ruptures are mostly contained. In areas of crust deforming under moderate-to-high heat flow (e.g. Japan, California) there is good correlation between geothermal gradient and the base of microseismic activity in the crust. However, compositional variations (e.g. quartz- vs. feldspar-dominant rheology) plus other factors such as water content locally perturb the base of the seismogenic zone, creating strength asperities which may affect the nucleation of large ruptures (e.g.1989 M6.9 Loma Prieta earthquake). The level of shear stress driving rupturing within the seismogenic zone remains problematic. While some estimates (e.g. those inferred from pseudotachylyte friction-melts) are broadly consistent with expectations for the frictional strength of optimally oriented faults with 'Byerlee friction' (τ ~ 80-240 MPa at 10 km depth, depending on faulting mode), others (e.g. faults with associated hydrothermal extension veins) appear to slip at much lower levels of shear stress (max. τ 90% of global seismic moment release) and areas of active compressional inversion (e.g. NE Honshu). However, while fault overpressuring is more easily generated and sustained in compressional regimes, it may be more widespread than once thought. The presence of incrementally deposited hydrothermal veins along fault slip surfaces (often associated with subsidiary extension vein arrays) is not uncommon in fault assemblages exhumed from

  13. A Public Health Issue Related To Collateral Seismic Hazards: The Valley Fever Outbreak Triggered By The 1994 Northridge, California Earthquake (United States)

    Jibson, Randall W.

    Following the 17 January 1994 Northridge, California earthquake (M = 6.7), Ventura County, California, experienced a major outbreak of coccidioidomycosis (CM), commonly known as valley fever, a respiratory disease contracted by inhaling airborne fungal spores. In the 8 weeks following the earthquake (24 January through 15 March), 203 outbreak-associated cases were reported, which is about an order of magnitude more than the expected number of cases, and three of these cases were fatal. Simi Valley, in easternmost Ventura County, had the highest attack rate in the county, and the attack rate decreased westward across the county. The temporal and spatial distribution of CM cases indicates that the outbreak resulted from inhalation of spore-contaminated dust generated by earthquake-triggered landslides. Canyons northeast of Simi Valley produced many highly disrupted, dust-generating landslides during the earthquake and its aftershocks. Winds after the earthquake were from the northeast, which transported dust into Simi Valley and beyond to communities to the west. The three fatalities from the CM epidemic accounted for 4 percent of the total earthquake-related fatalities.

  14. Lithospheric flexure under the Hawaiian volcanic load: Internal stresses and a broken plate revealed by earthquakes (United States)

    Klein, Fred W.


    Several lines of earthquake evidence indicate that the lithospheric plate is broken under the load of the island of Hawai`i, where the geometry of the lithosphere is circular with a central depression. The plate bends concave downward surrounding a stress-free hole, rather than bending concave upward as with past assumptions. Earthquake focal mechanisms show that the center of load stress and the weak hole is between the summits of Mauna Loa and Mauna Kea where the load is greatest. The earthquake gap at 21 km depth coincides with the predicted neutral plane of flexure where horizontal stress changes sign. Focal mechanism P axes below the neutral plane display a striking radial pattern pointing to the stress center. Earthquakes above the neutral plane in the north part of the island have opposite stress patterns; T axes tend to be radial. The M6.2 Honomu and M6.7 Kiholo main shocks (both at 39 km depth) are below the neutral plane and show radial compression, and the M6.0 Kiholo aftershock above the neutral plane has tangential compression. Earthquakes deeper than 20 km define a donut of seismicity around the stress center where flexural bending is a maximum. The hole is interpreted as the soft center where the lithospheric plate is broken. Kilauea's deep conduit is seismically active because it is in the ring of maximum bending. A simplified two-dimensional stress model for a bending slab with a load at one end yields stress orientations that agree with earthquake stress axes and radial P axes below the neutral plane. A previous inversion of deep Hawaiian focal mechanisms found a circular solution around the stress center that agrees with the model. For horizontal faults, the shear stress within the bending slab matches the slip in the deep Kilauea seismic zone and enhances outward slip of active flanks.

  15. Nonstationary ETAS models for nonstandard earthquakes


    Kumazawa, Takao; Ogata, Yosihiko


    The conditional intensity function of a point process is a useful tool for generating probability forecasts of earthquakes. The epidemic-type aftershock sequence (ETAS) model is defined by a conditional intensity function, and the corresponding point process is equivalent to a branching process, assuming that an earthquake generates a cluster of offspring earthquakes (triggered earthquakes or so-called aftershocks). Further, the size of the first-generation cluster depends on the magnitude of...
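
    The standard (stationary) ETAS conditional intensity that the nonstationary models extend can be written λ(t) = μ + Σ_{t_i < t} K exp(α(M_i - M_c)) / (t - t_i + c)^p. A minimal Python sketch of that baseline formula (the parameter defaults are illustrative, not the paper's nonstationary extensions):

```python
import math

def etas_intensity(t, history, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1, m_c=3.0):
    """Conditional intensity of the standard ETAS model at time t.

    λ(t) = μ + Σ_{t_i < t} K · exp(α (M_i - M_c)) / (t - t_i + c)^p,
    where `history` is a list of (t_i, M_i) pairs: each past earthquake
    contributes an Omori-type aftershock rate scaled by its magnitude.
    Parameter values are illustrative defaults.
    """
    rate = mu  # background rate
    for t_i, m_i in history:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_c)) / (t - t_i + c) ** p
    return rate
```

The branching interpretation follows directly: each term in the sum is the expected offspring rate of one parent event, growing exponentially with the parent's magnitude and decaying as a power law in elapsed time.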

  16. The October 12, 1992, Dahshur, Egypt, Earthquake (United States)

    Thenhaus, P.C.; Celebi, M.; Sharp, R.V.


    Cairo and northeastern Egypt experienced a rare, damaging earthquake on October 12, 1992. The earthquake, which measured 5.9 on the Richter magnitude scale, was centered near the village of Dahshur, about 18 km south of Cairo. The computed hypocentral depth of the earthquake, about 25 km, is consistent with the fact that fault rupture associated with the earthquake did not reach the surface. 


    Directory of Open Access Journals (Sweden)

    Mustafa ULAS


    Full Text Available Many people die because of earthquakes every year. Therefore it is crucial to predict the time of an earthquake a reasonable time before it happens. This paper presents recent information published in the literature about precursors of earthquakes. The relationships between earthquakes and the ionosphere are highlighted in order to guide new research toward novel prediction methods.

  18. The new BMW M5 and new BMW M6 models are about to enter the Chinese market

    Institute of Scientific and Technical Information of China (English)


    The new BMW M models to be launched in China in the summer of 2006 include the new BMW M5 and the new BMW M6. With their formidable power units, they perfectly combine the looks of a supercar with the performance of a Grand Touring racer.

  19. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews


    Full Text Available Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: 1) regeneration of stress irregularities in an earthquake; 2) the concentration of stress drop in an earthquake into asperities; 3) starting and stopping of earthquake slip at fault junctions; and 4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to the confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the

  20. EARTHQUAKES - VOLCANOES (Causes - Forecast - Counteraction) (United States)

    Tsiapas, Elias


    Earthquakes and volcanoes are caused by: 1) various liquid elements (e.g. H2O, H2S, SO2) which emerge from the pyrosphere and are trapped in the space between the solid crust and the pyrosphere (Moho discontinuity); 2) protrusions of the solid crust at the Moho discontinuity (mountain range roots, sinking of the lithosphere's plates); 3) the differential movement of crust and pyrosphere. The crust misses one full rotation for approximately every 100 pyrosphere rotations, mostly because of the lunar pull. The above-mentioned elements can be found in small quantities all over the Moho discontinuity, and they are constantly causing minor earthquakes and small volcanic eruptions. When large quantities of these elements (H2O, H2S, SO2, etc.) concentrate, they are carried away by the pyrosphere, moving from west to east under the crust. When this movement takes place under flat surfaces of the solid crust, it does not cause earthquakes. But when these elements come along a protrusion (a mountain root) they concentrate on its western side, displacing the pyrosphere until they fill the space created. Due to the differential movement of pyrosphere and solid crust, a vacuum is created on the eastern side of these protrusions, and when the aforementioned liquids overfill this space, they explode, escaping to the east. At the point of their escape, these liquids are vaporized and compressed, their flow accelerates, their temperature rises due to fluid friction, and they are ionized. On the Earth's surface, a powerful rumbling sound and electrical discharges in the atmosphere, caused by the movement of the gases, are noticeable. When these elements escape, the space on the west side of the protrusion is violently taken up by the pyrosphere, which collides with the protrusion, causing a major earthquake, attenuation of the protrusions, cracks in the solid crust and damage to structures on the Earth's surface. It is easy to foresee when an earthquake will occur and how big it is

  1. Historical earthquake investigations in Greece

    Directory of Open Access Journals (Sweden)

    K. Makropoulos


    The active tectonics of the area of Greece and its seismic activity have always been present in the country's history. Many researchers tempted to work on Greek historical earthquakes have realized that this is a task not easily fulfilled. The existing catalogues of strong historical earthquakes are useful tools for performing general SHA studies. However, a variety of supporting datasets, non-uniformly distributed in space and time, need to be further investigated. In the present paper, a review of historical earthquake studies in Greece is attempted. The seismic history of the country is divided into four main periods. In each of them, characteristic examples, studies and approaches are presented.

  2. Proliferation response of peripheral blood mononuclear cells to streptococcus M6 protein in patients with psoriasis

    Institute of Scientific and Technical Information of China (English)

    刘雯; 赵广; 刘玉峰


    Objective: To explore the mechanism of streptococcal M6 protein (M6P) in the pathogenesis of guttate psoriasis. Methods: The MTT assay was used to measure the proliferative response of peripheral blood mononuclear cells (PBMCs) from patients with guttate psoriasis (GP), patients with plaque psoriasis (PP) and normal controls to trace amounts (100 ng) of M6P and staphylococcal enterotoxin B (SEB). A double-antibody sandwich ELISA was used to measure IFN-γ and IL-4 levels in the supernatants of guttate-psoriasis PBMCs cultured with M6P for 72 h. Results: M6P elicited a marked proliferative response in PBMCs from guttate psoriasis patients, highly significantly different from the RPMI 1640 blank control group (P<0.01) and from the plaque-psoriasis and normal control groups (P<0.01). IFN-γ in the 72-h culture supernatants of the M6P-stimulated guttate psoriasis group was significantly higher than in the blank control (P<0.01), while IL-4 was undetectable in all groups. Conclusion: In the pathogenesis of guttate psoriasis, M6P may act as a bacterial superantigen, stimulating extensive T-cell proliferation followed by release of Th1-type cytokines.

  3. 13 CFR 120.174 - Earthquake hazards. (United States)


    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Earthquake hazards. 120.174... Applying to All Business Loans Requirements Imposed Under Other Laws and Orders § 120.174 Earthquake..., the construction must conform with the “National Earthquake Hazards Reduction Program (“NEHRP...

  4. Earthquake Education in Prime Time (United States)

    de Groot, R.; Abbott, P.; Benthien, M.


    Since 2001, the Southern California Earthquake Center (SCEC) has collaborated on several video production projects that feature important topics related to earthquake science, engineering, and preparedness. These projects have also fostered many fruitful and sustained partnerships with a variety of organizations that have a stake in hazard education and preparedness. The Seismic Sleuths educational video first appeared in spring 2001 on Discovery Channel's Assignment Discovery. Seismic Sleuths is based on a highly successful curriculum package developed jointly by the American Geophysical Union and the Department of Homeland Security's Federal Emergency Management Agency. The California Earthquake Authority (CEA) and the Institute for Business and Home Safety supported the video project. Summer Productions, a company with a reputation for quality science programming, produced the Seismic Sleuths program in close partnership with scientists, engineers, and preparedness experts. The program has aired on the National Geographic Channel as recently as fall 2004. Currently, SCEC is collaborating with Pat Abbott, a geology professor at San Diego State University (SDSU), on the video project Written In Stone: Earthquake Country - Los Angeles. Partners on this project include the California Seismic Safety Commission, SDSU, SCEC, CEA, and the Insurance Information Network of California. This video incorporates live-action demonstrations, vivid animations, and a compelling host (Abbott) to tell the story of earthquakes in the Los Angeles region. The Written in Stone team has also developed a comprehensive educator package that includes the video, maps, lesson plans, and other supporting materials. We will present the process that facilitates the creation of visually effective, factually accurate, and entertaining video programs.
We acknowledge the need to have a broad understanding of the literature related to communication, media studies, science education, and

  5. Scaling relation for earthquake networks

    CERN Document Server

    Abe, Sumiyoshi


    The scaling relation derived by Dorogovtsev, Goltsev, Mendes and Samukhin [Phys. Rev. E, 68 (2003) 046109] states that the exponents of the power-law connectivity distribution, gamma, and the power-law eigenvalue distribution of the adjacency matrix, delta, of a locally treelike scale-free network satisfy 2*gamma - delta = 1 in the mean field approximation. Here, it is shown that this relation holds well for the reduced simple earthquake networks (without tadpole-loops and multiple edges) constructed from the seismic data taken from California and Japan. The result is interpreted from the viewpoint of the hierarchical organization of the earthquake networks.
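The network construction summarized above (events mapped to grid cells, cells linked by successive events, with tadpole-loops and multiple edges removed) can be sketched in a few lines. The cell size and toy catalog below are illustrative assumptions, not the paper's data:

```python
from math import floor

def build_earthquake_network(events, cell=0.1):
    """Abe-Suzuki-style earthquake network: map each epicentre to a grid
    cell and link the cells visited by successive events.  Storing edges
    as a set of unordered pairs and skipping prev == cur removes multiple
    edges and tadpole (self) loops, giving a reduced simple network."""
    nodes, edges = set(), set()
    prev = None
    for lat, lon in events:
        cur = (floor(lat / cell), floor(lon / cell))
        nodes.add(cur)
        if prev is not None and prev != cur:
            edges.add(frozenset((prev, cur)))
        prev = cur
    return nodes, edges

# toy catalog of (lat, lon) epicentres in temporal order
catalog = [(35.01, -120.01), (35.11, -120.21), (35.01, -120.01),
           (36.11, -121.01), (35.11, -120.21)]
nodes, edges = build_earthquake_network(catalog)
```

For a real catalog one would then estimate the connectivity exponent gamma and the adjacency-spectrum exponent delta from this reduced simple network and check the relation 2*gamma - delta = 1.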

  6. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos


    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.

  7. Earthquakes triggered by fluid extraction (United States)

    Segall, P.


    Seismicity is correlated in space and time with production from some oil and gas fields where pore pressures have declined by several tens of megapascals. Reverse faulting has occurred both above and below petroleum reservoirs, and normal faulting has occurred on the flanks of at least one reservoir. The theory of poroelasticity requires that fluid extraction locally alter the state of stress. Calculations with simple geometries predict stress perturbations that are consistent with observed earthquake locations and focal mechanisms. Measurements of surface displacement and strain, pore pressure, stress, and poroelastic rock properties in such areas could be used to test theoretical predictions and improve our understanding of earthquake mechanics.

  8. Predicted Liquefaction in the Greater Oakland and Northern Santa Clara Valley Areas for a Repeat of the 1868 Hayward Earthquake (United States)

    Holzer, T. L.; Noce, T. E.; Bennett, M. J.


    Probabilities of surface manifestations of liquefaction due to a repeat of the 1868 (M6.7-7.0) earthquake on the southern segment of the Hayward Fault were calculated for two areas along the margin of San Francisco Bay, California: greater Oakland and the northern Santa Clara Valley. Liquefaction is predicted to be more common in the greater Oakland area than in the northern Santa Clara Valley owing to the presence of 57 km2 of susceptible sandy artificial fill. Most of the fills were placed into San Francisco Bay during the first half of the 20th century to build military bases, port facilities, and shoreline communities like Alameda and Bay Farm Island. Probabilities of liquefaction in the area underlain by this sandy artificial fill range from 0.2 to ~0.5 for a M7.0 earthquake, and decrease to 0.1 to ~0.4 for a M6.7 earthquake. In the greater Oakland area, liquefaction probabilities generally are less than 0.05 for Holocene alluvial fan deposits, which underlie most of the remaining flat-lying urban area. In the northern Santa Clara Valley for a M7.0 earthquake on the Hayward Fault and an assumed water-table depth of 1.5 m (the historically shallowest water level), liquefaction probabilities range from 0.1 to 0.2 along Coyote and Guadalupe Creeks, but are less than 0.05 elsewhere. For a M6.7 earthquake, probabilities are greater than 0.1 along Coyote Creek but decrease along Guadalupe Creek to less than 0.1. Areas with high probabilities in the Santa Clara Valley are underlain by latest Holocene alluvial fan levee deposits where liquefaction and lateral spreading occurred during large earthquakes in 1868 and 1906. The liquefaction scenario maps were created with ArcGIS ModelBuilder. Peak ground accelerations first were computed with the new Boore and Atkinson NGA attenuation relation (2008, Earthquake Spectra, 24:1, p. 99-138), using VS30 to account for local site response. 
Spatial liquefaction probabilities were then estimated using the predicted ground motions
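The mapping from predicted ground motion to liquefaction probability used in the study is a calibrated empirical model; as a stand-in, a generic logistic curve illustrates the shape of such a relation. The coefficients `a` and `b` are purely hypothetical, not Holzer et al.'s calibration:

```python
import math

def liquefaction_probability(pga_g, a=4.0, b=-3.0):
    """Illustrative logistic mapping from peak ground acceleration (in g)
    to the probability of surface manifestations of liquefaction.
    Coefficients a and b are placeholders, not the study's fit."""
    return 1.0 / (1.0 + math.exp(-(a * pga_g + b)))

# stronger shaking should give a higher probability
p_weaker = liquefaction_probability(0.35)
p_stronger = liquefaction_probability(0.45)
```

This captures the qualitative behaviour in the abstract: probabilities decrease when moving from the M7.0 to the M6.7 scenario because the predicted accelerations are lower.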

  9. Dancing Earthquake Science Assists Recovery from the Christchurch Earthquakes (United States)

    Egan, Candice J.; Quigley, Mark C.


    The 2010-2012 Christchurch (Canterbury) earthquakes in New Zealand caused loss of life and psychological distress in residents throughout the region. In 2011, student dancers of the Hagley Dance Company and dance professionals choreographed the performance "Move: A Seismic Journey" for the Christchurch Body Festival that explored…

  11. Virtual California, ETAS, and OpenHazards web services: Responding to earthquakes in the age of Big Data (United States)

    Yoder, M. R.; Schultz, K.; Rundle, J. B.; Glasscoe, M. T.; Donnellan, A.


    The response to the 2014 m=6 Napa earthquake showcased data-driven services and technologies that aided first responders and decision makers in quickly assessing damage, estimating aftershock hazard, and efficiently allocating resources where they were most needed. These tools have been developed from fundamental research as part of a broad collaboration between researchers, policy makers, and executive decision makers, facilitated in no small part by the California Earthquake Clearinghouse, and practiced and honed during numerous disaster response exercises over the past several years. On 24 August 2014, and in the weeks following the m=6 Napa event, it became evident that these technologies will play an important role in the response to natural (and other) disasters in the 21st century. Given the continued rapid growth of computational capabilities, remote sensing technologies, and data gathering capacities, including by unpiloted aerial vehicles (UAVs), it is reasonable to expect that both the volume and variety of data available during a response scenario will grow significantly in the decades to come. Inevitably, modern Data Science will be critical to effective disaster response in the 21st century. In this work, we discuss the roles that earthquake simulators, statistical seismicity models, and remote sensing technologies played in the 2014 Napa earthquake response. We further discuss "Big Data" technologies and data models that facilitate the transformation of raw data into disseminable information and actionable products, and we outline a framework for the next generation of disaster response data infrastructure.

  12. Investigations on Real-time GPS for Earthquake Early Warning (United States)

    Grapenthin, R.; Aranha, M. A.; Melgar, D.; Allen, R. M.


    The Geodetic Alarm System (G-larmS) is a software system developed in a collaboration between the Berkeley Seismological Laboratory (BSL) and New Mexico Tech (NMT), primarily for real-time Earthquake Early Warning (EEW). It currently uses high-rate (1 Hz), low-latency real-time position estimates to compute coseismic static offsets, and performs an inversion on these offsets to determine slip on a finite fault, which we use to estimate moment magnitude. These computations are repeated every second for the duration of the event. G-larmS has been in continuous operation at the BSL for over a year using event triggers from the California Integrated Seismic Network (CISN) ShakeAlert system and real-time position time series from a fully triangulated network consisting of BARD, PBO and USGS stations across northern California. Pairs of stations are processed as baselines using trackRT (MIT software package). G-larmS produced good results in real time during the South Napa (M 6.0, August 2014) earthquake as well as on several replayed and simulated test cases. We evaluate the performance of G-larmS for EEW by analysing the results using a set of well-defined test cases to investigate the following: (1) using multiple fault regimes and concurrent processing, with the ultimate goal of achieving model generation (slip and magnitude computations) within each 1-second GPS epoch on very large magnitude earthquakes (up to M 9.0); (2) the use of Precise Point Positioning (PPP) real-time data streams of various operators, accuracies, latencies and formats along with baseline data streams; (3) collaboratively expanding EEW coverage along the U.S. West Coast on a regional network basis for Northern California, Southern California and Cascadia.
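The final step of such a system, converting inverted slip on a finite fault into moment magnitude, can be sketched with the standard Hanks-Kanamori relation. The rigidity, fault dimensions and slip below are illustrative assumptions, not G-larmS output:

```python
import math

def moment_magnitude(slip_m, area_m2, mu=3.0e10):
    """Seismic moment M0 = mu * A * D (in N*m) and the Hanks-Kanamori
    moment magnitude Mw = (2/3) * (log10(M0) - 9.1).  Rigidity mu is a
    typical crustal value of 30 GPa."""
    m0 = mu * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# e.g. a hypothetical 40 km x 15 km fault with 1 m of average slip
mw = moment_magnitude(slip_m=1.0, area_m2=40e3 * 15e3)
```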

  13. Detection and implication of significant temporal b-value variation during earthquake sequences (United States)

    Gulia, Laura; Tormann, Thessa; Schorlemmer, Danijel; Wiemer, Stefan


    Earthquakes tend to cluster in space and time, and periods of increased seismic activity are also periods of increased seismic hazard. Forecasting models currently used in statistical seismology and in Operational Earthquake Forecasting (e.g. ETAS) consider the spatial and temporal changes in activity rates, whilst the spatio-temporal changes in the earthquake size distribution, the b-value, are not included. Laboratory experiments on rock samples show an increasing relative proportion of larger events as the system approaches failure, and a sudden reversal of this trend after the main event. The increasing fraction of larger events during the stress-increase period can be mathematically represented by a systematic b-value decrease, while the b-value increases immediately following the stress release. We investigate whether these lab-scale observations also apply to natural earthquake sequences and can help to improve our understanding of the physical processes generating damaging earthquakes. A number of large events nucleated in low-b-value regions, and spatial b-value variations have been extensively documented in the past. Detecting temporal b-value evolution with confidence is more difficult, one reason being the very different time scales that have been suggested for a precursory drop in b-value, from a few days to decadal-scale gradients. We demonstrate, with the results of detailed case studies of the 2009 M6.3 L'Aquila and 2011 M9 Tohoku earthquakes, that significant and meaningful temporal b-value variability can be detected throughout the sequences, which suggests, for example, that foreshock probabilities are not generic but subject to significant spatio-temporal variability. Such potential conclusions require and motivate the systematic study of many sequences to investigate whether general patterns exist that might eventually be useful for time-dependent or even real-time seismic hazard assessment.
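The b-value itself is conventionally estimated with the Aki-Utsu maximum-likelihood formula; a minimal sketch on a synthetic Gutenberg-Richter sample follows. The completeness magnitude and sample size are arbitrary choices here, not values from the cited case studies:

```python
import numpy as np

def b_value(mags, mc):
    """Aki-Utsu maximum-likelihood b-value estimate for magnitudes >= mc:
    b = log10(e) / (mean(M) - Mc)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# synthetic catalog with true b = 1.0: magnitudes above Mc are
# exponentially distributed with scale log10(e)/b
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=20000)
b = b_value(mags, mc=2.0)
```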

  14. Long-Delayed Aftershocks in New Zealand and the 2016 M7.8 Kaikoura Earthquake (United States)

    Shebalin, P.; Baranov, S.


    We study aftershock sequences of six major earthquakes in New Zealand, including the 2016 M7.8 Kaikoura and 2016 M7.1 North Island earthquakes. For the Kaikoura earthquake, we assess the expected number of long-delayed large aftershocks of M5+ and M5.5+ in two periods, 0.5 and 3 years after the main shock, using 75 days of available data. We compare the results with those obtained for other sequences using the same 75-day period. We estimate the errors by considering a set of magnitude thresholds and corresponding periods of data completeness and consistency. To avoid overestimating the expected rates of large aftershocks, we presume a break of slope in the magnitude-frequency relation of the aftershock sequences, and compare two models, with and without the break of slope. Comparing these estimates to the actual number of long-delayed large aftershocks, we observe, in general, a significant underestimation of their expected number. We suppose that the long-delayed aftershocks may reflect larger-scale processes, including the interaction of faults, that complement an isolated relaxation process. In the spirit of this hypothesis, we search for symptoms of the capacity of the aftershock zone to generate large events months after the major earthquake. We adapt the EAST algorithm, which studies the statistics of early aftershocks, to the case of secondary aftershocks within aftershock sequences of major earthquakes. In retrospective application to the considered cases, the algorithm demonstrates an ability to detect long-delayed aftershocks in advance, in both the time and space domains. Application of the EAST algorithm to the 2016 M7.8 Kaikoura earthquake zone indicates that the most likely area for a delayed aftershock of M5.5+ or M6+ is at the northern end of the zone in Cook Strait.
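Expected aftershock counts in a future time window are commonly obtained by integrating the modified Omori (Omori-Utsu) law; a sketch with purely illustrative parameter values (not fits to the Kaikoura sequence):

```python
def expected_aftershocks(k, c, p, t1, t2):
    """Expected number of aftershocks between times t1 and t2 (days after
    the main shock) under the Omori-Utsu rate n(t) = k / (t + c)**p,
    integrated in closed form (valid for p != 1)."""
    def primitive(t):
        return (t + c) ** (1.0 - p) / (1.0 - p)
    return k * (primitive(t2) - primitive(t1))

# illustrative parameters: forecast from day 75 out to 0.5 and 3 years
n_half_year = expected_aftershocks(k=100.0, c=0.1, p=1.2, t1=75.0, t2=182.5)
n_three_years = expected_aftershocks(k=100.0, c=0.1, p=1.2, t1=75.0, t2=1095.0)
```

Scaling such counts down to M5+ or M5.5+ events would additionally use a Gutenberg-Richter (or broken-slope) magnitude distribution, as the abstract discusses.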

  15. 3D Spontaneous Rupture Models of Large Earthquakes on the Hayward Fault, California (United States)

    Barall, M.; Harris, R. A.; Simpson, R. W.


    We are constructing 3D spontaneous rupture computer simulations of large earthquakes on the Hayward and central Calaveras faults. The Hayward fault has a geologic history of producing many large earthquakes (Lienkaemper and Williams, 2007), with its most recent large event a M6.8 earthquake in 1868. Future large earthquakes on the Hayward fault are not only possible, but probable (WGCEP, 2008). Our numerical simulation efforts use information about the complex 3D fault geometry of the Hayward and Calaveras faults and information about the geology and physical properties of the rocks that surround the Hayward and Calaveras faults (Graymer et al., 2005). Initial stresses on the fault surface are inferred from geodetic observations (Schmidt et al., 2005), seismological studies (Hardebeck and Aron, 2008), and from rate-and-state simulations of the interseismic interval (Stuart et al., 2008). In addition, friction properties on the fault surface are inferred from laboratory measurements of adjacent rock types (Morrow et al., 2008). We incorporate these details into forward 3D computer simulations of dynamic rupture propagation, using the FaultMod finite-element code (Barall, 2008). The 3D fault geometry is constructed using a mesh-morphing technique, which starts with a vertical planar fault and then distorts the entire mesh to produce the desired fault geometry. We also employ a grid-doubling technique to create a variable-resolution mesh, with the smallest elements located in a thin layer surrounding the fault surface, which provides the higher resolution needed to model the frictional behavior of the fault. Our goals are to constrain estimates of the lateral and depth extent of future large Hayward earthquakes, and to explore how the behavior of large earthquakes may be affected by interseismic stress accumulation and aseismic slip.

  16. Earthquake Risk Management of Underground Lifelines in the Urban Area of Catania (United States)

    Grasso, S.; Maugeri, M.


    Lifelines typically include the following five utility networks: potable water, sewage, natural gas, electric power, telecommunications and transportation systems. The response of lifeline systems, such as gas and water networks, during a strong earthquake can be conveniently evaluated with the estimated average number of ruptures per km of pipe. These ruptures may be caused either by fault ruptures crossing the pipe, or by permanent deformations of the soil mass (landslides, liquefaction), or by transient soil deformations caused by seismic wave propagation. The possible consequences of damaging earthquakes on transportation systems may be the reduction or the interruption of traffic flow, as well as the impact on the emergency response and on recovery assistance. A critical element in emergency management is the closure of roads due to fallen obstacles and debris from collapsed buildings. The earthquake-induced damage to buried pipes is expressed in terms of repair rate (RR), defined as the number of repairs divided by the pipe length (km) exposed to a particular level of seismic demand; this number is a function of the pipe material (and joint type), of the pipe diameter and of the ground shaking level, measured in terms of peak horizontal ground velocity (PGV) or permanent ground displacement (PGD). The development of damage algorithms for buried pipelines is primarily based on empirical evidence, tempered with engineering judgment and sometimes by analytical formulations. For the city of Catania, in the present work use has been made of the correlation between RR and peak horizontal ground velocity by the American Lifelines Alliance (ALA, 2001) for the verification of the main buried pipelines. The performance of the main buried distribution networks has been evaluated for the Level I earthquake scenario (January 11, 1693 event, I = XI, M 7.3) and for the Level II earthquake scenario (February 20, 1818 event, I = IX, M 6.2).
Seismic damage scenario of main gas pipelines and

  17. Jumping over the hurdles to effectively communicate the Operational Earthquake Forecast (United States)

    McBride, S.; Wein, A. M.; Becker, J.; Potter, S.; Tilley, E. N.; Gerstenberger, M.; Orchiston, C.; Johnston, D. M.


    Probabilities, uncertainties, statistics, science, and threats are notoriously difficult topics to communicate to members of the public. The Operational Earthquake Forecast (OEF) is designed to provide an understanding of the potential numbers and sizes of earthquakes, and its communication must address all of those challenges. Furthermore, there are other barriers to effective communication of the OEF. These barriers include the erosion of trust in scientists and experts, oversaturation of messages, fear and threat messages magnified by media sensationalism, fractured media environments and online echo chambers. Given the complexities and challenges of the OEF, how can we overcome barriers to effective communication? Crisis and risk communication research can inform the development of communication strategies to increase public understanding and use of the OEF, when applied to the opportunities and challenges of practice. We explore ongoing research regarding how the OEF can be more effectively communicated, including the channels, tools and message composition used to engage with a variety of publics. We also draw on past experience and a study of OEF communication during the Canterbury Earthquake Sequence (CES). We demonstrate how research and experience have guided OEF communications during subsequent events in New Zealand, including the M5.7 Valentine's Day earthquake in 2016 (CES), the M6.0 Wilberforce earthquake in 2015, and the Cook Strait/Lake Grassmere earthquakes in 2013. We identify the successes and lessons learned in the practical communication of the OEF. Finally, we present future projects and directions in the communication of the OEF, informed by both practice and research.

  18. Using a physics-based earthquake simulator to evaluate seismic hazard in NW Iran (United States)

    Khodaverdian, A.; Zafarani, H.; Rahimian, M.


    NW Iran is a region of active deformation in the Eurasia-Arabia collision zone. This high-strain field has caused intensive faulting accompanied by several major (M > 6.5) earthquakes, as is evident from historical records. Whereas seismic data (i.e. instrumental and historical catalogues) are either short, or inaccurate and inhomogeneous, physics-based long-term simulations are beneficial for better assessing seismic hazard. In this study, a deterministic seismicity model, which consists of the major active faults, is first constructed and used to generate a synthetic catalogue of large-magnitude (M > 5.5) earthquakes. The frequency-magnitude distribution of the synthetic earthquake catalogue, which is based on the physical characteristics and slip rates of the mapped faults, is consistent with the empirical distribution evaluated from the record of instrumental and historical events. The obtained results are also in accordance with palaeoseismic studies and other independent kinematic deformation models of the Iranian Plateau. Using the synthetic catalogue, a characteristic magnitude for each of the 16 active faults in the study area is determined. The magnitudes and epicentres of these earthquakes are comparable with the historical records. Large-earthquake recurrence times and their variations are evaluated, either for an individual fault or for the region as a whole. Goodness-of-fit tests revealed that recurrence times can be well described by the Weibull distribution. Time-dependent conditional probabilities for large earthquakes in the study area are also estimated for different time intervals. The resulting synthetic catalogue can be utilized as a useful data set for hazard and risk assessment instead of the short, incomplete and inhomogeneous available catalogues.
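A Weibull renewal model yields time-dependent conditional probabilities directly from its survival function; a sketch with illustrative shape and scale parameters (not the study's fitted values):

```python
import math

def weibull_conditional_prob(t_elapsed, dt, shape, scale):
    """P(event in (t, t+dt] | no event by t) under a Weibull renewal model
    with survival function S(t) = exp(-(t/scale)**shape)."""
    def surv(t):
        return math.exp(-((t / scale) ** shape))
    return 1.0 - surv(t_elapsed + dt) / surv(t_elapsed)

# e.g. 150 yr since the last event, probability within the next 50 yr,
# for a hypothetical fault with shape 2.0 and scale 250 yr
p = weibull_conditional_prob(t_elapsed=150.0, dt=50.0, shape=2.0, scale=250.0)
```

A shape parameter greater than 1 makes the hazard increase with elapsed time, which is what distinguishes such a renewal forecast from a memoryless Poisson one.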

  19. Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years (United States)

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.


    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event have resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support to the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
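The recurrence statistics quoted above (mean RI, 1σ scatter, SEM, coefficient of variation) follow from elementary formulas; the interval list below is hypothetical, not the Tyson Lagoon chronology:

```python
import math

def recurrence_stats(intervals):
    """Mean recurrence interval, sample standard deviation (1-sigma),
    standard error of the mean (SEM = sd/sqrt(n)), and coefficient of
    variation (CV = sd/mean) for a list of inter-event times."""
    n = len(intervals)
    mean = sum(intervals) / n
    var = sum((x - mean) ** 2 for x in intervals) / (n - 1)
    sd = math.sqrt(var)
    return mean, sd, sd / math.sqrt(n), sd / mean

# hypothetical inter-event times in years
mean, sd, sem, cv = recurrence_stats([120, 150, 200, 140, 180, 175, 160])
```

A low CV (well below 1) is the quantitative sense in which a sequence is "fairly regular" and favours renewal models over a Poisson process.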

  20. Automatic earthquake confirmation for early warning system (United States)

    Kuyuk, H. S.; Colombelli, S.; Zollo, A.; Allen, R. M.; Erdik, M. O.


    Earthquake early warning studies are shifting real-time seismology in earthquake science. They provide methods to rapidly assess earthquakes to predict damaging ground shaking. Preventing false alarms from these systems is key. Here we developed a simple, robust algorithm, Authorizing GRound shaking for Earthquake Early warning Systems (AGREEs), to reduce falsely issued alarms. This is a network threshold-based algorithm, which differs from existing approaches based on apparent velocity of P and S waves. AGREEs is designed to function as an external module to support existing earthquake early warning systems (EEWSs) and filters out the false events, by evaluating actual shaking near the epicenter. Our retrospective analyses of the 2009 L'Aquila and 2012 Emilia earthquakes show that AGREEs could help an EEWS by confirming the epicentral intensity. Furthermore, AGREEs is able to effectively identify three false events due to a storm, a teleseismic earthquake, and broken sensors in Irpinia Seismic Network, Italy.
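The core idea of a network-threshold check, confirming an alert only if enough near-epicentre stations actually observe shaking, can be sketched as follows. The threshold, minimum station count and values are illustrative, not the published AGREEs calibration:

```python
def confirm_event(station_pga, threshold=0.05, min_stations=3):
    """Network-threshold confirmation in the spirit of AGREEs: declare an
    alert authorized only if at least `min_stations` stations near the
    epicentre observe peak acceleration (in g) at or above `threshold`.
    A single triggering sensor (e.g. a broken one) cannot confirm."""
    n_triggered = sum(1 for pga in station_pga.values() if pga >= threshold)
    return n_triggered >= min_stations

# three nearby stations see real shaking -> confirmed
real = confirm_event({"ST1": 0.12, "ST2": 0.08, "ST3": 0.20, "ST4": 0.01})
# only one station triggers (e.g. a faulty sensor) -> rejected
false_alarm = confirm_event({"ST1": 0.30, "ST2": 0.0, "ST3": 0.0, "ST4": 0.0})
```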

  1. Earthquake design for controlled structures

    Directory of Open Access Journals (Sweden)

    Nikos G. Pnevmatikos


    An alternative design philosophy is described for structures equipped with control devices, capable of resisting an expected earthquake while remaining in the elastic range. The idea is that a portion of the earthquake loading is undertaken by the control system and the remainder by the structure, which is designed to respond elastically. The earthquake forces assuming elastic behavior (elastic forces) and elastoplastic behavior (design forces) are first calculated according to the codes. The required control forces are calculated as the difference between the elastic and design forces. The maximum capacity of the control devices is then compared to the required control force. If the capacity of the control devices is larger than the required control force, then the control devices are accepted and installed in the structure, and the structure is designed according to the design forces. If the capacity is smaller than the required control force, then a scale factor, α, reducing the elastic forces to new design forces is calculated. The structure is redesigned and the devices are installed. The proposed procedure ensures that the structure behaves elastically (without damage) for the expected earthquake at no additional cost, excluding that of buying and installing the control devices.
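The force bookkeeping described above can be sketched numerically. The behaviour factor, device capacity and the rule for computing α below are one plausible reading of the procedure, with purely illustrative numbers:

```python
def control_design(f_elastic, q, capacity):
    """Sketch of the described procedure: design force = elastic force
    reduced by a code behaviour factor q; the control devices must supply
    the difference.  If device capacity is insufficient, new design forces
    alpha * f_elastic are chosen so the devices can cover the remainder
    (one interpretation of the scale factor alpha, hedged)."""
    f_design = f_elastic / q
    f_control = f_elastic - f_design            # required control force
    if capacity >= f_control:
        return f_design, f_control, 1.0         # devices accepted as-is
    alpha = (f_elastic - capacity) / f_elastic  # raise design forces
    return f_elastic * alpha, capacity, alpha

# illustrative: 1000 kN elastic force, q = 4, device capacity 500 kN
f_design, f_ctrl, alpha = control_design(f_elastic=1000.0, q=4.0, capacity=500.0)
```

Here the devices cannot carry the full 750 kN difference, so the structure is redesigned for 500 kN while the devices carry the remaining 500 kN.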

  2. Using Smartphones to Detect Earthquakes (United States)

    Kong, Q.; Allen, R. M.


    We are using the accelerometers in smartphones to record earthquakes. In the future, these smartphones may serve as a supplementary network to the current traditional networks for scientific research and real-time applications. Given the potential number of smartphones and the small separation between sensors, this new type of seismic dataset has significant potential, provided that the signal can be separated from the noise. We developed an application for Android phones to record acceleration in real time. These records can be saved on the phone or transmitted back to a server in real time. The accelerometers in the phones were evaluated by comparing their performance with a high-quality accelerometer while located on controlled shake tables for a variety of tests. The results show that the accelerometer in a smartphone can reproduce the characteristics of the shaking very well, even when the phone is left unsecured on the shake table. The nature of these datasets is also quite different from traditional networks, because smartphones move around with their owners. Therefore, we must distinguish earthquake signals from those of daily use. In addition to the shake-table tests that accumulated earthquake records, we also recorded different human activities such as running, walking and driving. An artificial neural network based approach was developed to distinguish these different records. It shows a 99.7% success rate in distinguishing earthquakes from the other typical human activities in our database. We are now ready to develop the basic infrastructure for a smartphone seismic network.
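The published classifier is an artificial neural network; as a minimal stand-in, even a single frequency-content feature separates low-frequency earthquake-like shaking from higher-frequency human motion on synthetic records. The signals, sampling rate and frequencies below are illustrative assumptions:

```python
import numpy as np

def features(record, fs):
    """Two toy discriminants for an accelerometer trace: peak absolute
    amplitude and dominant frequency from the FFT spectrum (DC excluded).
    A stand-in for the paper's neural-network features."""
    spec = np.abs(np.fft.rfft(record))
    freqs = np.fft.rfftfreq(len(record), d=1.0 / fs)
    return np.max(np.abs(record)), freqs[np.argmax(spec[1:]) + 1]

fs = 100.0                                    # 100 Hz sampling
t = np.arange(0, 10, 1.0 / fs)                # 10 s record
quake = 0.3 * np.sin(2 * np.pi * 2.0 * t)     # low-frequency shaking
walking = 0.3 * np.sin(2 * np.pi * 15.0 * t)  # higher-frequency steps
_, f_quake = features(quake, fs)
_, f_walk = features(walking, fs)
```

A real classifier would combine many such features over sliding windows, but the frequency separation above is the intuition behind distinguishing earthquakes from everyday motion.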

  3. Seismicity dynamics and earthquake predictability

    Directory of Open Access Journals (Sweden)

    G. A. Sobolev


    Full Text Available Many factors complicate earthquake sequences, including the heterogeneity and self-similarity of the geological medium, the hierarchical structure of faults and stresses, and small-scale variations in the stresses from different sources. A seismic process is a type of nonlinear dissipative system demonstrating opposing trends towards order and chaos. Transitions from equilibrium to unstable equilibrium and local dynamic instability appear when there is an inflow of energy; reverse transitions appear when energy is dissipating. Several metastable areas of different scales exist in the seismically active region before an earthquake. Some earthquakes are preceded by precursory phenomena of different scales in space and time. These include long-term activation, seismic quiescence, foreshocks in the broad and narrow sense, hidden periodical vibrations, effects of the synchronization of seismic activity, and others. Such phenomena indicate that the dynamic system of the lithosphere is moving to a new state – catastrophe. A number of examples of medium-term and short-term precursors are shown in this paper. However, no precursors identified to date are clear and unambiguous: the percentage of missed targets and false alarms is high. Weak fluctuations from external and internal sources play a great role on the eve of an earthquake, and the occurrence time of the future event depends on the collective behavior of triggers. The main task is to improve the methods of metastable-zone detection and probabilistic forecasting.

  4. Locations and magnitudes of earthquakes in Central Asia from seismic intensity data (United States)

    Bindi, D.; Parolai, S.; Gómez-Capera, A.; Locati, M.; Kalmetyeva, Z.; Mikhailova, N.


    .e. the Aksu and Chon-Aksu segments), where most of the seismic moment was released (Arrowsmith et al. in Eos Trans Am Geophys Union 86(52), 2005). The second location lies on the westernmost sub-faults (i.e. the Dzhil'-Aryk segment), close to the intensity centre location obtained for the 1938 M 6.9 Chu-Kemin earthquake (MILH = 6.9 and MIW = 6.8).

  5. Earthquake swarms in South America (United States)

    Holtkamp, S. G.; Pritchard, M. E.; Lohman, R. B.


    We searched for earthquake swarms in South America between 1973 and 2009 using the global Preliminary Determination of Epicenters (PDE) catalogue. Seismicity rates vary greatly over the South American continent, so we employ a manual search approach that aims to be insensitive to spatial and temporal scales or to the number of earthquakes in a potential swarm. We identify 29 possible swarms involving 5-180 earthquakes each (with total swarm moment magnitudes between 4.7 and 6.9) within a range of tectonic and volcanic locations. Some of the earthquake swarms on the subduction megathrust occur as foreshocks and delineate the limits of main shock rupture propagation for large earthquakes, including the 2010 Mw 8.8 Maule, Chile and 2007 Mw 8.1 Pisco, Peru earthquakes. Also, subduction megathrust swarms commonly occur at the location of subduction of aseismic ridges, including areas of long-standing seismic gaps in Peru and Ecuador. The magnitude-frequency relationship of the swarms we observe appears to agree with previously determined magnitude-frequency scaling for swarms in Japan. We examine geodetic data covering five of the swarms to search for an aseismic component. Only two of these swarms (at Copiapó, Chile, in 2006 and near Ticsani Volcano, Peru, in 2005) have suitable satellite-based Interferometric Synthetic Aperture Radar (InSAR) observations. We invert the InSAR geodetic signal and find that the ground deformation associated with these swarms does not require a significant component of aseismic fault slip or magmatic intrusion. Three swarms in the vicinity of the volcanic arc in southern Peru appear to have been triggered by the 2001 Mw 8.5 Peru earthquake, but the predicted static Coulomb stress changes due to the main shock were very small at the swarm locations, suggesting that dynamic triggering processes may have played a role in their occurrence. Although we identified few swarms in volcanic regions, we suggest that particularly large volcanic swarms (those that

  2. Optimization of fermentation conditions for Bacillus subtilis M6 for the biological control of protected vegetables

    Institute of Scientific and Technical Information of China (English)

    徐升运; 赵文娟; 马齐; 张强; 任平; 秦涛


    This article studied the fermentation process of Bacillus subtilis M6, used for biological control in protected vegetable production, and optimized the fermentation medium and conditions through single-factor and orthogonal experiments. The results showed the optimal conditions for Bacillus subtilis M6 to be: corn starch 10 g/L, soy protein powder 8 g/L, NaCl 5 g/L, K2HPO4 0.6 g/L, initial medium pH 7.0, temperature 35℃, inoculation volume 8%, rotation speed 180 r/min, and fermentation time 42 h. Under the optimized conditions, the viable count of Bacillus subtilis M6 in the fermentation liquid reached 95.8 × 10⁸ cfu/mL, with a spore ratio above 92%. The viable count was thus increased by 12.7% compared with that before optimization (85 × 10⁸ cfu/mL).

  3. Primary Analysis of Relations between Earthquakes and Short Leveling Anomalies at Jingyang Station

    Institute of Scientific and Technical Information of China (English)

    黄英; 古云鹤; 曹建平


    Analysis of the relation between the short leveling observations at Jingyang seismic station and earthquakes indicates that the data have some capability to reflect certain earthquakes to the southwest along the north-south seismic belt; problems affecting the observations are also preliminarily analyzed. Short leveling data from Jingyang seismological station comprise some of the most important information that helps to monitor the Kouzhen-Guanshan fault zone. Several strong earthquakes have struck the surrounding area of Shaanxi Province: the Ninger M 6.4 earthquake on June 3, 2007, the Lijiang M 7.0 earthquake on February 3, 1996 in Yunnan, and the Anxian M 5.0 earthquake on September 3, 1999 in Sichuan. Short leveling data from Jingyang seismological station show varying degrees of anomaly: an anomalous decline lasted 18 days at a rate of 0.005 mm per day before the Ninger M 6.4 earthquake, 23 days at a rate of 0.005 mm per day before the Lijiang M 7.0 earthquake, and 9 days at a rate of 0.009 mm per day before the Anxian M 5.0 earthquake. This study evaluates the ability of the Jingyang short leveling data to predict earthquakes by analyzing the relationship between magnitude, anomaly decline rate, and duration of the anomalous decline, along with the relationship between magnitude and the duration of the anomaly in cross-fault deformation measurement. To portray the connections among these data intuitively, the study clearly indicates anomalies in the data using charts. The data will be used in the experimental formulae of the China Earthquake Administration with other earthquakes, and the results harvested. This study considers that an ability to predict earthquakes will be revealed in the short leveling data of Jingyang seismological station, located in the southwest area of the north-south seismic zone. At the same time, this study discusses how much damage has been caused to the observation environment by the major quarries near the short leveling area and the effect that the arrival and departure of heavy trucks has had on observation data. After summarizing

  8. Atmospheric Gravity Waves (AGWs) as the driver of seismo-ionospheric coupling: recent major earthquakes of Nepal and Imphal - case study (United States)

    Chakraborty, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta


    An important channel of the lithosphere-atmosphere-ionosphere coupling (LAIC) is the acoustic and gravity wave channel, in which atmospheric gravity waves (AGW) play the most important part. Atmospheric waves are excited by seismo-gravitational vibrations before earthquakes, and their effects on the atmosphere are the sources of seismo-ionospheric coupling, manifested as perturbations in Very Low Frequency (VLF)/Low Frequency (LF) signal amplitude and phase. For our study, we chose the recent major earthquakes that took place in Nepal and Imphal. The Nepal earthquake occurred on 12th May, 2015 at 12:50 pm local time (07:05 UTC) with a Richter scale magnitude of M = 7.3 and a depth of 10 km (6.21 miles), southeast of Kodari. The Imphal earthquake occurred on 4th January, 2016 at 4:35 am local time (23:05 UTC on 3rd January) with a Richter scale magnitude of M = 6.7 and a depth of 55 km (34.2 miles). The data were collected at the Ionospheric and Earthquake Research Centre (IERC) of the Indian Centre for Space Physics (ICSP) from signals transmitted by the JJI station in Japan. We performed both Fast Fourier Transform (FFT) and wavelet analysis on the VLF data for a couple of days before and after the major earthquakes. For both earthquakes, we observed wave-like structures with periods of almost an hour before and after the earthquake day. The wave-like oscillations after the earthquake may be due to aftershock effects. We also observed that the amplitude of the wave-like structures depends on the location of the epicenter between the transmitting and receiving points and on the depth of the earthquake.
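
    The spectral step can be sketched with a brute-force discrete Fourier transform that picks out the strongest periodicity in an evenly sampled amplitude series (an FFT would be used in practice; this minimal stand-in is for illustration only, and the function name is hypothetical):

```python
import math

def dominant_period(samples, dt):
    """Period (in the units of dt) of the strongest non-DC Fourier
    component of an evenly sampled series."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]  # remove DC offset
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2):  # candidate frequency bins
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n * dt / best_k
```

    Applied to one-minute VLF amplitude samples, a wave-like structure with a period of about an hour would appear as a dominant period near 3600 s.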

  9. Increased β-haemolytic group A streptococcal M6 serotype and streptodornase B-specific cellular immune responses in Swedish narcolepsy cases. (United States)

    Ambati, A; Poiret, T; Svahn, B-M; Valentini, D; Khademi, M; Kockum, I; Lima, I; Arnheim-Dahlström, L; Lamb, F; Fink, K; Meng, Q; Kumar, A; Rane, L; Olsson, T; Maeurer, M


    Type 1 narcolepsy is a neurological disorder characterized by excessive daytime sleepiness and cataplexy associated with the HLA allele DQB1*06:02. Genetic predisposition along with external triggering factors may drive autoimmune responses, ultimately leading to the selective loss of hypocretin-positive neurons. The aim of this study was to investigate potential aetiological factors in Swedish cases of postvaccination (Pandemrix) narcolepsy defined by interferon-gamma (IFNγ) production from immune cells in response to molecularly defined targets. Cellular reactivity defined by IFNγ production was examined in blood from 38 (HLA-DQB1*06:02(+)) Pandemrix-vaccinated narcolepsy cases and 76 (23 HLA-DQB1*06:02(+) and 53 HLA-DQB1*06:02(-)) control subjects, matched for age, sex and exposure, using a variety of different antigens: β-haemolytic group A streptococcal (GAS) antigens (M5, M6 and streptodornase B), influenza (the pandemic A/H1N1/California/7/09 NYMC X-179A and A/H1N1/California/7/09 NYMC X-181 vaccine antigens, previous Flu-A and -B vaccine targets, A/H1N1/Brisbane/59/2007, A/H1N1/Solomon Islands/3/2006, A/H3N2/Uruguay/716/2007, A/H3N2/Wisconsin/67/2005, A/H5N1/Vietnam/1203/2004 and B/Malaysia/2506/2004), noninfluenza viral targets (CMVpp65, EBNA-1 and EBNA-3) and auto-antigens (hypocretin peptide, Tribbles homolog 2 peptide cocktail and extract from rat hypothalamus tissue). IFNγ production was significantly increased in whole blood from narcolepsy cases in response to streptococcus serotype M6 (P = 0.0065) and streptodornase B protein (P = 0.0050). T-cell recognition of M6 and streptodornase B was confirmed at the single-cell level by intracellular cytokine (IL-2, IFNγ, tumour necrosis factor-alpha and IL-17) production after stimulation with synthetic M6 or streptodornase B peptides. Significantly higher (P = 0.02) titres of serum antistreptolysin O were observed in narcolepsy cases, compared to vaccinated controls. β-haemolytic GAS may be

  10. Observations of large earthquakes in the Mexican subduction zone over 110 years (United States)

    Hjörleifsdóttir, Vala; Krishna Singh, Shri; Martínez-Peláez, Liliana; Garza-Girón, Ricardo; Lund, Björn; Ji, Chen


    , without changes in the instrument response. Furthermore, it has a relatively short natural period and is at a 90 degree angle from the Mesoamerican trench, making it highly sensitive to lateral variations in location. In total we have records from more than 20 M > 6.9 earthquakes in seven along-trench segments. Events in this region are known to have a relatively short and simple rupture history, which has been interpreted as compact isolated asperities breaking in each event (Stewart et al 1982). Due to their short duration we assume that, in the period range of the Wiechert instrument, they are practically point sources, and any differences in waveforms are due to differences in location. This is not true for the largest events, as well as for a small number of the intermediate-sized earthquakes, but these events can be identified by their complex P waves. Of the seven along-trench segments, the Ometepec segment, on the border between the states of Guerrero and Oaxaca, has the largest number of earthquakes during the time period, a total of five. Of those, three have near-identical waveforms, which we interpret as a repeatedly breaking asperity. The central Oaxaca segment has also had two earthquakes with highly similar waveforms, indicating comparable rupture areas in the two events. In other areas we find that subsequent earthquakes breaking the same segment do not have similar waveforms, indicating that different areas of the fault surface break in each event or, in a few cases, a more complex rupture history. Our observations therefore indicate that both processes are important, perhaps with varying relative importance depending on the region. In other words, some asperities are persistent over time, whereas others are not.
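
    The "near-identical waveforms" judgment can be sketched as a zero-lag normalized cross-correlation between two aligned records, where values close to 1 flag events interpreted as the same asperity re-breaking. (The authors' actual similarity measure is not specified in this abstract; this is an illustrative stand-in.)

```python
import math

def norm_xcorr(a, b):
    """Zero-lag normalized cross-correlation of two equal-length,
    pre-aligned waveforms; 1.0 means identical shape."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den
```

    A pair of records scoring, say, above 0.9 would be candidates for a repeatedly breaking asperity, while dissimilar waveforms point to different rupture areas.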

  11. Segmented ruptures during intracontinental earthquakes: Kyrgyz Range, N-Tien Shan (United States)

    Landgraf, Angela; Patyniak, Magda; Dzhumabaeva, Atyrgul; Abdrakhmatov, Kanatbek; Arrowsmith, J. Ramon; Strecker, Manfred R.


    In the late 19th and early 20th centuries, the northern Tien Shan of Kyrgyzstan and Kazakhstan was affected by a series of major M 6.9 to ~8 earthquakes. Ruptures affected either range fronts or range interiors. During these events (AD1885 Belovodskoe; AD1887 Verny; AD1889 Chilik; AD1911 Chon-Kemin; and AD1938 Kemino-Chu), neighboring faults ruptured and caused severe damage in the area of the Kyrgyz capital Bishkek and the former Kazakh capital Almaty (previously also called Alma-Ata or Verny), which were located in the epicentral areas. As recurrence intervals along single faults in this region are on the order of hundreds to thousands of years, such a sequence of earthquakes is not known in the remaining historic record. Earlier events may thus be recorded in long-term geomorphic archives. Through a combination of high-resolution offset measurements in the field, cosmogenic nuclide and luminescence dating of Quaternary landforms, stratigraphic analysis, and paleoseismological trenching, we evaluate the Quaternary deformation and analyze the paleoseismic history of neighboring fault systems along the Kyrgyz range mountain front. Our study sites are located close to the Kyrgyz capital Bishkek and include the epicentral area of the M6.9 Belovodskoe event of AD1885, but also the region west of it, which was not affected by this remarkable earthquake sequence. To date, the paleoseismic and historical seismic records for the Kyrgyz range indicate segmented ruptures that hardly exceed magnitude seven. Based on scaling relationships, however, the linked fault systems would be capable of generating M 8 events, similar to the long segmented ruptures observed in the mountain interior farther east during the late nineteenth and early twentieth centuries. The available observations thus point to incomplete fault ruptures along the mountain front, rather than earthquakes failing along a full rupture length.

  12. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves (United States)

    Chen, W.; Ni, S.; Wang, Z.


    In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered with sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be substantially improved.
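
    Schematically, a joint inversion of this kind minimizes a weighted sum of waveform misfits over both data sets. The least-squares norm, the weights, and the data layout below are assumptions for illustration, not the CAPjoint implementation:

```python
def joint_misfit(local_pairs, tele_pairs, w_local=1.0, w_tele=1.0):
    """Weighted least-squares misfit over local (Pnl, Rayleigh, Love)
    and teleseismic (P, SH) segments. Each entry in the *_pairs lists
    is an (observed, synthetic) pair of equal-length sample sequences."""
    def seg_misfit(pairs):
        return sum(sum((o - s) ** 2 for o, s in zip(obs, syn))
                   for obs, syn in pairs)
    return w_local * seg_misfit(local_pairs) + w_tele * seg_misfit(tele_pairs)
```

    A grid search over strike, dip, rake, depth and moment would then keep the mechanism minimizing this combined misfit, with the relative weights controlling how strongly each data set constrains the solution.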

  13. Research on the Application of Time Structure Variation Analysis to the Jiashi-Bachu Earthquake Swarm Sequence

    Institute of Scientific and Technical Information of China (English)

    Yang Xin; Long Haiying; Shangguan Wenming; Nie Xiaohong


    From 1997 to 2003, 27 earthquakes with M≥5.0 occurred in the Jiashi-Bachu area of Xinjiang, a rare instance of strong earthquake swarm activity. The swarm had three time segments of activity with different magnitudes, in the years 1997, 1998 and 2003. In the different time segments, seismic activity showed strengthening-quiescence changes of various degrees before earthquakes with M≥5.0. In order to delimit effectively the precursory meaning of the clustering (strengthening)-quiescence change in a sequence, and to seek a time criterion for impending prediction, the nonlinear characteristics of seismic activity have been used to analyze the time structure of the swarm sequence, and further to forecast the future development tendency of earthquake sequences. Using the sequence catalogue recorded by the Kashi Station, and taking each earthquake with M≥5.0 in the sequence as the starting point and the next earthquake with M≥5.0 as the end, statistical analysis has been performed on the time structure relations of the earthquake sequence at different stages. The main results are as follows: (1) Before the major earthquakes with M≥5.0 in the swarm sequence, the time variation coefficient (δ-value) shows anomalies to different degrees. (2) Within 10 days after δ≈1, occurrence of earthquakes with M≥5.0 in the swarm is very possible. (3) The time variation coefficient has three types of change. (4) The change process before M5.0 earthquakes is similar to that before M6.0 earthquakes, with little difference in the threshold value. Within the swarm sequence, it is difficult to delimit accurately the attribute of the current sequences (foreshock or aftershock sequence) and to judge the magnitude of the follow-up earthquake by the δ-value; we can only judge that earthquakes with M5.0 are likely to occur in the sequence. (5) The critical clustering characteristics of the sequence are hierarchical
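
    One common definition of such a time variation coefficient is the coefficient of variation of inter-event times, sketched below; the paper's exact δ-value formula may differ. Under this definition, δ ≈ 1 corresponds to Poisson-like (random) timing, which the abstract flags as a short-term criterion.

```python
import math

def time_variation_coefficient(event_times):
    """Coefficient of variation (std/mean) of inter-event times for a
    chronologically ordered list of event occurrence times."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return math.sqrt(var) / mean
```

    Perfectly periodic events give δ = 0, while strongly clustered (swarm-like) sequences push δ above 1.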

  14. Estimation of Future Earthquake Losses in California (United States)

    Rowshandel, B.; Wills, C. J.; Cao, T.; Reichle, M.; Branum, D.


    Recent developments in earthquake hazard and damage modeling, computing, and data management and processing have made it possible to estimate the levels of damage from earthquakes that may be expected in the future in California. These developments have mostly been published in the open literature and provide an opportunity to estimate the levels of earthquake damage Californians can expect to suffer during the next several decades. Within the past 30 years, earthquake losses have increased dramatically, mostly because our exposure to earthquake hazards has increased. All but four of the recent damaging earthquakes have occurred distant from California's major population centers. Two, the Loma Prieta earthquake and the San Fernando earthquake, occurred on the edges of major populated areas. Loma Prieta caused significant damage in nearby Santa Cruz and in the more distant, heavily populated San Francisco Bay area. The 1971 San Fernando earthquake had an epicenter in the lightly populated San Gabriel Mountains but caused slightly over 2 billion dollars in damage in the Los Angeles area. As urban areas continue to expand, the population and infrastructure at risk increases. When earthquakes occur closer to populated areas, damage is more significant. The relatively minor Whittier Narrows earthquake of 1987 caused over 500 million dollars in damage because it occurred in the Los Angeles metropolitan area, not at its fringes. The Northridge earthquake had fault rupture directly beneath the San Fernando Valley and caused about 46 billion dollars in damage. This vast increase in damage relative to the San Fernando earthquake reflected both the location of the earthquake directly beneath the populated area and the 23 years of continued development and resulting greater exposure to potential damage. We have calculated losses from potential future earthquakes, both as scenarios of potential earthquakes and as annualized losses considering all the potential

  15. Salivaricin D, a novel intrinsically trypsin-resistant lantibiotic from Streptococcus salivarius 5M6c isolated from a healthy infant. (United States)

    Birri, Dagim Jirata; Brede, Dag Anders; Nes, Ingolf F


    In this work, we purified and characterized a newly identified lantibiotic (salivaricin D) from Streptococcus salivarius 5M6c. Salivaricin D is a 34-amino-acid-residue peptide (3,467.55 Da); the locus of the gene encoding this peptide is a 16.5-kb DNA segment which contains genes encoding the precursor of two lantibiotics, two modification enzymes (dehydratase and cyclase), an ABC transporter, a serine-like protease, immunity proteins (lipoprotein and ABC transporters), a response regulator, and a sensor histidine kinase. The immunity gene (salI) was heterologously expressed in a sensitive indicator and provided significant protection against salivaricin D, confirming its immunity function. Salivaricin D is a naturally trypsin-resistant lantibiotic that is similar to nisin-like lantibiotics. It is a relatively broad-spectrum bacteriocin that inhibits members of many genera of Gram-positive bacteria, including the important human pathogens Streptococcus pyogenes and Streptococcus pneumoniae. Thus, Streptococcus salivarius 5M6c may be a potential biological agent for the control of oronasopharynx-colonizing streptococcal pathogens or may be used as a probiotic bacterium.

  16. Large-scale contraction and subsequent disruption of coronal loops during various phases of the M6.2 flare associated with the confined flux rope eruption

    CERN Document Server

    Kushwaha, Upendra; Veronig, Astrid M; Moon, Yong-Jae


    We present a detailed multi-wavelength study of the M6.2 flare which was associated with a confined eruption of a prominence using TRACE, RHESSI, and NoRH observations. The pre-flare phase of this event is characterized by spectacular large-scale contraction of overlying extreme ultraviolet (EUV) coronal loops during which the loop system was subjected to an altitude decrease of ~20 Mm for an extended span of ~30 min. This contraction phase is accompanied by sequential EUV brightenings associated with hard X-ray (HXR) (up to 25 keV) and microwave (MW) sources from low-lying loops in the core of the flaring region which together with X-ray spectra indicate strong localized heating in the source region before the filament activation and associated M-class flare. With the onset of the impulsive phase of the M6.2 flare, we detect HXR and MW sources that exhibit intricate temporal and spatial evolution in relation with the fast rise of the prominence. Following the flare maximum, the filament eruption slowed down ...

  17. The Characteristics of Earthquake Swarms in and around Jiangsu Province

    Institute of Scientific and Technical Information of China (English)

    Huang Yun; Tian Jianming; Miao Ali


    This paper systematically analyzed 36 earthquake swarms in and around Jiangsu Province, summarized their characteristics, and discussed the relationship between earthquake swarms and subsequent strong earthquakes. It also analyzed the judgment criteria for precursory earthquake swarms. Earthquake swarms in Jiangsu Province are concentrated in several areas. Most were of magnitude ML 2.0~3.9, and for most swarms the number of earthquakes was less than 30. The duration of about 55% of the swarms was less than 15 days. The largest magnitude within a swarm was not proportional to the number of earthquakes or to the duration. 78% of the swarms corresponded to forthcoming earthquakes of M > 4.6, of which 57% occurred within one year, indicating a medium- and short-term criterion. The distance between an earthquake swarm and the future earthquake was distributed dispersedly, and no earthquakes occurred in the same location as the swarms. There was no good correlation between the magnitude and corresponding rate of future earthquakes and the intensity of the swarms, nor between the number of earthquakes in a swarm and the corresponding rate. The study also shows that it is better to use the U-p or whole-combination methods to determine the type of an earthquake swarm.

  18. Relation between the characteristics of strong earthquake activities in Chinese mainland and the Wenchuan earthquake

    Institute of Scientific and Technical Information of China (English)

    Xiaodong Zhang; Guohua Yang; Xian Lu; Mingxiao Li; Zhigao Yang


    This paper studies the relations between the great Wenchuan earthquake and the active-quiet periodic characteristics of strong earthquakes, the rhythmic feature of great earthquakes, and the grouped spatial distribution of MS8.0 earthquakes in Chinese mainland. We also studied the relation between the Wenchuan earthquake and the stepwise migration characteristics of MS≥7.0 earthquakes on the North-South seismic belt, the features of energy-release acceleration in the active crustal blocks related to the Wenchuan earthquake, and the relation between the Wenchuan earthquake and the so-called second-arc fault zone. The results can be summarized as follows: ① the occurrence of the Wenchuan earthquake was consistent with the active-quiet periodic characteristics of strong earthquakes; ② its occurrence is consistent with the grouped occurrence of MS8.0 earthquakes and follows the 25-year rhythm (each cycle spanning the same length of time) of great earthquakes; ③ the Wenchuan MS8.0 earthquake follows the well-known stepwise migration feature of strong earthquakes on the North-South seismic belt; ④ the location where the Wenchuan MS8.0 earthquake took place is obviously consistent with the temporal and spatial characteristics of the grouped activity of MS≥7.0 strong earthquakes on the second-arc fault zone; ⑤ the second-arc fault zone is not only the lower boundary for earthquakes with focal depths greater than 30 km, but also appears to be a lower boundary for deep substance movement; and ⑥ there are obvious seismic accelerations near the Qaidam and Qiangtang active crustal blocks (the northern and southern neighbors of the Bayan Har active block, respectively), which agrees with the GPS observation data.

  19. Update earthquake risk assessment in Cairo, Egypt (United States)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan


    The Cairo earthquake (12 October 1992; mb = 5.8) remains, 25 years on, one of the most painful events etched into Egyptians' memory. This is not due to the strength of the earthquake but to the accompanying losses and damage (561 dead, 10,000 injured, and 3000 families left homeless). Nowadays, the most frequent and important question is: "what if this earthquake were repeated today?" In this study, we simulate the ground-motion shaking of an earthquake of the same size as that of 12 October 1992 and the consequent socio-economic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Overall, the assessment clearly indicates that the losses and damage could be two or three times greater in Cairo than in the 1992 earthquake. The risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Sharq) lie at high seismic risk, while three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at a low seismic risk level. Moreover, the building damage estimates show Gharb to be the most vulnerable district. The analysis shows that the Cairo urban area faces high risk: deteriorating buildings and infrastructure make the city particularly vulnerable to earthquakes. For instance, more than 90% of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb), and about 75% of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management.
    Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety

  1. A smartphone application for earthquakes that matter! (United States)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert


    Smartphone applications have swiftly become one of the most popular tools for rapid delivery of earthquake information to the public, some of them having been downloaded more than 1 million times! The advantages are obvious: wherever they are, users can be automatically informed when an earthquake has struck. Just by setting a magnitude threshold and an area of interest, there is no longer any need to browse the internet, as the information reaches you automatically and instantaneously! One question remains: are the earthquake notifications provided always relevant to the public? Which earthquakes really matter to laypeople? One clue comes from newspaper reports showing that, a while after damaging earthquakes, many eyewitnesses delete the application they installed just after the mainshock. Why? Because either the magnitude threshold is set too high and many felt earthquakes are missed, or it is set too low and most of the notifications relate to unfelt earthquakes, only increasing anxiety among the population at each new update. Felt and damaging earthquakes are the ones that matter most to the public (and to authorities). They are the ones of societal importance, even when of small magnitude. A smartphone application developed by EMSC (Euro-Med Seismological Centre) with the financial support of the Fondation MAIF aims to provide suitable notifications by collating different information threads covering tsunamigenic, potentially damaging, and felt earthquakes. Tsunamigenic earthquakes are considered here to be those that are the subject of alert or information messages from the PTWC (Pacific Tsunami Warning Centre), while potentially damaging earthquakes are identified through an automated system called EQIA (Earthquake Qualitative Impact Assessment), developed and operated at EMSC, which rapidly assesses earthquake impact by comparing the population exposed to each expected

  2. TremorScope: A Tool to Image the Deep Workings of the San Andreas Fault near Cholame, CA (United States)

    Hellweg, M.; Burgmann, R.; Taira, T.; Nadeau, R. M.; Dreger, D. S.; Allen, R. M.


    Until recently, active fault zones were thought to deform via seismic slip during earthquakes in the upper, brittle portion of the crust, and by steady, aseismic shear below. Since 2000, however, this view has been shaken by seismological observations of seismic tremor deep in the roots of active fault zones, including on the section of the San Andreas southeast of Parkfield, CA, deep (~20-30 km) beneath the nucleation zone of the great 1857 Fort Tejon earthquake. With funding from the Gordon and Betty Moore Foundation, we have improved the seismic network in the area above the tremor source by installing four new broadband/strong-motion surface stations and four borehole sites with uphole accelerometers and downhole geophones, broadband and strong-motion sensors. Data from all stations are telemetered in real time, analysed as part of normal earthquake monitoring, and archived and distributed through the Northern California Earthquake Data Center (NCEDC). Data from the TremorScope project are improving earthquake monitoring in the region south of Parkfield, including enabling empirical Green's function finite-fault analysis of moderate events in the area. Locations and characterization of tremor episodes are improved by the data recorded at TremorScope stations. For example, the rate of ambient tremor activity in the TremorScope area increased by a factor of ~8 within ~12 hours of the 2014 Napa M6.0 earthquake and remained elevated for ~100 days, exceeding the tremor rate increase following the 2004 Parkfield M6.0 quake despite the difference in epicentral distance (~300 km vs. ~15 km). No comparable increases in tremor rates were observed between the Parkfield and Napa events. This suggests that the sensitivity of the deep tremor zone of the TremorScope region to external stressing may have increased since 2004. We also show how this network's strong-motion instrumentation will provide unprecedented and exciting insights into the

  3. Earthquake Risk, FEMA Earthquake Hazzard Risk Map, Published in 1994, Delaware Geological Survey. (United States)

    NSGIC GIS Inventory (aka Ramona) — This Earthquake Risk dataset, was produced all or in part from Published Reports/Deeds information as of 1994. It is described as 'FEMA Earthquake Hazzard Risk Map'....

  4. The space and time distribution characteristics of the shear stress field for the sequence of the Wuding earthquake

    Institute of Scientific and Technical Information of China (English)


    Following Chen and Duda's model of spectral fall-off γ, the dependence of the peak parameters of ground motion (peak displacement dm, peak velocity vm, and peak acceleration am) on the environmental stress σ0 is studied using near-source digital seismic recordings of the sequence of the Wuding, Yunnan, M = 6.5 earthquake; as a new element, the peak parameters are assumed to be related to the medium Q-value. Three formulae for estimating the environmental stress σ0 from the peak parameters of the three types of ground motion are derived. Using these formulae, σ0 values are calculated for the Wuding earthquake sequence. The results show that the σ0 values calculated by the three formulae are largely consistent, with averages in the range of 5.0~35 MPa for most earthquakes. The sequence is therefore a high-stress earthquake sequence: the high stress values are restricted to a relatively small area close to the epicenter of the main shock. The fine structure of the σ0 contours is closely related to the strong aftershocks. The analysis of the spatial and temporal features of the σ0 values suggests that the earthquake sequence was generated, in a rupture process, at a specific intersection zone of seismotectonic structures under a high-stress background.

  5. A resonance mechanism of earthquakes

    CERN Document Server

    Flambaum, V V


    It was observed in [1] that periodic 4-6 hour pulses of ~200 μHz seismogravitational oscillations (SGO) precede 95% of powerful earthquakes. We explain this by beating between an oscillation eigenmode of a whole tectonic plate and a local eigenmode of an active zone, which transfers oscillation energy from the tectonic plate to the active zone, causing the earthquake. The oscillation frequencies of the plate and those of the active zone are tuned to resonance by additional pressure applied to the active zone due to the collision of neighboring plates or convection in the upper mantle (plume). The corresponding theory may be used for short-term prediction of earthquakes and tsunamis.

  6. Pre-earthquake Magnetic Pulses

    CERN Document Server

    Scoville, John; Freund, Friedemann


    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are observable because their extremely long wavelength allows them to pass through the Earth's crust. Interestingly, the source of these pulses may be triangulated to pinpoint locations where stress is building deep within the crust. We couple a semiconductor drift-diffusion model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.

  7. Great East Japan Earthquake Tsunami (United States)

    Iijima, Y.; Minoura, K.; Hirano, S.; Yamada, T.


    The 11 March 2011, Mw 9.0 Great East Japan Earthquake, already among the most destructive earthquakes in modern history, emanated from a fault rupture that extended an estimated 500 km along the Pacific coast of Honshu. This earthquake ranks fourth among the five strongest temblors since AD 1900 and is the largest in Japan since modern instrumental recordings began 130 years ago. The earthquake triggered a huge tsunami, which invaded the seaside areas of the Pacific coast of East Japan, causing devastating damage along the coast. Artificial structures were destroyed and planted forests were thoroughly eroded. The inrush of turbulent flows washed over backshore areas and dunes, and coastal materials including beach sand were transported inland by the run-up currents. Just after the tsunami, we began a field investigation, measuring the thickness and distribution of the tsunami sediment layers and the inundation depth in the Sendai plain. Ripple marks showing the direction of sediment transport were an important object of observation. We used a soil auger for collecting sediments in the field, and sediment samples were submitted for grain-size and interstitial-water chemistry analysis. Satellite images and aerial photographs are very useful for estimating the hydrogeological effects of tsunami inundation. We checked the correspondence of micro-topography, vegetation, and sediment cover before and after the tsunami. The most conspicuous phenomenon is the damage to pine forests planted to prevent sand shifting: about ninety-five percent of the vegetation cover was lost during the period of rapid currents following the first wave. The landward slopes of seawalls were mostly damaged and destroyed. Some aerial photographs preserve detailed records of wave destruction just behind seawalls, which shows the occurrence of supercritical flows. The large-scale erosion of the backshore behind seawalls is interpreted to have been caused by

  8. The physics of rock failure and earthquakes

    CERN Document Server

    Ohnaka, Mitiyasu


    Despite significant advances in the understanding of earthquake generation processes and derivation of underlying physical laws, controversy remains regarding the constitutive law for earthquake ruptures and how it should be formulated. Laboratory experiments are necessary to obtain high-resolution measurements that allow the physical nature of shear rupture processes to be deduced, and to resolve the controversy. This important book provides a deeper understanding of earthquake processes from nucleation to their dynamic propagation. Its key focus is a deductive approach based on laboratory-derived physical laws and formulae, such as a unifying constitutive law, a constitutive scaling law, and a physical model of shear rupture nucleation. Topics covered include: the fundamentals of rock failure physics, earthquake generation processes, physical scale dependence, and large-earthquake generation cycles. Designed for researchers and professionals in earthquake seismology, rock failure physics, geology and earthq...

  9. Is There An Earthquake Migration Global Pattern? (United States)

    dos Santos, A. M.; Franca, G. S.; da Silveira, A. G.; Frigeri, G. V.; Marotta, G. S.


    Earthquake migration patterns before large earthquakes were proposed by Mogi (1968), and the existence of correlation between earthquakes over large distances, suggesting probable global interdependence, is certainly one of the most intriguing themes in seismology. In this work, we present the phenomenology of a global earthquake migration pattern empirically, in order to establish statistically the long-range correlation and to compare these seismic patterns. We used the available international catalogs, such as NEIC-USGS. We find that the event pairs showing good correlation are confirmed statistically. As Shebalin (1996) has shown for earthquake chains, we present this first stage of earthquake prediction correlation over large distances.

  10. Earthquake Hazard Mitigation Strategy in Indonesia (United States)

    Karnawati, D.; Anderson, R.; Pramumijoyo, S.


    Because of the active tectonic setting of the region, the risks of geological hazards inevitably increase in the Indonesian Archipelago and other Asian countries. Encouraging communities living in vulnerable areas to adapt to the geological environment will be the most appropriate strategy for earthquake risk reduction. Updating the earthquake hazard maps, enhancing existing land-use management, establishing public education strategies and methods, strengthening linkages among the stakeholder institutions of disaster mitigation, and establishing continuous public consultation are the main strategic programs for community resilience in earthquake-vulnerable areas. This paper highlights some important achievements of earthquake hazard mitigation programs in Indonesia, together with the difficulties in implementing such programs. Case examples of the Yogyakarta and Bengkulu earthquake mitigation efforts are also discussed as lessons learned. The new approach to developing the earthquake hazard map, initiated by mapping the psychological aspects of the people living in vulnerable areas, is addressed as well.

  11. Earthquakes in Virginia and vicinity 1774 - 2004 (United States)

    Tarr, Arthur C.; Wheeler, Russell L.


    This map summarizes two and a third centuries of earthquake activity. The seismic history consists of letters, journals, diaries, and newspaper and scholarly articles that supplement seismograph recordings (seismograms) dating from the early twentieth century to the present. All of the pre-instrumental (historical) earthquakes were large enough to be felt by people or to cause shaking damage to buildings and their contents. Later, widespread use of seismographs meant that tremors too small or distant to be felt could be detected and accurately located. Earthquakes are a legitimate concern in Virginia and parts of adjacent States. Moderate earthquakes cause slight local damage somewhere in the map area about twice a decade on the average. Additionally, many buildings in the map area were constructed before earthquake protection was added to local building codes. The large map shows all historical and instrumentally located earthquakes from 1774 through 2004.

  12. Earthquake forewarning in the Cascadia region (United States)

    Gomberg, Joan S.; Atwater, Brian F.; Beeler, Nicholas M.; Bodin, Paul; Davis, Earl; Frankel, Arthur; Hayes, Gavin P.; McConnell, Laura; Melbourne, Tim; Oppenheimer, David H.; Parrish, John G.; Roeloffs, Evelyn A.; Rogers, Gary D.; Sherrod, Brian; Vidale, John; Walsh, Timothy J.; Weaver, Craig S.; Whitmore, Paul M.


    This report, prepared for the National Earthquake Prediction Evaluation Council (NEPEC), is intended as a step toward improving communications about earthquake hazards between information providers and users who coordinate emergency-response activities in the Cascadia region of the Pacific Northwest. NEPEC charged a subcommittee of scientists with writing this report about forewarnings of increased probabilities of a damaging earthquake. We begin by clarifying some terminology: a “prediction” refers to a deterministic statement that a particular future earthquake will or will not occur. In contrast to the 0- or 100-percent likelihood of a deterministic prediction, a “forecast” describes the probability of an earthquake occurring, which may range from >0 to <100 percent, for one or more earthquakes on the plate interface north of the Mendocino region

  13. Dim prospects for earthquake prediction (United States)

    Geller, Robert J.

    I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: “I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a ‘proven nonscience’ [Geller, 1997a] is a paradigm for others to copy.” Readers are invited to verify for themselves that neither “proven nonscience” nor any similar phrase was used by Geller [1997a].

  14. Understand mountain studies from earthquake

    Institute of Scientific and Technical Information of China (English)


    The Sichuan earthquake on 12 May was the most devastating to hit China in the past 60 years or so. As the affected areas were mostly mountainous, serious damage was caused by various secondary disasters, ranging from mountain collapse to the formation of quake lakes. This has left Prof. DENG Wei, Director-General of the Institute of Mountain Hazards and Environment, CAS, much to think about, and he is calling for strengthened studies in mountain science.

  15. Tangshan Women After the Earthquake

    Institute of Scientific and Technical Information of China (English)


    TWENTY years ago, Tangshan, a city in China’s Hebei Province, was struck by an earthquake which killed 240,000 people, injured 160,000, and destroyed 10,200 homes. In 7,200 families there were no survivors. After 20 years of rebuilding, a new Tangshan has risen from the debris. Tangshan women played a very important role in rebuilding their hometown.

  16. Mechanics of Multifault Earthquake Ruptures (United States)

    Fletcher, J. M.; Oskin, M. E.; Teran, O.


    The 2010 El Mayor-Cucapah earthquake of magnitude Mw 7.2 produced the most complex rupture ever documented on the Pacific-North American plate margin, and the network of high- and low-angle faults activated in the event records systematic changes in kinematics with fault orientation. Individual faults have a broad and continuous spectrum of slip sense ranging from endmember dextral strike slip to normal slip, and even faults with a thrust sense of dip slip were commonly observed in the aftershock sequence. Patterns of coseismic slip are consistent with three-dimensional constrictional strain and show that integrated transtensional shearing can be accommodated in a single earthquake. Stress inversions of coseismic surface rupture and aftershock focal mechanisms define two coaxial but permuted stress states. The maximum (σ1) and intermediate (σ2) principal stresses are close in magnitude, but flip orientations due to topography- and density-controlled gradients in lithostatic load along the length of the rupture. Although most large earthquakes throughout the world activate slip on multiple faults, the mechanical conditions of their genesis remain poorly understood. Our work attempts to answer several key questions. 1) Why do complex fault systems exist? They must do something that simple, optimally oriented fault systems cannot, because the two types of faults are commonly located in close proximity. 2) How are faults with diverse orientations and slip senses prepared throughout the interseismic period to fail spontaneously together in a single earthquake? 3) Can a single stress state produce multi-fault failure? 4) Are variations in pore pressure, friction, and cohesion required to produce simultaneous rupture? 5) How is the fabric of surface rupture affected by variations in orientation, kinematics, total geologic slip, and fault zone architecture?

  17. Bayesian kinematic earthquake source models (United States)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.


    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
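The sampling scheme described in this abstract (many MCMC chains run in parallel on a tempered posterior, with poorly fitting models preferentially eliminated by resampling) can be illustrated with a toy transitional-MCMC sketch. This is not the authors' implementation: the single "slip" parameter, the Gaussian likelihood, and all settings below are hypothetical stand-ins for a real finite-fault parameterization.

```python
import math
import random

random.seed(0)

# Toy "data": noisy observations of one hypothetical slip parameter.
TRUE_SLIP, SIGMA = 2.0, 0.5
data = [TRUE_SLIP + random.gauss(0, SIGMA) for _ in range(20)]

def log_like(m):
    """Gaussian log-likelihood of slip value m given the toy data."""
    return sum(-0.5 * ((d - m) / SIGMA) ** 2 for d in data)

def tmcmc(n=500, lo=-10.0, hi=10.0):
    """Tempered (transitional) MCMC: raise the tempering exponent beta
    from 0 (prior) to 1 (posterior), resampling the particle ensemble
    at each stage so poorly fitting models are eliminated."""
    particles = [random.uniform(lo, hi) for _ in range(n)]  # prior draws
    beta = 0.0
    while beta < 1.0:
        dbeta = min(0.2, 1.0 - beta)           # fixed tempering increment
        ll = [log_like(m) for m in particles]
        mx = max(ll)
        w = [math.exp(dbeta * (l - mx)) for l in ll]  # incremental weights
        particles = random.choices(particles, weights=w, k=n)  # resample
        beta += dbeta
        # One Metropolis move per particle at the current temperature.
        for i, m in enumerate(particles):
            cand = m + random.gauss(0, 0.5)
            if lo <= cand <= hi and math.log(max(random.random(), 1e-300)) \
                    < beta * (log_like(cand) - log_like(m)):
                particles[i] = cand
    return particles

post = tmcmc()
mean = sum(post) / len(post)
print(round(mean, 2))  # posterior mean, close to the true slip of 2.0
```

The real problem samples hundreds of parameters; the resampling step is what lets the ensemble concentrate on well-fitting models instead of succumbing to the curse of dimensionality that defeats a single chain.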

  18. Earthquake scenario ground motions for the urban area of Evansville, Indiana (United States)

    Haase, Jennifer S.; Nowack, Robert L.; Cramer, Chris H.; Boyd, Oliver S.; Bauer, Robert A.


    The Wabash Valley seismic zone and the New Madrid seismic zone are the closest large earthquake source zones to Evansville, Indiana. The New Madrid earthquakes of 1811-1812, over 180 kilometers (km) from Evansville, produced ground motions with a Modified Mercalli Intensity of VII near Evansville, the highest intensity observed in Indiana. Liquefaction evidence has been documented less than 40 km away from Evansville resulting from two large earthquakes in the past 12,000 years in the Wabash Valley. Two earthquake scenarios are described in this paper that demonstrate the expected ground motions for a 33×42-km region around Evansville based on a repeat earthquake from each of these source regions. We perform a one-dimensional analysis for a grid of sites that takes into account the amplification or deamplification of ground motion in the unconsolidated soil layer using a new three-dimensional model of seismic velocity and bedrock depth. There are significant differences in the calculated amplification from that expected for National Earthquake Hazard Reduction Program site class D conditions, with deamplification at many locations within the ancient bedrock valley underlying Evansville. Ground motions relative to the acceleration of gravity (g) in the Evansville area from a simulation of a magnitude (M) 7.7 New Madrid earthquake range from 0.15 to 0.25 g for peak ground acceleration, 0.14 to 0.7 g for 0.2-second (s) spectral acceleration, and 0.05 to 0.25 g for 1.0-s spectral acceleration. Ground motions from a M6.8 Wabash Valley earthquake centered 40 km northwest of the city produce ground motions that decrease with distance from 1.5 to 0.3 g for 0.2-s spectral acceleration when they reach the main part of Evansville, but then increase in amplitude from 0.3 to 0.6 g south of the city and the Ohio River. The densest urbanization in Evansville and Henderson, Ky., is within the area of preferential amplification at 1.0-s period for both scenarios, but the area
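The amplification or deamplification of ground motion in an unconsolidated soil layer can be previewed, to first order, from the seismic impedance contrast between soil and bedrock. The sketch below uses only that crude impedance-ratio approximation; it is not the authors' one-dimensional analysis (which models the full soil-column response on a 3-D velocity and bedrock-depth model), and the material properties are hypothetical.

```python
import math

def impedance_amplification(rho_rock, vs_rock, rho_soil, vs_soil):
    """First-order site amplification from the seismic impedance ratio
    (rho in kg/m^3, vs in m/s); ignores resonance and damping."""
    return math.sqrt((rho_rock * vs_rock) / (rho_soil * vs_soil))

# Hypothetical bedrock vs. soft-soil column properties.
amp = impedance_amplification(rho_rock=2500, vs_rock=1500,
                              rho_soil=1800, vs_soil=250)
print(round(amp, 2))  # → 2.89
```

A stiffer soil column (higher vs_soil) drives the ratio toward 1, which is why amplification varies so strongly across the buried bedrock valley underlying Evansville.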

  19. Recent Intermediate Depth Earthquakes in El Salvador, Central Mexico, Cascadia and South-West Japan (United States)

    Lemoine, A.; Gardi, A.; Gutscher, M.; Madariaga, R.


    We studied the occurrence and source parameters of several recent intermediate-depth earthquakes. We concentrated on the Mw=7.7 Salvadoran earthquake of January 13, 2001, a good example of the high seismic risk associated with events of this kind, which occur closer to the coast than the interplate thrust events. The Salvadoran earthquake was an intermediate-depth, downdip extensional event which occurred inside the downgoing Cocos plate, next to the downdip flexure where the dip increases sharply before the slab sinks more steeply. This location corresponds closely to the positions of the Mw=5.7 1996 and Mw=7.3 1982 downdip extensional events. Several recent intermediate-depth earthquakes occurred in subduction zones exhibiting a ``flat slab'' geometry with three distinct flexural bends where flexural stress may be enhanced. The Mw=6.7 Geiyo event showed a downdip extensional mechanism with N-S striking nodal planes. This trend is highly oblique to the trench (Nankai Trough), yet consistent with westward steepening at the SW lateral termination of the SW Japan flat slab. The Mw=6.8 Olympia earthquake in the Cascadia subduction zone occurred at the downdip termination of the Juan de Fuca slab, where the plate dip increases from about 5° to over 30°. The N-S orientation of the focal planes, parallel to the trench, indicates downdip extension. The location at the downdip flexure corresponds closely to the estimated positions of the 1949 M7.1 Olympia and 1965 M6.5 Seattle-Tacoma events. Between 1994 and 1999, in Central Mexico, unusually high intermediate-depth seismicity occurred where several authors have proposed a flat geometry for the Cocos plate. Seven events of magnitude between Mw=5.9 and Mw=7.1 occurred; three were downdip compressional and four were downdip extensional. We can explain these earthquakes by flexural stresses at the downdip and lateral terminations of the supposed flat segment.
    Even if intermediate-depth earthquake occurrence could

  20. Storm sudden commencements and earthquakes (United States)

    Lavrov, Ivan; Sobisevich, Aleksey; Guglielmi, Anatol


    We have investigated statistically the problem of the possible impact of geomagnetic storm sudden commencements (SSC) on global seismic activity. SSCs are used as reference points for a comparative analysis of seismicity by the method of superposed epochs. We selected 405 earthquakes from 1973 to 2010 with magnitudes M~5 from a representative part of the USGS catalog. The comparative analysis of seismicity was carried out over intervals of ~60 min relative to the reference point. With a high degree of reliability, it was found that the number of earthquakes before the reference point is noticeably greater than after it. In other words, global seismicity is suppressed by SSCs. We refer to some studies in which the chemical, thermal, and force mechanisms of the action of the electromagnetic field on rocks are discussed. We emphasize the incompleteness of the study of the correlation between SSCs and earthquakes, because we have not yet succeeded in understanding and interpreting the relationship in terms of physics and mathematics. The study needs to be continued to solve this problem of interest and importance.
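The superposed-epoch comparison used here (counting earthquakes in a window before versus after each SSC reference time) reduces to a simple tally. The sketch below is a minimal illustration with hypothetical times in minutes, not the authors' 405-event analysis.

```python
def superposed_epoch(event_times, ref_times, window=60.0):
    """Count events falling within `window` before vs. after each
    reference time (all times in minutes on a common clock)."""
    before = after = 0
    for r in ref_times:
        for t in event_times:
            dt = t - r
            if -window <= dt < 0:
                before += 1
            elif 0 < dt <= window:
                after += 1
    return before, after

# Hypothetical event times and one SSC reference at t = 0.
events = [-50.0, -10.0, -5.0, 15.0, 200.0]
refs = [0.0]
print(superposed_epoch(events, refs))  # → (3, 1)
```

A "before" count consistently exceeding the "after" count across many reference points is the suppression signature the abstract describes; significance would still need a null model, e.g. shuffled reference times.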

  1. Pre-earthquake magnetic pulses

    Directory of Open Access Journals (Sweden)

    J. Scoville


    Full Text Available A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. They are observable at the surface because their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, the source of these pulses may be triangulated to pinpoint locations where stresses are building deep within the crust. We couple a semiconductor drift-diffusion model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, suggesting that the pulses could be the result of geophysical semiconductor processes.

  2. Global review of human-induced earthquakes.


    Foulger, Gillian R.; Wilson, Miles; Gluyas, Jon; Julian, Bruce R.; Davies, Richard


    The Human-induced Earthquake Database, HiQuake, is a comprehensive record of earthquake sequences postulated to be induced by anthropogenic activity. It contains over 700 cases spanning the period 1868–2016. Activities that have been proposed to induce earthquakes include the impoundment of water reservoirs, erecting tall buildings, coastal engineering, quarrying, extraction of groundwater, coal, minerals, gas, oil and geothermal fluids, excavation of tunnels, and adding material to the subsu...

  3. Global Significant Earthquake Database, 2150 BC to present (United States)

    National Oceanic and Atmospheric Admi