WorldWideScience

Sample records for release earthquake prediction

  1. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  2. Load-Unload Response Ratio and Accelerating Moment/Energy Release Critical Region Scaling and Earthquake Prediction

    Science.gov (United States)

    Yin, X. C.; Mora, P.; Peng, K.; Wang, Y. C.; Weatherley, D.

    The main idea of the Load-Unload Response Ratio (LURR) is that when a system is stable, its response to loading corresponds to its response to unloading, whereas when the system is approaching an unstable state, the responses to loading and unloading become quite different. High LURR values and observations of Accelerating Moment/Energy Release (AMR/AER) prior to large earthquakes have led different research groups to suggest that intermediate-term earthquake prediction is possible and imply that the LURR and AMR/AER observations may have a similar physical origin. To study this possibility, we conducted a retrospective examination of several Australian and Chinese earthquakes with magnitudes ranging from 5.0 to 7.9, including Australia's deadly Newcastle earthquake and the devastating Tangshan earthquake. Both LURR values and best-fit power-law time-to-failure functions were computed using data within a range of distances from the epicenter. Like the best-fit power-law fits in AMR/AER, the LURR value was optimal using data within a certain epicentral distance, implying a critical region for LURR. Furthermore, the LURR critical region size scales with mainshock magnitude and is similar to the AMR/AER critical region size. These results suggest a common physical origin for both the AMR/AER and LURR observations. Further research may provide clues that yield an understanding of this mechanism and help lead to a solid foundation for intermediate-term earthquake prediction.
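
    The abstract above defines the LURR only qualitatively. As a minimal, illustrative sketch (not the authors' exact implementation), the ratio can be computed as the sum of an energy-derived response measure over small events in loading phases divided by the same sum over unloading phases; the use of Benioff strain (exponent m = 1/2) and the loading/unloading labels, which in practice come from tide-induced stress calculations, are assumptions made here for illustration.

```python
import numpy as np

def lurr(event_energies, is_loading, m=0.5):
    """Load-Unload Response Ratio: summed response during loading phases
    divided by the summed response during unloading phases.

    event_energies : energies released by small events in the region
    is_loading     : True where the event fell in a loading phase
    m              : exponent applied to energy (0.5 ~ Benioff strain)
    """
    response = np.asarray(event_energies, dtype=float) ** m
    loading_mask = np.asarray(is_loading, dtype=bool)
    return response[loading_mask].sum() / response[~loading_mask].sum()

# Toy usage: with no loading/unloading asymmetry the ratio stays near 1;
# values well above 1 would be read as approaching instability.
rng = np.random.default_rng(0)
energies = rng.exponential(1e9, size=200)
phases = rng.random(200) < 0.5
print(lurr(energies, phases))
```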

  3. Moment-ratio imaging of seismic regions for earthquake prediction

    Science.gov (United States)

    Lomnitz, Cinna

    1993-10-01

    An algorithm for predicting large earthquakes is proposed. The reciprocal ratio (mri) of the residual seismic moment to the total moment release in a region is used for imaging seismic moment precursors. Peaks in mri precede recent major earthquakes, including the 1985 Michoacan, 1985 central Chile, and 1992 Eureka, California, earthquakes.

  4. Geophysical Anomalies and Earthquake Prediction

    Science.gov (United States)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  5. Earthquake prediction by Kiana Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of the earliest desires of man. Scientists have worked hard to predict earthquakes for a long time. The results of these efforts can generally be divided into two methods of prediction: 1) the statistical method, and 2) the empirical method. In the first method, earthquakes are predicted using statistics and probabilities, while the second method utilizes a variety of precursors for earthquake prediction. The latter method is time consuming and more costly. However, neither method has produced fully satisfactory results up to now. In this paper a new method entitled the 'Kiana Method' is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then, the time and the magnitude of a future earthquake are calculated using electrical formulas, in particular those for electrical capacitors. In this method, daily measurement of electrical resistance in an area shows whether or not the area is capable of producing an earthquake in the future. If the result shows a positive sign, then the occurrence time and the magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  6. Radon observation for earthquake prediction

    Energy Technology Data Exchange (ETDEWEB)

    Wakita, Hiroshi [Tokyo Univ. (Japan)]

    1998-12-31

    Systematic observation of groundwater radon for the purpose of earthquake prediction began in Japan in late 1973. Continuous observations are conducted at fixed stations using deep wells and springs. During the observation period, significant precursory changes, including those before the 1978 Izu-Oshima-kinkai (M7.0) earthquake, as well as numerous coseismic changes were observed. At the time of the 1995 Kobe (M7.2) earthquake, significant changes in chemical components, including radon dissolved in groundwater, were observed near the epicentral region. Precursory changes are presumably caused by permeability changes due to micro-fracturing in basement rock or by migration of water from different sources during the preparation stage of earthquakes. Coseismic changes may be caused by seismic shaking and by changes in regional stress. Significant drops in radon concentration in groundwater have been observed after earthquakes at the KSM site. The occurrence of such drops appears to be time-dependent, and possibly reflects changes in the regional stress state of the observation area. The absence of radon drops seems to be correlated with periods of reduced regional seismic activity. Experience accumulated over the past two decades allows us to reach some conclusions: 1) changes in groundwater radon do occur prior to large earthquakes; 2) some sites are particularly sensitive to earthquake occurrence; and 3) the sensitivity changes over time. (author)

  7. Earthquake predictions using seismic velocity ratios

    Science.gov (United States)

    Sherburne, R. W.

    1979-01-01

    Since the beginning of modern seismology, seismologists have contemplated predicting earthquakes. The usefulness of earthquake predictions for the reduction of human and economic losses and the value of long-range earthquake prediction for planning are obvious. Not as clear are the long-range economic and social impacts of earthquake prediction on a specific area. The general consensus of opinion among scientists and government officials, however, is that the quest for earthquake prediction is a worthwhile goal and should be pursued with a sense of urgency.

  8. Earthquake prediction with electromagnetic phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Hayakawa, Masashi, E-mail: hayakawa@hi-seismo-em.jp [Hayakawa Institute of Seismo Electromagnetics, Co. Ltd., University of Electro-Communications (UEC) Incubation Center, 1-5-1 Chofugaoka, Chofu Tokyo, 182-8585 (Japan); Advanced Wireless & Communications Research Center, UEC, Chofu Tokyo (Japan); Earthquake Analysis Laboratory, Information Systems Inc., 4-8-15, Minami-aoyama, Minato-ku, Tokyo, 107-0062 (Japan); Fuji Security Systems Co. Ltd., Iwato-cho 1, Shinjyuku-ku, Tokyo (Japan)

    2016-02-01

    Short-term earthquake (EQ) prediction is defined as prospective prediction with a time scale of about one week, and is considered one of the most important and urgent topics for human beings. If this short-term prediction is realized, casualties will be drastically reduced. Unlike conventional seismic measurement, we have proposed the use of electromagnetic phenomena as precursors to EQs in prediction, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews this short-term EQ prediction, covering the myth that EQ prediction by seismometers is impossible, the reason why we are interested in electromagnetics, the history of seismo-electromagnetics, the ionospheric perturbation as the most promising candidate for EQ prediction, then the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.

  9. Dim prospects for earthquake prediction

    Science.gov (United States)

    Geller, Robert J.

    I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].

  10. Probabilistic approach to earthquake prediction.

    Directory of Open Access Journals (Sweden)

    G. D'Addezio

    2002-06-01

    The evaluation of any earthquake forecast hypothesis requires the application of rigorous statistical methods. It implies a univocal definition of the model characterising the concerned anomaly or precursor, so that it can be objectively recognised in any circumstance and by any observer. A valid forecast hypothesis is expected to maximise successes and minimise false alarms. The probability gain associated with a precursor is also a popular way to estimate the quality of the predictions based on that precursor. Some scientists make use of a statistical approach based on the computation of the likelihood of an observed realisation of seismic events, and on the comparison of the likelihoods obtained under different hypotheses. This method can be extended to algorithms that allow the computation of the density distribution of the conditional probability of earthquake occurrence in space, time and magnitude. Whatever method is chosen for building up a new hypothesis, the final assessment of its validity should be carried out by a test on a new and independent set of observations. The implementation of this test could, however, be problematic for seismicity characterised by long-term recurrence intervals. Even using the historical record, which may span time windows varying from a few centuries to a few millennia, we have a low probability of catching more than one or two events on the same fault. By extending the record of earthquakes of the past back in time up to several millennia, paleoseismology represents a great opportunity to study how earthquakes recur through time and thus provide innovative contributions to time-dependent seismic hazard assessment. Sets of paleoseismologically dated earthquakes have been established for some faults in the Mediterranean area: the Irpinia fault in Southern Italy, the Fucino fault in Central Italy, the El Asnam fault in Algeria and the Skinos fault in Central Greece. By using the age of the
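
    As a concrete illustration of the likelihood comparison described above, the sketch below scores two forecast hypotheses against the same observed event counts, assuming each hypothesis assigns an expected Poisson rate to every space-time bin; the bins, rates and counts are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(observed_counts, forecast_rates):
    """Log-likelihood of observed counts under a forecast that specifies an
    expected Poisson rate for each space-time(-magnitude) bin."""
    n = np.asarray(observed_counts, dtype=float)
    lam = np.asarray(forecast_rates, dtype=float)
    return float(np.sum(n * np.log(lam) - lam - gammaln(n + 1.0)))

# Two hypothetical forecasts evaluated on the same observations: a positive
# log-likelihood difference favours model A over model B.
observed = np.array([0, 2, 1, 0, 3])
model_a = np.array([0.5, 1.5, 1.0, 0.2, 2.0])
model_b = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
print(poisson_log_likelihood(observed, model_a) -
      poisson_log_likelihood(observed, model_b))
```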

  11. Gambling scores for earthquake predictions and forecasts

    Science.gov (United States)

    Zhuang, Jiancang

    2010-04-01

    This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecast and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
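
    A minimal sketch of the reputation-point bookkeeping described above is given below. The fair-rule payoff used here, (1 - p0)/p0 points per point bet when the target event occurs in the alarm window, where p0 is the event probability under the reference (e.g. Poisson) model, is one natural reading of the abstract and an assumption of this sketch, not a statement of the paper's exact formulas.

```python
def gambling_score(bets, reference_probs, outcomes):
    """Net reputation points over a sequence of yes/no earthquake alarms.

    bets            : points the forecaster stakes on each alarm
    reference_probs : probability of the target event in each alarm window
                      under the reference model (the "house")
    outcomes        : True if the target earthquake occurred in the window

    Fair rule assumed here: a success pays (1 - p0) / p0 per point bet, so the
    expected gain under the reference model is zero; a failure loses the stake.
    """
    score = 0.0
    for stake, p0, hit in zip(bets, reference_probs, outcomes):
        score += stake * (1.0 - p0) / p0 if hit else -stake
    return score

# A forecaster who catches one low-probability event gains far more than the
# points lost on two missed alarms.
print(gambling_score(bets=[1, 1, 1],
                     reference_probs=[0.05, 0.20, 0.10],
                     outcomes=[True, False, False]))
```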

  12. Collaboratory for the Study of Earthquake Predictability

    Science.gov (United States)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure---the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  13. The October 1992 Parkfield, California, earthquake prediction

    Science.gov (United States)

    Langbein, J.

    1992-01-01

    A magnitude 4.7 earthquake occurred near Parkfield, California, on October 20, 1992, at 05:28 UTC (October 19 at 10:28 p.m. local or Pacific Daylight Time). This moderate shock, interpreted as the potential foreshock of a damaging earthquake on the San Andreas fault, triggered long-standing federal, state and local government plans to issue a public warning of an imminent magnitude 6 earthquake near Parkfield. Although the predicted earthquake did not take place, sophisticated suites of instruments deployed as part of the Parkfield Earthquake Prediction Experiment recorded valuable data associated with an unusual series of events. This article describes the geological aspects of these events, which occurred near Parkfield in October 1992. The accompanying article, an edited version of a press conference by Richard Andrews, the Director of the California Office of Emergency Services (OES), describes the governmental response to the prediction.

  14. Strong ground motion prediction using virtual earthquakes.

    Science.gov (United States)

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new avenue for predicting long-period strong ground motion.

  15. Failures and suggestions in Earthquake forecasting and prediction

    Science.gov (United States)

    Sacks, S. I.

    2013-12-01

    Seismologists have had poor success in earthquake prediction. However, wide-ranging observations from earlier great earthquakes show that precursory data can exist. In particular, two aspects seem promising. In agreement with simple physical modeling, b-values decrease in highly loaded fault zones for years before failure. Potentially more usefully, in high-stress regions the breakdown of dilatant patches leading to failure can yield observations related to expelled water. The volume increase (dilatancy) caused by high shear stresses decreases the pore pressure. Eventually, water flows back in, restoring the pore pressure, promoting failure and expelling the extra water. Of course, in a generally stressed region there may be many small patches that fail, as observed before the 1975 Haicheng earthquake. Only a few days before the major event will most of the dilatancy breakdown occur in the fault zone itself, as for the destructive 1976 Tangshan event. 'Water release' effects have been observed before the 1923 great Kanto earthquake, the 1984 Yamasaki event, the 1975 Haicheng and the 1976 Tangshan earthquakes, and also the 1995 Kobe earthquake. While there are obvious difficulties in water release observations, not least because there is currently no observational network anywhere, historical data do suggest some promise if we broaden our approach to this difficult subject.

  16. Elastic energy release in great earthquakes and eruptions

    Directory of Open Access Journals (Sweden)

    Agust Gudmundsson

    2014-05-01

    The sizes of earthquakes are measured using well-defined, measurable quantities such as seismic moment and released (transformed) elastic energy. No similar measures exist for the sizes of volcanic eruptions, making it difficult to compare the energies released in earthquakes and eruptions. Here I provide a new measure of the elastic energy (the potential mechanical energy) associated with magma chamber rupture and contraction (shrinkage) during an eruption. For earthquakes and eruptions, elastic energy derives from two sources: (1) the strain energy stored in the volcano/fault zone before rupture, and (2) the external applied load (force, pressure, stress, displacement) on the volcano/fault zone. From thermodynamic considerations it follows that the elastic energy released or transformed (dU) during an eruption is directly proportional to the excess pressure (pe) in the magma chamber at the time of rupture multiplied by the volume decrease (-dVc) of the chamber, so that dU = pe(-dVc). This formula can be used as a basis for a new eruption magnitude scale, based on elastic energy released, which can be related to the moment-magnitude scale for earthquakes. For very large eruptions (>100 km3), the volume of the feeder-dike is negligible, so that the decrease in chamber volume during an eruption corresponds roughly to the associated volume of erupted materials (Ve), so that the elastic energy is dU = peVe. Using a typical excess pressure of 5 MPa, it is shown that the largest known eruptions on Earth, such as the explosive La Garita Caldera eruption (27-28 million years ago) and the largest single (effusive) Colombia River basalt lava flows (15-16 million years ago), both of which have estimated volumes of about 5000 km3, released elastic energy of the order of 10 EJ. For comparison, the seismic moment of the largest earthquake ever recorded, the M9.5 1960 Chile earthquake, is estimated at 100 ZJ and the associated elastic energy release at 10 EJ.
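
    The order-of-magnitude figures quoted above can be checked with a couple of lines, assuming the relation dU = pe·Ve stated in the abstract with pe = 5 MPa and Ve = 5000 km3.

```python
# Elastic energy released in a very large eruption: dU ~ pe * Ve
pe = 5.0e6            # excess pressure at rupture: 5 MPa, in Pa
Ve = 5000.0 * 1.0e9   # erupted volume: 5000 km^3, in m^3
dU = pe * Ve
print(f"dU = {dU:.1e} J  (~{dU / 1e18:.0f} EJ, i.e. of the order of 10 EJ)")
```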

  17. Using remote sensing to predict earthquake impacts

    Science.gov (United States)

    Fylaktos, Asimakis; Yfantidou, Anastasia

    2017-09-01

    Natural hazards like earthquakes can result in enormous property damage and human casualties in mountainous areas. Italy has always been exposed to numerous earthquakes, mostly concentrated in its central and southern regions. Last year, two seismic events occurred near Norcia (central Italy), which led to substantial loss of life and extensive damage to property, infrastructure and cultural heritage. This research utilizes remote sensing products and GIS software to provide a database of information. We used both SAR images from Sentinel 1A and optical imagery from Landsat 8 to examine differences in topography with the aid of the multi-temporal monitoring technique. This technique is suited to the observation of any surface deformation. The database is a cluster of information regarding the consequences of the earthquakes, grouped into categories such as property and infrastructure damage, regional rifts, cultivation loss, landslides and surface deformations, among others, all mapped in GIS software. Relevant organizations can use these data to calculate the financial impact of this type of earthquake. In the future, we can enrich this database with more regions and enhance the variety of its applications. For instance, we could predict the future impacts of any type of earthquake in several areas, and design a preliminary model of emergency for immediate evacuation and quick recovery response. It is important to know how the surface moves in particular geographical regions like Italy, Cyprus and Greece, where earthquakes are so frequent. We are not able to predict earthquakes, but using data from this research, we may assess the damage that could be caused in the future.

  18. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use observed microseism results from many seismic stations of the world, established to study the time series of the activities of the continental crust, with a view to predicting the possible time of occurrence of an earthquake. We consider microseism time series ...

  19. Earthquake prediction in Japan and natural time analysis of seismicity

    Science.gov (United States)

    Uyeda, S.; Varotsos, P.

    2011-12-01

    The M9 super-giant earthquake with a huge tsunami devastated East Japan on 11 March 2011, causing more than 20,000 casualties and serious damage to the Fukushima nuclear plant. This earthquake was not predicted, either short-term or long-term. Seismologists were shocked because it was not even considered possible at the East Japan subduction zone. However, it was not the only unpredicted earthquake. In fact, throughout several decades of the National Earthquake Prediction Project, not even a single earthquake was predicted. In reality, practically no effective research has been conducted for the most important short-term prediction. This happened because the Japanese National Project was devoted to the construction of elaborate seismic networks, which was not the best way towards short-term prediction. After the Kobe disaster, in order to parry the mounting criticism of their history of no success, they defiantly changed their policy to "stop aiming at short-term prediction because it is impossible and concentrate resources on fundamental research", which meant obtaining "more funding for no-prediction research". The public were not, and are not, informed about this change. Obviously earthquake prediction would be possible only when reliable precursory phenomena are caught, and we have insisted this would most likely be done through non-seismic means such as geochemical/hydrological and electromagnetic monitoring. Admittedly, the lack of convincing precursors for the M9 super-giant earthquake has an adverse effect for us, although its epicenter was far offshore, out of the range of operating monitoring systems. In this presentation, we show a new possibility of finding remarkable precursory signals, ironically, from ordinary seismological catalogs. In the frame of the new time domain termed natural time, an order parameter of seismicity, κ1, has been introduced. This is the variance of natural time χ weighted by the normalised energy release at each χ. In the case that Seismic Electric Signals
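
    For readers unfamiliar with natural time analysis, a minimal sketch of the order parameter κ1 mentioned above is given below, following the standard definition of Varotsos and co-workers: for a catalogue of N events, χk = k/N is the natural time of the k-th event, pk = Qk/ΣQn is its normalised energy release, and κ1 is the variance of χ weighted by pk. The event energies used are illustrative only.

```python
import numpy as np

def kappa1(event_energies):
    """Order parameter of seismicity in natural time:
    kappa1 = <chi^2> - <chi>^2, with chi_k = k/N and weights
    p_k = Q_k / sum(Q) given by the normalised energy release."""
    q = np.asarray(event_energies, dtype=float)
    n = len(q)
    chi = np.arange(1, n + 1) / n
    p = q / q.sum()
    return float(np.sum(p * chi**2) - np.sum(p * chi)**2)

# Illustrative catalogue of event energies (e.g. seismic moments in N*m).
print(kappa1([2.0e15, 5.0e14, 1.2e16, 3.0e15, 8.0e14]))
```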

  20. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at seeing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes) obtained by looking for a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.

  1. 78 FR 64973 - National Earthquake Prediction Evaluation Council (NEPEC)

    Science.gov (United States)

    2013-10-30

    ... DEPARTMENT OF THE INTERIOR Geological Survey [GX14GG009950000] National Earthquake Prediction...: Pursuant to Public Law 96-472, the National Earthquake Prediction Evaluation Council (NEPEC) will hold a... Council shall advise the Director of the U.S. Geological Survey on proposed earthquake predictions, on the...

  2. 76 FR 69761 - National Earthquake Prediction Evaluation Council (NEPEC)

    Science.gov (United States)

    2011-11-09

    ... DEPARTMENT OF THE INTERIOR U.S. Geological Survey National Earthquake Prediction Evaluation... 96-472, the National Earthquake Prediction Evaluation Council (NEPEC) will hold a 1\\1/2\\-day meeting.... Geological Survey on proposed earthquake predictions, on the completeness and scientific validity of the...

  3. 76 FR 19123 - National Earthquake Prediction Evaluation Council (NEPEC)

    Science.gov (United States)

    2011-04-06

    ... Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey, Interior. ACTION: Notice of meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake Prediction Evaluation Council... proposed earthquake predictions, on the completeness and scientific validity of the available data related...

  4. Signals of ENPEMF Used in Earthquake Prediction

    Science.gov (United States)

    Hao, G.; Dong, H.; Zeng, Z.; Wu, G.; Zabrodin, S. M.

    2012-12-01

    The signals of the Earth's natural pulse electromagnetic field (ENPEMF) are a combination of the abnormal crustal magnetic field pulses affected by the earthquake, the induced field of the Earth's endogenous magnetic field, the induced magnetic field of the exogenous variation magnetic field, geomagnetic pulsation disturbance and other energy coupling processes between the Sun and the Earth. As an instantaneous disturbance of the variation field of natural geomagnetism, ENPEMF can be used to predict earthquakes. This theory was introduced by A. A. Vorobyov, who expressed the hypothesis that pulses can arise not only in the atmosphere but also within the Earth's crust due to processes of tectonic-to-electric energy conversion (Vorobyov, 1970; Vorobyov, 1979). The global field time scale of ENPEMF signals has specific stability. Although the wave curves may not overlap completely at different regions, the smoothed diurnal ENPEMF patterns always exhibit the same trend per month. This feature is a good reference for observing abnormalities of the Earth's natural magnetic field in a specific region. The frequencies of the ENPEMF signals generally lie in the kHz range, and frequencies within the 5-25 kHz range can be applied to monitor earthquakes. In Wuhan, the best observation frequency is 14.5 kHz. Two special devices are placed in accordance with the S-N and W-E directions. A dramatic deviation of the pulse waveforms obtained from the instruments from the normal reference envelope diagram indicates a high possibility of an earthquake. The proposed ENPEMF-based earthquake detection method can improve the geodynamic monitoring effect and can enrich earthquake prediction methods. We suggest that prospective further research concerns the exact source composition of ENPEMF signals, the distinction between noise and useful signals, and the effect of the Earth's gravity tide and solid tidal waves. This method may also provide a promising application in

  5. Is It Possible to Predict Strong Earthquakes?

    Science.gov (United States)

    Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.

    2015-07-01

    The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) which reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using the groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency components of both time series. These peaks correspond to the local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of the recent scientific reports confirming that rodents sense imminent earthquakes and the population-genetic model of Kirschvink (Bull. Seismol. Soc. Am. 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over the period of several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional abnormal animal behavior observations, enables one to apply the standard "input-sensor-response" approach to determine what input signals trigger specific seismic escape brain activity responses.

  6. 77 FR 53225 - National Earthquake Prediction Evaluation Council (NEPEC)

    Science.gov (United States)

    2012-08-31

    ... DEPARTMENT OF THE INTERIOR Geological Survey [USGS-GX12GG00995NP00] National Earthquake Prediction... meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake Prediction Evaluation Council... National Earthquake Information Center (NEIC), 1711 Illinois Avenue, Golden, Colorado 80401. The Council is...

  7. Earthquake Prediction in a Big Data World

    Science.gov (United States)

    Kossobokov, V. G.

    2016-12-01

    The digital revolution that started just about 15 years ago has already pushed the global information storage capacity above 5000 exabytes (in optimally compressed bytes) per year. Open data in a Big Data World provide unprecedented opportunities for enhancing studies of the Earth System. However, they also open wide avenues for deceptive associations in inter- and transdisciplinary data and for misleading predictions based on so-called "precursors". Earthquake prediction is not an easy task; it implies a delicate application of statistics. So far, none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Regretfully, in many cases of seismic hazard assessment (SHA), from term-less to time-dependent (probabilistic PSHA or deterministic DSHA), and of short-term earthquake forecasting (StEF), the claims of a high potential of the method are based on a flawed application of statistics and, therefore, are hardly suitable for communication to decision makers. Self-testing must be done in advance of claiming prediction of hazardous areas and/or times. The necessity and possibility of applying simple tools of earthquake prediction strategies, in particular the Error Diagram, introduced by G. M. Molchan in the early 1990s, and the Seismic Roulette null hypothesis as a metric of the alerted space, is evident. The set of errors, i.e. the rates of failure and of the alerted space-time volume, can easily be compared to random guessing, which permits evaluating the effectiveness of the SHA method and determining the optimal choice of parameters with regard to a given cost-benefit function. This and other information obtained in such simple testing may supply us with realistic estimates of the confidence and accuracy of SHA predictions and, if reliable but not necessarily perfect, with related recommendations on the level of risks for decision making in regard to engineering design, insurance
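
    The Error Diagram bookkeeping mentioned above can be sketched as follows: each alarm strategy maps to a point (tau, nu), the fraction of space-time kept in alarm versus the rate of failures-to-predict, and random guessing lies on the diagonal nu = 1 - tau. The sketch below simplifies the alerted space-time volume to a single time axis, and all numbers are illustrative.

```python
def molchan_point(target_times, alarm_windows, total_time):
    """One point (tau, nu) of an error diagram (space dimension omitted).

    target_times  : occurrence times of the target earthquakes
    alarm_windows : (start, end) intervals during which an alarm was declared
    total_time    : duration of the whole experiment, same units
    """
    alarm_time = sum(end - start for start, end in alarm_windows)
    tau = alarm_time / total_time                   # alerted fraction
    hits = sum(any(s <= t < e for s, e in alarm_windows) for t in target_times)
    nu = 1.0 - hits / len(target_times)             # failure-to-predict rate
    return tau, nu

tau, nu = molchan_point(target_times=[12.0, 55.0, 80.0],
                        alarm_windows=[(10.0, 20.0), (70.0, 75.0)],
                        total_time=100.0)
print(tau, nu, "versus random guessing: nu =", 1 - tau)
```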

  8. Earthquake prediction theory and its relation to precursors

    International Nuclear Information System (INIS)

    Negarestani, A.; Setayeshi, S.; Ghannadi-Maragheh, M.; Akasheh, B.

    2001-01-01

    Since we do not have enough knowledge about the physics of earthquakes, the study of seismic precursors plays an important role in earthquake prediction. Earthquake prediction is a science which discusses precursory phenomena during the seismogenic process, then investigates the correlation and association among them and the intrinsic relation between precursors and the seismogenic process, judges the seismic status comprehensively, and finally makes an earthquake prediction. There are two approaches to earthquake prediction. The first is to study the physics of the seismogenic process and to determine the parameters in the process based on source theories, and the second is to use seismic precursors. In this paper the theory of earthquakes is reviewed. We also study the theory of earthquakes using models of earthquake origin, and the relation between the seismogenic process and the various accompanying precursory phenomena. Earthquake prediction is divided into three categories: long-term, medium-term and short-term. We study anomalous seismic behavior, the electric field, crustal deformation, gravity, the magnetism of the Earth, changes in groundwater level, groundwater geochemistry and changes in radon gas emission. Finally, it is concluded that there is a correlation between radon gas emission and earthquake phenomena. Some examples of actual data processing in this area are also given.

  9. Statistical short-term earthquake prediction.

    Science.gov (United States)

    Kagan, Y Y; Knopoff, L

    1987-06-19

    A statistical procedure, derived from a theoretical model of fracture growth, is used to identify a foreshock sequence while it is in progress. As a predictor, the procedure reduces the average uncertainty in the rate of occurrence for a future strong earthquake by a factor of more than 1000 when compared with the Poisson rate of occurrence. About one-third of all main shocks with local magnitude greater than or equal to 4.0 in central California can be predicted in this way, starting from a 7-year database that has a lower magnitude cut off of 1.5. The time scale of such predictions is of the order of a few hours to a few days for foreshocks in the magnitude range from 2.0 to 5.0.
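
    The factor-of-1000 reduction in uncertainty quoted above is a statement about probability gain: the rate of target events inside the alarms divided by the unconditional background (Poisson) rate. The sketch below illustrates that ratio with invented numbers; it is not the authors' actual procedure.

```python
def probability_gain(hits, total_alarm_duration, background_rate):
    """Event rate inside alarms divided by the unconditional Poisson rate.

    hits                 : target earthquakes that fell inside alarm windows
    total_alarm_duration : summed duration of all alarms (e.g. days)
    background_rate      : long-term rate of target events per the same unit
    """
    return (hits / total_alarm_duration) / background_rate

# Illustrative numbers only: 3 mainshocks caught within 30 alarm-days against
# a background of roughly one target event every 3 years.
print(probability_gain(hits=3, total_alarm_duration=30.0,
                       background_rate=1.0 / (3 * 365.0)))
```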

  10. Stigma in science: the case of earthquake prediction.

    Science.gov (United States)

    Joffe, Helene; Rossetto, Tiziana; Bradley, Caroline; O'Connor, Cliodhna

    2018-01-01

    This paper explores how earthquake scientists conceptualise earthquake prediction, particularly given the conviction of six earthquake scientists for manslaughter (subsequently overturned) on 22 October 2012 for having given inappropriate advice to the public prior to the L'Aquila earthquake of 6 April 2009. In the first study of its kind, semi-structured interviews were conducted with 17 earthquake scientists and the transcribed interviews were analysed thematically. The scientists primarily denigrated earthquake prediction, showing strong emotive responses and distancing themselves from earthquake 'prediction' in favour of 'forecasting'. Earthquake prediction was regarded as impossible and harmful. The stigmatisation of the subject is discussed in the light of research on boundary work and stigma in science. The evaluation reveals how mitigation becomes the more favoured endeavour, creating a normative environment that disadvantages those who continue to pursue earthquake prediction research. Recommendations are made for communication with the public on earthquake risk, with a focus on how scientists portray uncertainty. © 2018 The Author(s). Disasters © Overseas Development Institute, 2018.

  11. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    Science.gov (United States)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity; the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset date of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes tend to occur near planetary trigger dates (Moon conjunct Sun, Moon opposite Sun, Moon conjunct or opposite the North or South Nodes). In order to test improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We have developed the FDL numbers for each of those seeds, and examined the earthquake hit rates (for a window of 3, i.e. +-1 day of the target date) and for <6.5R. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with the earthquake date, in which case the FDL method coincides with the MFDL. Based on the MFDL method we present a prediction method capable of predicting global events or localized earthquakes, and we discuss the accuracy of the method as regards both the prediction and location parts. We show example calendar-style predictions for global events as well as for the Greek region using
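
    The general scheme described above, a seed date plus number-sequence offsets and a +-1-day hit window, can be sketched as follows. This is a rough illustration only: the offsets here are plain Fibonacci and Lucas numbers interpreted as days, which is an assumption; the authors' actual FDL construction (including the 'Dual' series) is given in their cited work.

```python
from datetime import date, timedelta

def fibonacci_and_lucas(n_terms=15):
    """First terms of the Fibonacci (1, 1, 2, 3, ...) and Lucas (2, 1, 3, 4, ...)
    sequences, merged and sorted."""
    fib, luc = [1, 1], [2, 1]
    for _ in range(n_terms - 2):
        fib.append(fib[-1] + fib[-2])
        luc.append(luc[-1] + luc[-2])
    return sorted(set(fib + luc))

def candidate_dates(seed, n_terms=15):
    """Candidate future dates: the seed date plus each sequence term, in days."""
    return [seed + timedelta(days=k) for k in fibonacci_and_lucas(n_terms)]

def hit(target, candidates, window_days=1):
    """True if the target date falls within +/- window_days of any candidate."""
    return any(abs((target - c).days) <= window_days for c in candidates)

seed = date(2004, 12, 26)   # example seed: onset date of a major earthquake
target = date(2005, 3, 26)  # 90 days later, within 1 day of the 89-day offset
print(hit(target, candidate_dates(seed)))  # True
```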

  12. Gambling score in earthquake prediction analysis

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2011-03-01

    The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has been recently suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand parametrization of the GS and use the M8 prediction algorithm to illustrate difficulties of the new approach in the analysis of the prediction significance. We show that the level of significance strongly depends (1) on the choice of alarm weights, (2) on the partitioning of the entire alarm volume into component parts and (3) on the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤M < 8.5 events because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.

  13. Prediction of earthquakes: a data evaluation and exchange problem

    Energy Technology Data Exchange (ETDEWEB)

    Melchior, Paul

    1978-11-15

    Recent experiences in earthquake prediction are recalled. Precursor information seems to be available from geodetic measurements, hydrological and geochemical measurements, electric and magnetic measurements, purely seismic phenomena, and zoological phenomena; some new methods are proposed. A list of possible earthquake triggers is given. The dilatancy model is contrasted with a dry model; they seem to be equally successful. In conclusion, the space and time range of the precursors is discussed in relation to the magnitude of earthquakes. (RWR)

  14. The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China

    Science.gov (United States)

    Chen, Q.; Wang, K.

    2009-12-01

    Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not common understanding in China when the M 7.9 Wenchuan earthquake struck the Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in the densely populated region of the country and the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save life in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

  15. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by the fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences at Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
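
    The four models compared above make simple, contrasting point predictions for the next event in a repeating sequence, which the sketch below makes explicit. The long-term loading rate and the toy interval/slip values are illustrative, not data from the paper.

```python
def next_event_forecasts(intervals, slips, loading_rate):
    """Point forecasts for a repeating-earthquake sequence under four models.

    intervals    : past inter-event times (e.g. years)
    slips        : past coseismic slips (e.g. cm)
    loading_rate : long-term slip accumulation rate (cm per year)
    """
    return {
        # time-predictable: the next interval is set by the slip of the last event
        "time_predictable_next_interval": slips[-1] / loading_rate,
        # slip-predictable: slip grows with the time elapsed since the last event;
        # the last observed interval is used here as a stand-in for that time
        "slip_predictable_next_slip": intervals[-1] * loading_rate,
        # fixed recurrence / fixed slip: simply the averages of the past sequence
        "fixed_recurrence_next_interval": sum(intervals) / len(intervals),
        "fixed_slip_next_slip": sum(slips) / len(slips),
    }

print(next_event_forecasts(intervals=[2.1, 1.8, 2.4],
                           slips=[1.0, 1.1, 0.9],
                           loading_rate=0.5))
```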

  16. An application of earthquake prediction algorithm M8 in eastern ...

    Indian Academy of Sciences (India)

    2Institute of Earthquake Prediction Theory and Mathematical Geophysics, ... located about 70 km from a preceding M7.3 earthquake that occurred in ... local extremes of the seismic density distribution, and in the third approach, CI centers were distributed ...... Bird P 2003 An updated digital model of plate boundaries;.

  17. Quantitative Earthquake Prediction on Global and Regional Scales

    International Nuclear Information System (INIS)

    Kossobokov, Vladimir G.

    2006-01-01

    The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely and knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  18. Quantitative Earthquake Prediction on Global and Regional Scales

    Science.gov (United States)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely and knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  19. Implications of fault constitutive properties for earthquake prediction.

    Science.gov (United States)

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
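
    A minimal sketch of the rate- and state-dependent friction formulation referred to above is given below (a Dieterich-type law with the aging form of the state evolution); the parameter values are illustrative laboratory-scale numbers, not values from the paper.

```python
import numpy as np

def rate_state_friction(v, theta, mu0=0.6, a=0.010, b=0.015,
                        dc=1e-5, v_ref=1e-6):
    """Friction coefficient: mu = mu0 + a*ln(V/V_ref) + b*ln(V_ref*theta/Dc)."""
    return mu0 + a * np.log(v / v_ref) + b * np.log(v_ref * theta / dc)

def aging_law(v, theta, dc=1e-5):
    """State evolution (aging law): d(theta)/dt = 1 - V*theta/Dc."""
    return 1.0 - v * theta / dc

# At steady state theta = Dc/V, so a velocity step from V_ref to 10*V_ref
# changes steady-state friction by (a - b)*ln(10), negative here (velocity
# weakening), the condition required for unstable, earthquake-like slip.
v = 1e-5
theta_ss = 1e-5 / v
print(rate_state_friction(v, theta_ss), aging_law(v, theta_ss))
```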

  20. Some considerations regarding earthquake prediction - The case of Vrancea region -

    International Nuclear Information System (INIS)

    Enescu, Bogdan; Enescu, Dumitru

    2000-01-01

    Earthquake prediction research has been conducted for over 100 years with no obvious success. In recent years, new modern concepts regarding earthquake dynamics have added another source of skepticism about the possibility of predicting earthquakes. However, there are some recognizable trends, refined in recent years, which may give rise to more reliable and solid approaches to this complex subject. In the light of these trends, emphasized by Aki, we try to analyze the new developments in the field, especially concerning the Vrancea region. (authors)

  1. Turning the rumor of May 11, 2011 earthquake prediction In Rome, Italy, into an information day on earthquake hazard

    Science.gov (United States)

    Amato, A.; Cultrera, G.; Margheriti, L.; Nostro, C.; Selvaggi, G.; INGVterremoti Team

    2011-12-01

    A devastating earthquake had been predicted for May 11, 2011 in Rome. This prediction was never released officially by anyone, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions. Indeed, around May 11, 2011, a planetary alignment was actually expected, and this contributed to giving the earthquake prediction credibility among people. During the previous months, INGV was overwhelmed with requests for information about this supposed prediction from Roman inhabitants and tourists. Given the considerable media impact of this expected earthquake, INGV decided to organize an Open Day at its headquarters in Rome for people who wanted to learn more about Italian seismicity and the earthquake as a natural phenomenon. The Open Day was preceded by a press conference two days before, in which we talked about this prediction, presented the Open Day, and had a scientific discussion with journalists about the earthquake prediction and, more generally, about the real problem of seismic risk in Italy. About 40 journalists from newspapers, local and national TV channels, press agencies and web news outlets attended the press conference, and hundreds of articles appeared in the following days, advertising the 11 May Open Day. INGV opened to the public all day long (9 am - 9 pm) with the following program: i) meetings with INGV researchers to discuss scientific issues; ii) visits to the seismic monitoring room, open 24h/7 all year; iii) guided tours through interactive exhibitions on earthquakes and the Earth's deep structure; iv) lectures on general topics from the social impact of rumors to seismic risk reduction; v) 13 new videos on the channel YouTube.com/INGVterremoti to explain the earthquake process and give updates on various aspects of seismic monitoring in Italy; vi) distribution of books and brochures. Surprisingly, more than 3000 visitors came to visit INGV

  2. Earthquake prediction rumors can help in building earthquake awareness: the case of May the 11th 2011 in Rome (Italy)

    Science.gov (United States)

    Amato, A.; Arcoraci, L.; Casarotti, E.; Cultrera, G.; Di Stefano, R.; Margheriti, L.; Nostro, C.; Selvaggi, G.; May-11 Team

    2012-04-01

    Banner headlines in an Italian newspaper read on May 11, 2011: "Absence boom in offices: the urban legend in Rome become psychosis". This was the effect of a large-magnitude earthquake prediction in Rome for May 11, 2011. This prediction was never officially released, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions and related them to earthquakes. Indeed, around May 11, 2011, there was a planetary alignment, and this increased the credibility of the earthquake prediction. Given the echo of this earthquake prediction, INGV decided to organize on May 11 (the same day the earthquake was predicted to happen) an Open Day at its headquarters in Rome to inform the public about Italian seismicity and earthquake physics. The Open Day was preceded by a press conference two days before, attended by about 40 journalists from newspapers, local and national TV stations, press agencies and web news magazines. Hundreds of articles appeared in the following two days, advertising the 11 May Open Day. On May 11 the INGV headquarters was peacefully invaded by over 3,000 visitors from 9am to 9pm: families, students, civil protection groups and many journalists. The program included conferences on a wide variety of subjects (from the social impact of rumors to seismic risk reduction) and distribution of books and brochures, in addition to several activities: meetings with INGV researchers to discuss scientific issues, visits to the seismic monitoring room (open 24h/7 all year), and guided tours through interactive exhibitions on earthquakes and Earth's deep structure. During the same day, thirteen new videos were also posted on our youtube/INGVterremoti channel to explain the earthquake process and hazard, and to provide periodic real-time updates on seismicity in Italy. On May 11 no large earthquake happened in Italy. The initiative, built up in a few weeks, had a very large feedback

  3. Predicting hydrocarbon release from soil

    International Nuclear Information System (INIS)

    Poppendieck, D.; Loehr, R.C.

    2002-01-01

    'Full text:' The remediation of hazardous chemicals from soils can be a lengthy and costly process. As a result, recent regulatory initiatives have focused on risk-based corrective action (RBCA) approaches. Such approaches attempt to identify the amount of chemical that can be left at a site with contaminated soil and still be protective of human health and the environment. For hydrocarbons in soils to pose a risk to human health and the environment, the hydrocarbons must be released from the soil and be accessible to microorganisms, earthworms, or other higher-level organisms. The sorption of hydrocarbons to soil can reduce the availability of the hydrocarbon to receptors. Typically in soils and sediments, there is an initial fast release of a hydrocarbon from the soil to the aqueous phase followed by a slower release of the remaining hydrocarbon to the aqueous phase. The rate and extent of slow release can influence aqueous hydrocarbon concentrations and the fate and transport of hydrocarbons in the subsurface. Once the fast fraction of the chemical has been removed from the soil, the remaining fraction may desorb at a rate slow enough that natural mechanisms can attenuate the released hydrocarbon. Hence, active remediation may be needed only until the fast fraction has been removed. However, the fast fraction is a soil- and chemical-specific parameter. This presentation describes a tier I type protocol that has been developed to quickly estimate the fraction of hydrocarbons that is readily released from the soil matrix to the aqueous phase. Previous research in our laboratory and elsewhere has used long-term desorption (four-month) studies to determine the readily released fraction. This research shows that a single short-term (less than two weeks) batch extraction procedure provides a good estimate of the fast released fraction derived from long-term experiments. This procedure can be used as a tool to rapidly evaluate the release and bioavailability of

  4. EPOS1 - a multiparameter measuring system to earthquake prediction research

    Energy Technology Data Exchange (ETDEWEB)

    Streil, T.; Oeser, V. [SARAD GmbH, Dresden (Germany); Heinicke, J.; Koch, U.; Wiegand, J.

    1998-12-31

    The approach to earthquake prediction by geophysical, geochemical and hydrological measurements is a long and winding road. Nevertheless, the results show progress in the field (e.g., Kobe). This progress is also a result of a new generation of measuring equipment. SARAD has developed a versatile measuring system (EPOS1) based on experience and recent results from different research groups. It is able to record selected parameters suitable for earthquake prediction research. A micro-computer system handles data exchange, data management and control. It is connected to a modular sensor system. Sensor modules can be selected according to the actual needs at the measuring site. (author)

  5. Modelling earth current precursors in earthquake prediction

    Directory of Open Access Journals (Sweden)

    R. Di Maio

    1997-06-01

    Full Text Available This paper deals with the theory of earth current precursors of earthquakes. A dilatancy-diffusion-polarization model is proposed to explain the anomalies of the electric potential which are observed on the ground surface prior to some earthquakes. The electric polarization is believed to be an electrokinetic effect due to the invasion of fluids into new pores, which are opened inside a stressed-dilated rock body. The time and space variation of the distribution of the electric potential in a layered earth as well as in a faulted half-space is studied in detail. The results show that the surface response depends on the underground conductivity distribution and on the relative disposition of the measuring dipole with respect to the buried bipole source. A field procedure based on the use of an areal layout of the recording sites is proposed, in order to obtain the most complete information on the time and space evolution of the precursory phenomena in any given seismic region.

  6. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    Science.gov (United States)

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal- or reverse-faulting events. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
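
    The significance levels quoted above can be reproduced, at least approximately, with a generic null-hypothesis calculation: if alarms cover a fraction tau of the normalized space-time volume, the chance that n of N target earthquakes fall inside the alarms by luck is a binomial tail probability. The sketch below illustrates that reasoning only; it is not the authors' exact significance measure, and the tau values are simply the alarm fractions quoted in the abstract.

```python
from math import comb

def alarm_significance(n_hit, n_total, tau):
    """Confidence level (%) that n_hit of n_total target earthquakes falling inside
    alarms covering a fraction tau of space-time is not a chance result."""
    p_value = sum(comb(n_total, k) * tau**k * (1 - tau)**(n_total - k)
                  for k in range(n_hit, n_total + 1))
    return 100.0 * (1.0 - p_value)

if __name__ == "__main__":
    # M8, magnitude 8+: 5 of 5 earthquakes inside alarms covering ~36% of space-time
    print("M8  (M8.0+):", round(alarm_significance(5, 5, 0.36), 2), "%")
    # MSc: 4 of 5 earthquakes inside alarms covering ~18% of space-time
    print("MSc (M8.0+):", round(alarm_significance(4, 5, 0.18), 2), "%")
```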

  7. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    Science.gov (United States)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents the forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated earthquake catalog of magnitude Mw ≥ 6.0 events that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and its parameters. The first is the probability that the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10^20 ergs). The second is the probability that the seismic energy (a × 10^20 ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood functions (ln L) were also estimated for all four probability distribution models; a higher value of ln L suggests a better model and a lower value a worse one. The time of the future earthquake is forecast by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), the 13 April 2016 Myanmar earthquake (M 6.9) and the 24 August 2016 Myanmar earthquake (M 6.8) are located in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area; this shows that the proposed techniques and models yield good forecasting accuracy.
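
    As a hedged illustration of the model-selection step described above, the sketch below fits the four candidate distributions to a sample of per-event released energies and ranks them by log-likelihood. The energies are synthetic stand-ins (the real study derives them from the Mw ≥ 6.0 catalog), and scipy's fisk distribution is used as the log-logistic.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for per-event released energies (units of 1e20 ergs).
rng = np.random.default_rng(0)
energies = rng.lognormal(mean=1.0, sigma=0.8, size=60)

candidates = {
    "gamma":        stats.gamma,
    "lognormal":    stats.lognorm,
    "weibull":      stats.weibull_min,
    "log-logistic": stats.fisk,
}

for name, dist in candidates.items():
    params = dist.fit(energies, floc=0)           # fix the location at zero
    lnL = np.sum(dist.logpdf(energies, *params))  # higher ln L => better fit
    print(f"{name:12s} ln L = {lnL:8.2f}")
```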

  8. Three Millennia of Seemingly Time-Predictable Earthquakes, Tell Ateret

    Science.gov (United States)

    Agnon, Amotz; Marco, Shmuel; Ellenblum, Ronnie

    2014-05-01

    Among various idealized recurrence models of large earthquakes, the "time-predictable" model has a straightforward mechanical interpretation, consistent with simple friction laws. On a time-predictable fault, the time interval between an earthquake and its predecessor is proportional to the slip during the predecessor. The alternative "slip-predictable" model states that the slip during earthquake rupture is proportional to the preceding time interval. Verifying these models requires extended records of high-precision data for both the timing and the amount of slip. The precision of paleoearthquake data can rarely confirm or rule out predictability, and recent papers argue for either time- or slip-predictable behavior. The Ateret site, on the trace of the Dead Sea fault at the Jordan Gorge segment, offers unique precision for determining space-time patterns. Five consecutive slip events, each associated with deformed and offset sets of walls, are correlated with historical earthquakes. Two correlations are based on detailed archaeological, historical, and numismatic evidence. The other three are tentative. The offsets of three of the events are determined with high precision; the other two are not as certain. Accepting all five correlations, the fault exhibits a striking time-predictable behavior, with a long-term slip rate of 3 mm/yr. However, the 30 October 1759 ~0.5 m rupture predicts a subsequent rupture along the Jordan Gorge toward the end of the last century. We speculate that earthquakes on secondary faults (the 25 November 1759 on the Rachaya branch and the 1 January 1837 on the Roum branch, both M≥7) have disrupted the 3 kyr time-predictable pattern.

  9. Predicting earthquakes by analyzing accelerating precursory seismic activity

    Science.gov (United States)

    Varnes, D.J.

    1989-01-01

    During 11 sequences of earthquakes that in retrospect can be classed as foreshocks, the accelerating rate at which seismic moment is released follows, at least in part, a simple equation. This equation (1) is dΣ/dt = C/(t_f − t)^n, where Σ is the cumulative sum until time t of the square roots of seismic moments of individual foreshocks computed from reported magnitudes; C and n are constants; and t_f is a limiting time at which the rate of seismic moment accumulation becomes infinite. The possible time of a major foreshock or main shock, t_f, is found by the best fit of equation (1), or its integral, to step-like plots of Σ versus time using successive estimates of t_f in linearized regressions until the maximum coefficient of determination, r^2, is obtained. Analyzed examples include sequences preceding earthquakes at Cremasta, Greece, 2/5/66; Haicheng, China, 2/4/75; Oaxaca, Mexico, 11/29/78; Petatlan, Mexico, 3/14/79; and Central Chile, 3/3/85. In 29 estimates of main-shock time, made as the sequences developed, the errors in 20 were less than one-half, and in 9 less than one-tenth, the time remaining between the time of the last data used and the main shock. Some precursory sequences, or parts of them, yield no solution. Two sequences appear to include in their first parts the aftershocks of a previous event; plots using the integral of equation (1) show that the sequences are easily separable into aftershock and foreshock segments. Synthetic seismic sequences of shocks at equal time intervals were constructed to follow equation (1), using four values of n. In each series the resulting distributions of magnitudes closely follow the linear Gutenberg-Richter relation log N = a − bM, and the product n times b for each series is the same constant. In various forms and for decades, equation (1) has been used successfully to predict failure times of stressed metals and ceramics, landslides in soil and rock slopes, and volcanic
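
    A minimal numerical version of the fitting procedure just described might grid-search the failure time t_f and linearize equation (1) as log(rate) = log C − n log(t_f − t). In the sketch below the foreshock sequence is synthetic and the grid bounds are arbitrary.

```python
import numpy as np

def fit_time_to_failure(times, sigma, tf_grid):
    """Fit d(Sigma)/dt = C / (tf - t)**n by linear regression in log space,
    returning the tf that maximizes the coefficient of determination r^2."""
    t_mid = 0.5 * (times[1:] + times[:-1])
    rate = np.diff(sigma) / np.diff(times)   # approximate rate from the cumulative curve
    best = None
    for tf in tf_grid:
        if tf <= times[-1]:
            continue
        x = np.log(tf - t_mid)
        y = np.log(rate)
        slope, intercept = np.polyfit(x, y, 1)
        r2 = np.corrcoef(x, y)[0, 1] ** 2
        if best is None or r2 > best[0]:
            best = (r2, tf, -slope, np.exp(intercept))
    return best  # (r^2, estimated tf, exponent n, constant C)

if __name__ == "__main__":
    # Synthetic foreshock sequence accelerating toward tf = 100 (arbitrary time units).
    true_tf, n_true = 100.0, 1.5
    t = np.linspace(0.0, 95.0, 40)
    sigma = (true_tf - t) ** (1 - n_true)    # integral form of equation (1), up to constants
    r2, tf_est, n_est, C_est = fit_time_to_failure(t, sigma, np.linspace(96, 110, 281))
    print(f"estimated tf = {tf_est:.2f}, n = {n_est:.2f}, r^2 = {r2:.4f}")
```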

  10. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  11. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  12. 75 FR 63854 - National Earthquake Prediction Evaluation Council (NEPEC) Advisory Committee

    Science.gov (United States)

    2010-10-18

    ... DEPARTMENT OF THE INTERIOR Geological Survey National Earthquake Prediction Evaluation Council...: Pursuant to Public Law 96-472, the National Earthquake Prediction Evaluation Council (NEPEC) will hold a 2... proposed earthquake predictions, on the completeness and scientific validity of the available data related...

  13. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

    KAUST Repository

    Sawlan, Zaid A

    2012-12-01

    Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. The model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines the tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother is used to estimate the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.
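
    The hybrid filter/smoother described above is specific to this thesis, but the ensemble Kalman filter analysis step it builds on can be sketched generically. In the sketch below the state layout, observation operator and error covariances are hypothetical; appending the earthquake parameters to the state vector is how a state-parameter EnKF would reuse the same update.

```python
import numpy as np

def enkf_analysis(X_f, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X_f : (n_state, n_ens) forecast ensemble (may include appended parameters)
    y   : (n_obs,) observation vector
    H   : (n_obs, n_state) linear observation operator
    R   : (n_obs, n_obs) observation-error covariance
    """
    n_state, n_ens = X_f.shape
    A = X_f - X_f.mean(axis=1, keepdims=True)       # ensemble anomalies
    S = H @ A                                       # anomalies in observation space
    P_yy = S @ S.T / (n_ens - 1) + R                # innovation covariance
    P_xy = A @ S.T / (n_ens - 1)                    # state-observation covariance
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    # Perturb the observations so the analysis ensemble keeps the right spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X_f + K @ (Y - H @ X_f)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_state, n_ens = 4, 50                          # e.g. 3 state values + 1 parameter
    X_f = rng.normal(size=(n_state, n_ens))
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])            # observe the first two state entries
    R = 0.1 * np.eye(2)
    y = np.array([0.5, -0.3])
    X_a = enkf_analysis(X_f, y, H, R, rng)
    print("analysis ensemble mean:", X_a.mean(axis=1))
```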

  14. Earthquakes

    Science.gov (United States)

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  15. Feasibility study of short-term earthquake prediction using ionospheric anomalies immediately before large earthquakes

    Science.gov (United States)

    Heki, K.; He, L.

    2017-12-01

    We showed that positive and negative electron density anomalies emerge above faults immediately before they rupture, 40/20/10 minutes before Mw 9/8/7 earthquakes (Heki, 2011 GRL; Heki and Enomoto, 2013 JGR; He and Heki 2017 JGR). These signals are stronger for earthquakes with larger Mw and under higher background vertical TEC (total electron content) (Heki and Enomoto, 2015 JGR). The epicenter and the positive and negative anomalies align along the local geomagnetic field (He and Heki, 2016 GRL), suggesting that electric fields within the ionosphere are responsible for creating the anomalies (Kuo et al., 2014 JGR; Kelley et al., 2017 JGR). Here we consider the next Nankai Trough earthquake, which may occur within a few tens of years in Southwest Japan, and discuss whether we can recognize its preseismic signatures in TEC by real-time observations with GNSS. During high geomagnetic activity, large-scale traveling ionospheric disturbances (LSTID) often propagate from auroral ovals toward mid-latitude regions and leave signatures similar to preseismic anomalies. This is a main obstacle to using preseismic TEC changes for practical short-term earthquake prediction. In this presentation, we show that the same anomalies appeared 40 minutes before the mainshock above northern Australia, the geomagnetically conjugate point of the 2011 Tohoku-oki earthquake epicenter. This not only demonstrates that electric fields play a role in creating the preseismic TEC anomalies, but also offers a possibility to discriminate preseismic anomalies from those caused by LSTID. By monitoring TEC in the conjugate areas in the two hemispheres, we can recognize anomalies with simultaneous onset as those caused by within-ionosphere electric fields (e.g. preseismic anomalies, night-time MSTID) and anomalies without simultaneous onset as gravity-wave origin disturbances (e.g. LSTID, daytime MSTID).

  16. The use of radon gas techniques for earthquake prediction

    International Nuclear Information System (INIS)

    Al-Hilal, M.

    1993-01-01

    This scientific article explains the application of radon gas measurements in water and soil for monitoring fault activity and earthquake prediction. It also emphasizes, through worldwide examples from the Tashkent Basin in the U.S.S.R. and the San Andreas fault in the U.S.A., that the use of the radon gas technique in fault-originated water as well as in soil gases can be considered an important geological tool within the general framework of earthquake prediction, because of the coherent and temporally anomalous relationship between the density of alpha particles due to radon decay and the level of tectonic activity along fault zones. The article also indicates, through the practical experience of the author, the possibility of applying such techniques in certain parts of Syria. (author). 6 refs., 4 figs

  17. Prediction of the occurrence of related strong earthquakes in Italy

    International Nuclear Information System (INIS)

    Vorobieva, I.A.; Panza, G.F.

    1993-06-01

    In the seismic flow it is often observed that a Strong Earthquake (SE) is followed by Related Strong Earthquakes (RSEs), which occur near the epicentre of the SE with origin times rather close to the origin time of the SE. An algorithm for the prediction of the occurrence of a RSE has been developed, applied for the first time to the seismicity data of the California-Nevada region, and successfully tested in several regions of the World, the statistical significance of the result being 97%. So far, it has been possible to make five successful forward predictions, with no false alarms or failures to predict. The algorithm is applied here to the Italian territory, where the occurrence of RSEs is a particularly rare phenomenon. Our results show that the standard algorithm is successfully applicable directly, without any adjustment of the parameters. Eleven SEs are considered. Of them, three are followed by a RSE, as predicted by the algorithm; eight SEs are not followed by a RSE, and the algorithm predicts this behaviour for seven of them, giving rise to only one false alarm. Since, in Italy, the series of strong earthquakes are quite often relatively short, the algorithm has been extended to handle such situations. The result of this experiment indicates that it is possible to test a SE for the occurrence of a RSE soon after the occurrence of the SE itself, performing timely ''preliminary'' recognition on reduced data sets. This fact, the high confidence level of the retrospective analysis, and the first successful forward predictions made in different parts of the World indicate that, even if additional tests are desirable, the algorithm can already be considered for routine application to Civil Defence. (author). Refs, 3 figs, 7 tabs

  18. Stabilizing intermediate-term medium-range earthquake predictions

    International Nuclear Information System (INIS)

    Kossobokov, V.G.; Romashkova, L.L.; Panza, G.F.; Peresan, A.

    2001-12-01

    A new scheme for the application of the intermediate-term medium-range earthquake prediction algorithm M8 is proposed. The scheme accounts for the natural distribution of seismic activity, eliminates the subjectivity in the positioning of the areas of investigation and provides additional stability of the predictions with respect to the original variant. According to the retroactive testing in Italy and adjacent regions, this improvement is achieved without any significant change of the alarm volume in comparison with the results published so far. (author)

  19. Predictability of Landslide Timing From Quasi-Periodic Precursory Earthquakes

    Science.gov (United States)

    Bell, Andrew F.

    2018-02-01

    Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq in Greenland in June 2017. The evolution in earthquake rate is best explained by an inverse power law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in future.

  20. Can radon gas measurements be used to predict earthquakes?

    International Nuclear Information System (INIS)

    2009-01-01

    After the tragic earthquake of April 6, 2009 in Aquila (Abruzzo), a debate has begun in Italy regarding the alleged prediction of this earthquake by a scientist working in the Gran Sasso National Laboratory, based on radon content measurements. Radon is a radioactive gas originating from the decay of natural radioactive elements present in the soil. IRSN specialists are actively involved in ongoing research projects on the impact of mechanical stresses on radon emissions from underground structures, and some of their results dating from several years ago are being brought up in this debate. These specialists are therefore currently presenting their perspective on the relationships between radon emissions and seismic activity, based on publications on the subject. (authors)

  1. Automated radon-thoron monitoring for earthquake prediction research

    International Nuclear Information System (INIS)

    Shapiro, M.H.; Melvin, J.D.; Copping, N.A.; Tombrello, T.A.; Whitcomb, J.H.

    1980-01-01

    This paper describes an automated instrument for earthquake prediction research which monitors the emission of radon (222Rn) and thoron (220Rn) from rock. The instrument uses aerosol filtration techniques and beta counting to determine radon and thoron levels. Data from the first year of operation of a field prototype suggest an annual cycle in the radon level at the site which is related to thermoelastic strains in the crust. Two anomalous increases in the radon level of short duration have been observed during the first year of operation. One anomaly appears to have been a precursor for a nearby earthquake (magnitude 2.8 on the Richter scale), and the other may have been associated with changing hydrological conditions resulting from heavy rainfall
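
    Separating an annual radon cycle from short anomalous increases, as reported above, can be approximated with a simple baseline-and-threshold screen. The sketch below fits a seasonal sinusoid and flags large positive residuals; the daily counts and the threshold are entirely synthetic and illustrative.

```python
import numpy as np

def radon_anomalies(counts, day_of_year, n_sigma=3.0):
    """Flag days whose radon count exceeds a fitted annual sinusoid by n_sigma."""
    phase = 2 * np.pi * day_of_year / 365.25
    # Least-squares fit of counts ~ a + b*sin(phase) + c*cos(phase).
    G = np.column_stack([np.ones_like(phase), np.sin(phase), np.cos(phase)])
    coef, *_ = np.linalg.lstsq(G, counts, rcond=None)
    resid = counts - G @ coef
    return np.flatnonzero(resid > n_sigma * resid.std())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    days = np.arange(365)
    series = 100 + 20 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 5, 365)
    series[200:203] += 40          # inject a short anomalous increase
    print("anomalous days:", radon_anomalies(series, days))
```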

  2. Prediction of strong earthquake motions on rock surface using evolutionary process models

    International Nuclear Information System (INIS)

    Kameda, H.; Sugito, M.

    1984-01-01

    Stochastic process models are developed for the prediction of strong earthquake motions for engineering design purposes. Earthquake motions with nonstationary frequency content are modeled using the concept of evolutionary processes. Discussion is focused on earthquake motions on bedrock, which are important for the construction of nuclear power plants in seismic regions. On this basis, two earthquake motion prediction models are developed: one (EMP-IB Model) for prediction with given magnitude and epicentral distance, and the other (EMP-IIB Model) to account for successive fault ruptures and the site location relative to the fault of great earthquakes. (Author)

  3. Prediction of accident sequence probabilities in a nuclear power plant due to earthquake events

    International Nuclear Information System (INIS)

    Hudson, J.M.; Collins, J.D.

    1980-01-01

    This paper presents a methodology to predict accident probabilities in nuclear power plants subject to earthquakes. The resulting computer program accesses response data to compute component failure probabilities using fragility functions. Using logical failure definitions for systems, and the calculated component failure probabilities, initiating event and safety system failure probabilities are synthesized. The incorporation of accident sequence expressions allows the calculation of terminal event probabilities. Accident sequences, with their occurrence probabilities, are finally coupled to a specific release category. A unique aspect of the methodology is an analytical procedure for calculating top event probabilities based on the correlated failure of primary events
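
    The fragility-function step of such a methodology can be illustrated with a lognormal fragility curve and a simple series-system synthesis. In the sketch below the median capacities, log-standard deviations, demand level and independence assumption are all hypothetical.

```python
from math import erf, log, sqrt

def lognormal_fragility(pga, median_capacity, beta):
    """Failure probability of a component at peak ground acceleration `pga` (g)
    for a lognormal fragility with the given median capacity and log-std beta."""
    z = log(pga / median_capacity) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def series_system_failure(p_components):
    """Failure probability of a series system of independent components."""
    survive = 1.0
    for p in p_components:
        survive *= (1.0 - p)
    return 1.0 - survive

if __name__ == "__main__":
    pga = 0.3  # hypothetical demand (g) taken from a response analysis
    components = {"pump": (0.9, 0.4), "valve": (1.2, 0.5), "tank": (0.6, 0.35)}
    p_fail = {name: lognormal_fragility(pga, a_m, beta)
              for name, (a_m, beta) in components.items()}
    print("component failure probabilities:", p_fail)
    print("safety-system (series) failure :", series_system_failure(p_fail.values()))
```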

  4. Earthquake prediction research with plastic nuclear track detectors

    International Nuclear Information System (INIS)

    Woith, H.; Enge, W.; Beaujean, R.; Oschlies, K.

    1988-01-01

    Since 1984 a German-Turkish project on earthquake prediction research has been operating at the North Anatolian fault zone in Turkey. Among many other parameters, changes in radon emission have also been investigated. Plastic nuclear track detectors (Kodak cellulose nitrate LR 115) are used to record alpha particles emitted by radon and thoron atoms and their daughter isotopes. The detectors are replaced and analyzed every 3 weeks. Thus a quasi-continuous time sequence of the radon soil gas emission is recorded. We present a comparison between measurements made with electronic counters and plastic track detectors. (author)

  5. Study of Earthquake Disaster Prediction System of Langfang city Based on GIS

    Science.gov (United States)

    Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei

    2017-07-01

    According to China's need to improve its earthquake disaster prevention capability, this paper puts forward an implementation plan for an earthquake disaster prediction system for Langfang city based on GIS. Building on a GIS spatial database, coordinate transformation technology, GIS spatial analysis technology and PHP development technology, a seismic damage factor algorithm is used to predict the damage to the city under earthquake disasters of different intensities. The earthquake disaster prediction system of Langfang city adopts a B/S (browser/server) architecture and provides two-dimensional visualization of the predicted damage degree and its spatial distribution, comprehensive query and analysis, and efficient auxiliary decision-making functions, so that seismically weak parts of the city can be identified and warned rapidly. The system has realized the transformation of the city's earthquake disaster reduction work from static planning to dynamic management, and improved the city's earthquake and disaster prevention capability.

  6. Predicting masking release of lateralized speech

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; MacDonald, Ewen; Dau, Torsten

    2016-01-01

    The largest masking release (MR) was observed when all maskers were on the opposite side of the target. The data in the conditions containing only energetic masking and modulation masking could be accounted for using a binaural extension of the speech-based envelope power spectrum model [sEPSM; Jørgensen et al., 2013, J. Acoust. Soc. Am. 130], which uses a short-term equalization-cancellation process to model binaural unmasking. In the conditions where informational masking (IM) was involved, the predicted SRTs were lower than the measured values because the model is blind to confusions experienced

  7. Radon monitoring and its application for earthquake prediction

    International Nuclear Information System (INIS)

    Ramchandran, T.V.; Shaikh, A.N.; Khan, A.H.; Mayya, Y.S.; Puranik, V.D.; Venkat Raj, V.

    2004-12-01

    Concentrations of a wide range of terrestrial gases and radionuclides, such as 222Rn (radon), H2 (hydrogen), Hg (mercury), CO2 (carbon dioxide) and 4He (helium), in ground water and soil air have commonly been found to be anomalously high along active faults, suggesting that these faults may be the path of least resistance for the outgassing processes of the solid earth. Among the naturally occurring radionuclides, the 238U decay series has received great attention in connection with earthquake prediction and monitoring research all over the world. Due to its nearly ubiquitous occurrence, appreciable abundance, chemical inactivity and convenient half-life (3.823 d), 222Rn in the 238U series is the most extensively studied in this regard. In this report, a brief account of the application of 222Rn monitoring carried out all over the world, studies carried out in India, modeling of earthquake predictions, measurement techniques, measuring equipment and its availability in India, the Indian radon monitoring programme and its prospects are presented. (author)

  8. Prediction of the area affected by earthquake-induced landsliding based on seismological parameters

    Science.gov (United States)

    Marc, Odin; Meunier, Patrick; Hovius, Niels

    2017-07-01

    We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.

  9. Comparison of FISGAS swelling and gas release predictions with experiment

    International Nuclear Information System (INIS)

    Ostensen, R.W.

    1979-01-01

    FISGAS calculations were compared to fuel swelling data from the FD1 tests and to gas release data from the FGR39 test. Late swelling and gas release predictions are satisfactory if vacancy depletion effects are added to the code. However, early swelling predictions are not satisfactory, and early gas release predictions are very poor. Explanation of these discrepancies is speculative

  10. Regional distribution of released earthquake energy in northern Egypt along with Inahass area

    International Nuclear Information System (INIS)

    El-hemamy, S.T.; Adel, A.A. Othman

    1999-01-01

    A review of the seismic history of Egypt indicates some areas of high activity concentrated along Oligocene-Miocene faults. These areas support the idea of recent activation of the Oligocene-Miocene stress cycle. There are similarities in the spatial distribution of recent and historical epicenters. From the tectonic map of Egypt, the distributions of intensity and magnitude show strong activity along the Nile Delta. This is due to the presence of thick layers of recent alluvial sediments. The energy released by earthquakes affects structures. The present study deals with the computed released energies of the reported earthquakes in Egypt and around the Inshas area and their effect on the urban and nuclear facilities inside the Inshas site. Special consideration is given to old and new waste repository sites. The application of the determined released energy reveals that the Inshas site is affected by seismic activity from five seismo-tectonic source zones, namely the Red Sea, Nile Delta, El-Faiyum, Mediterranean Sea and Gulf of Aqaba seismo-tectonic zones. The El-Faiyum seismo-tectonic source zone has the maximum effect on the site, giving a high released energy reaching 5.4 × 10^21 erg

  11. Testing for the 'predictability' of dynamically triggered earthquakes in The Geysers geothermal field

    Science.gov (United States)

    Aiken, Chastity; Meng, Xiaofeng; Hardebeck, Jeanne

    2018-03-01

    The Geysers geothermal field is well known for being susceptible to dynamic triggering of earthquakes by large distant earthquakes, owing to the introduction of fluids for energy production. Yet, it is unknown if dynamic triggering of earthquakes is 'predictable' or whether dynamic triggering could lead to a potential hazard for energy production. In this paper, our goal is to investigate the characteristics of triggering and the physical conditions that promote triggering to determine whether or not triggering is in anyway foreseeable. We find that, at present, triggering in The Geysers is not easily 'predictable' in terms of when and where based on observable physical conditions. However, triggered earthquake magnitude positively correlates with peak imparted dynamic stress, and larger dynamic stresses tend to trigger sequences similar to mainshock-aftershock sequences. Thus, we may be able to 'predict' what size earthquakes to expect at The Geysers following a large distant earthquake.

  12. Study on China’s Earthquake Prediction by Mathematical Analysis and its Application in Catastrophe Insurance

    Science.gov (United States)

    Jianjun, X.; Bingjie, Y.; Rongji, W.

    2018-03-01

    The purpose of this paper was to improve the level of catastrophe insurance. Firstly, earthquake predictions were carried out using a mathematical analysis method. Secondly, foreign catastrophe insurance policies and models were compared. Thirdly, suggestions on catastrophe insurance for China were discussed. Further study should pay more attention to earthquake prediction by introducing big data.

  13. Prediction of site specific ground motion for large earthquake

    International Nuclear Information System (INIS)

    Kamae, Katsuhiro; Irikura, Kojiro; Fukuchi, Yasunaga.

    1990-01-01

    In this paper, we apply the semi-empirical synthesis method of IRIKURA (1983, 1986) to the estimation of site-specific ground motion using accelerograms observed at Kumatori in Osaka prefecture. The target earthquakes used here are a comparatively distant earthquake (Δ=95 km, M=5.6) caused by the YAMASAKI fault and a near earthquake (Δ=27 km, M=5.6). The results obtained are as follows. 1) The accelerograms from the distant earthquake (M=5.6) are synthesized using the aftershock records (M=4.3) of the 1983 YAMASAKI fault earthquake, whose source parameters have been obtained by other authors from the hypocentral distribution of the aftershocks. The resulting synthetic motions show good agreement with the observed ones. 2) The synthesis for a near earthquake (M=5.6, which we call the target earthquake) is made using a small earthquake which occurred in the neighborhood of the target earthquake. Here, we apply two methods for assigning the parameters for the synthesis. One method is to use the parameters of the YAMASAKI fault earthquake, which has the same magnitude as the target earthquake, and the other is to use the parameters obtained from several existing empirical formulas. The resulting synthetic motion with the former parameters shows good agreement with the observed one, but that with the latter does not. 3) We estimate the source parameters from the source spectra of several earthquakes which have been observed at this site. Consequently, we find that small earthquakes (M<4) used as Green's functions should be treated carefully because their stress drops are not constant. 4) We propose that not only the magnitudes but also the seismic moments of the target earthquake and the small earthquake should be specified. (J.P.N.)

  14. Flicker-noise Spectroscopy In Earthquake Prediction Research

    Science.gov (United States)

    Desherevsky, A. V.; Lukk, A. A.; Sidorin, A. Y.; Timashev, S. F.

    It has been found that a two-component model, including a seasonal and a flicker-noise component, is a more adequate model of the statistical structure of time series of long-term geophysical observation data. Unlike white noise, which signifies the absence of any relation between the system's current dynamics and past events in it, the presence of flicker noise indicates that such a relation in the system does exist. Flicker noise possesses the property of scale invariance. It seems natural to relate the self-similarity of the statistical properties of geophysical parameter variations on different scales to the self-similar (fractal) properties of the geophysical medium. At the same time, self-similar time variations of geophysical parameters may indicate the presence of deterministic chaos in the evolution of the geophysical system. An important element of the proposed approach is the application of stochastic models of the preparation of each concrete large seismic event. Instead of regular, for example bay-form, precursor variations, the occurrence of precursors of another kind, associated in particular with variations in parameter fluctuations, should be expected. To solve the problem of large earthquake prediction we use Flicker-Noise Spectroscopy (FNS) as the basis of a new approach proposed by us. The basis of the FNS methodology is a postulate about the important informational significance of sequences of various dynamic irregularities (bursts or spikes, jumps with different characteristic values, discontinuities of derivatives) of the measured temporal, spatial and energetic variables on each level of hierarchical organization of the studied systems. The proposed new method, using integral characteristics of the analyzed signals - power spectra and moments ("structural functions") of different order - as information relations, has demonstrated principally new opportunities in the search for large earthquake precursors already at a preliminary stage of data analysis. This research was supported by
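
    A very reduced flicker-noise check, estimating the spectral exponent alpha in S(f) ~ 1/f^alpha from a periodogram slope, could look like the sketch below; the FNS methodology itself goes well beyond this single statistic, and the test series are synthetic.

```python
import numpy as np

def spectral_exponent(x, dt=1.0):
    """Estimate alpha in S(f) ~ 1/f**alpha from a simple periodogram slope."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]          # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    white = rng.normal(size=4096)      # white noise: alpha close to 0
    brown = np.cumsum(white)           # integrated noise: alpha close to 2
    print("alpha (white noise):", round(spectral_exponent(white), 2))
    print("alpha (brown noise):", round(spectral_exponent(brown), 2))
```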

  15. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time- Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    Science.gov (United States)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work carried out over the past 25 years has established the Stressmeter as an automatic stress measurement system for studying the timing of forthcoming major earthquakes, in support of current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquakes was disclosed. The cause was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development, and the Stressmeter was introduced into the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of maximum and minimum lateral stresses in excellent agreement with the theoretical value, i.e., G = 0.25. All the conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, failed, demonstrating the fundamental difficulties of the conventional methods. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be the same as all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained by the major international stress measurement work in the Hot Dry Rock Projects conducted in the USA, England and Germany are found to be in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

  16. VAN method of short-term earthquake prediction shows promise

    Science.gov (United States)

    Uyeda, Seiya

    Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals are identified, if they exist at all. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions and long (1-10 km) dipoles in appropriate orientations at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.

  17. An Ilustrative Nuclide Release Behavior from an HLW Repository due to an Earthquake Event

    International Nuclear Information System (INIS)

    Lee, Youn-Myoung; Hwang, Yong-Soo; Choi, Jong-Won

    2008-01-01

    A program has been developed for the evaluation of a conceptually modeled high-level waste repository. During the last few years, programs developed with the aid of AMBER and GoldSim, by which nuclide transport in the near- and far-field of a repository as well as transport through the biosphere under various normal and disruptive release scenarios can be modeled and evaluated, have been continuously demonstrated. To show its usability, as was done similarly for the natural groundwater flow scheme, the influence of a possible disruptive event, caused naturally by an earthquake, on the nuclide release behavior from an HLW repository system has been investigated and illustrated with the newly developed GoldSim program

  18. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    Science.gov (United States)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance. We started the 1st earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. For the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of the rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance for magnitude forecasting. On the other hand, the observed spatial distribution is hardly consistent with most models when many earthquakes occur at one spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing center is improving its evaluation system for the 1-day class experiment so that forecasting and testing results are finished within one day. The special issue of 1st part titled Earthquake Forecast

  19. Estimate of airborne release of plutonium from Babcock and Wilcox plant as a result of severe wind hazard and earthquake

    International Nuclear Information System (INIS)

    Mishima, J.; Schwendiman, L.C.; Ayer, J.E.

    1978-10-01

    As part of an interdisciplinary study to evaluate the potential radiological consequences of wind hazard and earthquake upon existing commercial mixed oxide fuel fabrication plants, the potential mass airborne releases of plutonium (source terms) from such events are estimated. The estimated source terms are based upon the fraction of enclosures damaged to three levels of severity (crush, puncture/penetrate, and loss of external filter, in order of decreasing severity), called the damage ratio, and the airborne release if all enclosures suffered that level of damage. The discussion of damage scenarios and source terms is divided into wind hazard and earthquake scenarios in order of increasing severity. The largest airborne releases from the building were for cases involving the catastrophic collapse of the roof over the major production areas--wind hazard at 110 mph and earthquakes with peak ground accelerations of 0.20 to 0.29 g. Wind hazards at higher air velocities and earthquakes with higher ground accelerations do not result in significantly greater source terms. The source terms were calculated as additional mass of respirable particles released with time up to 4 days; under these assumptions, approximately 98% of the mass of material of concern is made airborne from 2 h to 4 days after the event. The overall building source terms from the damage scenarios evaluated are shown in a table. The contribution of individual areas to the overall building source term is presented in order of increasing severity for wind hazard and earthquake

  20. A numerical simulation strategy on occupant evacuation behaviors and casualty prediction in a building during earthquakes

    Science.gov (United States)

    Li, Shuang; Yu, Xiaohui; Zhang, Yanjuan; Zhai, Changhai

    2018-01-01

    Casualty prediction for a building during earthquakes supports economic loss estimation in the performance-based earthquake engineering methodology. Although post-earthquake observations reveal that evacuation affects the number of occupant casualties during earthquakes, few current studies consider occupant movement within the building in casualty prediction procedures. To bridge this knowledge gap, a numerical simulation method using a refined cellular automata model is presented, which can describe various occupant dynamic behaviors and building dimensions. The simulation of occupant evacuation is verified against a recorded evacuation process from a school classroom in the real-life 2013 Ya'an earthquake in China. The occupant casualties in the building under earthquakes are evaluated by coupling a finite element simulation of the building collapse process, the occupant evacuation simulation, and casualty occurrence criteria, with time and space synchronization. A case study of casualty prediction in a building during an earthquake is provided to demonstrate the effect of occupant movement on casualty prediction.
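
    A highly simplified floor-field cellular automaton, loosely in the spirit of the refined model described above, is sketched below; the grid size, exit location and greedy movement rule are invented for illustration and are much cruder than the paper's model.

```python
import numpy as np

def evacuate(occupied, exit_cell, max_steps=200):
    """Greedy cellular-automaton evacuation on a rectangular grid.
    Each step, every occupant moves to the free 4-neighbour cell closest to the
    exit (static floor field = Manhattan distance); returns the exit times."""
    rows, cols = occupied.shape
    dist = np.fromfunction(
        lambda r, c: np.abs(r - exit_cell[0]) + np.abs(c - exit_cell[1]), (rows, cols))
    people = [tuple(p) for p in np.argwhere(occupied)]
    out_times = []
    for step in range(1, max_steps + 1):
        taken = set(people)
        next_people = []
        for (r, c) in people:
            moves = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
            options = [m for m in moves if m not in taken] + [(r, c)]
            target = min(options, key=lambda m: dist[m])
            taken.discard((r, c))
            taken.add(target)
            if target == exit_cell:
                out_times.append(step)       # occupant leaves the building
            else:
                next_people.append(target)
        people = next_people
        if not people:
            break
    return out_times

if __name__ == "__main__":
    grid = np.zeros((8, 10), dtype=bool)
    grid[2:6, 6:9] = True                    # a block of 12 occupants
    times = evacuate(grid, exit_cell=(7, 0))
    print(f"{len(times)} occupants out, last exit at step {max(times)}")
```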

  1. Quantitative prediction of strong motion for a potential earthquake fault

    Directory of Open Access Journals (Sweden)

    Shamita Das

    2010-02-01

    Full Text Available This paper describes a new method for calculating strong motion records for a given seismic region on the basis of the laws of physics, using information on the tectonics and physical properties of the earthquake fault. Our method is based on an earthquake model, called a «barrier model», which is characterized by five source parameters: fault length, width, maximum slip, rupture velocity, and barrier interval. The first three parameters may be constrained from plate tectonics, and the fourth parameter is roughly a constant. The most important parameter controlling the earthquake strong motion is the last parameter, the «barrier interval». There are three methods to estimate the barrier interval for a given seismic region: 1) surface measurement of slip across fault breaks, 2) model fitting with observed near- and far-field seismograms, and 3) scaling-law data for small earthquakes in the region. The barrier intervals were estimated for a dozen earthquakes and four seismic regions by the above three methods. Our preliminary results for California suggest that the barrier interval may be determined if the maximum slip is given. The relation between the barrier interval and maximum slip varies from one seismic region to another. For example, the interval appears to be unusually long for Kilauea, Hawaii, which may explain why only scattered evidence of strong ground shaking was observed in the epicentral area of the Island of Hawaii earthquake of November 29, 1975. The stress drop associated with an individual fault segment, estimated from the barrier interval and maximum slip, lies between 100 and 1000 bars. These values are about one order of magnitude greater than those estimated earlier by the use of crack models without barriers. Thus, the barrier model can resolve, at least partially, the well-known discrepancy between the stress drops measured in the laboratory and those estimated for earthquakes.

  2. Electromagnetic Energy Released in the Subduction (Benioff) Zone in Weeks Previous to Earthquake Occurrence in Central Peru and the Estimation of Earthquake Magnitudes.

    Science.gov (United States)

    Heraud, J. A.; Centa, V. A.; Bleier, T.

    2017-12-01

    During the past four years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction, or Benioff, zone and are connected with the occurrence of earthquakes within a few kilometers of the source of such pulses. This evidence was presented at the AGU 2015 Fall meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. Additional work has been done and the method has now been expanded to provide the instantaneous energy released at the stress areas on the Benioff zone during the precursory stage, before an earthquake occurs. Data collected from several events and in other parts of the country will be shown in a sequential animated form that illustrates the way energy is released in the ULF part of the electromagnetic spectrum. The analysis has been extended in time and to other geographical areas. Only pulses associated with the occurrence of earthquakes are taken into account, in an area which is highly associated with subduction-zone seismic events, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated from those parameters. The results shown, including the animated data video, constitute additional work towards the estimation of the magnitude of an earthquake about to occur, based on electromagnetic pulses that originated at the subduction zone. The method is providing clearer evidence that electromagnetic precursors in effect convey physical and useful information prior to the advent of a seismic event.

  3. Predicting the Maximum Earthquake Magnitude from Seismic Data in Israel and Its Neighboring Countries.

    Science.gov (United States)

    Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon

    2016-01-01

    This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing, while the previous annual records are used for training. The predictive features are based on the Gutenberg-Richter Ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year.
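
A hypothetical sketch of this prediction setup is given below: per region and year, simple seismicity features (event counts, their moving average, and the yearly maximum magnitude) are built, and the target is whether next year's maximum magnitude exceeds the region's median of yearly maxima. The data are synthetic and a logistic regression stands in for the M-IFN algorithm; the paper's exact feature definitions are not reproduced.

```python
# Hypothetical sketch of the prediction setup described above (NOT the M-IFN
# algorithm; a logistic regression is used as a stand-in, on synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

def yearly_features(years, mags, window=3):
    """years, mags: arrays of event years/magnitudes for one region."""
    yr_range = np.arange(years.min(), years.max() + 1)
    counts = np.array([(years == y).sum() for y in yr_range])
    max_mag = np.array([mags[years == y].max() if (years == y).any() else 0.0 for y in yr_range])
    # moving average of event counts (assumed form of the "moving average" indicators)
    ma_counts = np.convolve(counts, np.ones(window) / window, mode="same")
    X = np.column_stack([counts, ma_counts, max_mag])[:-1]      # features for year t
    median_max = np.median(max_mag)
    y = (max_mag[1:] > median_max).astype(int)                  # target for year t+1
    return X, y

# toy usage with synthetic data
rng = np.random.default_rng(0)
years = rng.integers(1983, 2011, size=500)
mags = rng.gumbel(3.0, 0.5, size=500)
X, y = yearly_features(years, mags)
clf = LogisticRegression().fit(X[:-5], y[:-5])   # hold out the last 5 years, as in the paper's split
print(clf.score(X[-5:], y[-5:]))
```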

  4. A Trial for Earthquake Prediction by Precise Monitoring of Deep Ground Water Temperature

    Science.gov (United States)

    Nasuhara, Y.; Otsuki, K.; Yamauchi, T.

    2006-12-01

    A near-future large earthquake is estimated to occur off Miyagi prefecture, northeast Japan, within 20 years at a probability of about 80%. In order to predict this earthquake, we have observed groundwater temperature in a borehole at Sendai city, 100 km west of the asperity. This borehole penetrates the NE-trending active reverse fault, the Nagamachi-Rifu fault zone, at 820 m depth. Our concept for the groundwater observation is that fault zones are natural amplifiers of crustal strain, and hence at 820 m depth we set a very precise quartz temperature sensor with a resolution of 0.0002 deg. C. We confirmed that our observation system works normally by both pumping tests and the systematic temperature changes at different depths. Since the observation started on June 20, 2004, we have found mysterious intermittent temperature fluctuations of two types; one has a period of 5-10 days and an amplitude of ca. 0.1 deg. C, and the other has a period of 11-21 days and an amplitude of ca. 0.2 deg. C. Based on an examination using the product of the Grashof number and the Prandtl number, natural convection of water can occur in the borehole. However, since these temperature fluctuations are observed only at depths around 820 m, it is likely that they represent hydrological behavior proper to the Nagamachi-Rifu fault zone. It is noteworthy that small temperature changes correlatable with the earth tide are superposed on the long-term, large-amplitude fluctuations. The amplitude on the days of the full moon and new moon is ca. 0.001 deg. C. The bottoms of these temperature fluctuations always lag about 6 hours behind the peaks of the earth tide. This is interpreted as water in the borehole being drawn into the fault zone, on which tensional normal stress acts on the days of the full moon and new moon. The amplitude of the crustal strain due to the earth tide was measured at ca. 2×10^-8 strain near our observation site. High frequency temperature noise of

  5. Four Examples of Short-Term and Imminent Prediction of Earthquakes

    Science.gov (United States)

    zeng, zuoxun; Liu, Genshen; Wu, Dabin; Sibgatulin, Victor

    2014-05-01

    We show here 4 examples of short-term and imminent prediction of earthquakes in China last year. They are the Nima Earthquake (Ms5.2), Minxian Earthquake (Ms6.6), Nantou Earthquake (Ms6.7) and Dujiangyan Earthquake (Ms4.1). Imminent prediction of the Nima Earthquake (Ms5.2): based on a comprehensive analysis of the prediction of Victor Sibgatulin using natural electromagnetic pulse anomalies, the prediction of Song Song and Song Kefu using observation of a precursory halo, and an observation of the locations of degasification of the earth in Naqu, Tibet by Zeng Zuoxun himself, the first author made a prediction for an earthquake of around Ms 6 within 10 days in the area of the degasification point (31.5N, 89.0E) at 0:54 on May 8th, 2013. He supplied another degasification point (31N, 86E) for the epicenter prediction at 8:34 of the same day. At 18:54:30 on May 15th, 2013, an earthquake of Ms5.2 occurred in Nima County, Naqu, China. Imminent prediction of the Minxian Earthquake (Ms6.6): at 7:45 on July 22nd, 2013, an earthquake occurred at the border between Minxian and Zhangxian of Dingxi City (34.5N, 104.2E), Gansu province, with a magnitude of Ms6.6. We review the imminent prediction process and its basis for this earthquake using the fingerprint method. Nine-channel or fifteen-channel anomalous component-time curves can be output from the SW monitor for earthquake precursors. These components include geomagnetism, geoelectricity, crust stresses, resonance, and crust inclination. When we compress the time axis, the output curves become different geometric images. The precursor images differ for earthquakes in different regions. Alike or similar images correspond to earthquakes in a certain region. According to the 7-year observation of the precursor images and their corresponding earthquakes, we usually get the fingerprint 6 days before the corresponding earthquakes. The magnitude prediction needs the comparison between the amplitudes of the fingerprints from the same

  6. Earthquake-enhanced permeability – evidence from carbon dioxide release following the ML3.5 earthquake in West Bohemia

    Czech Academy of Sciences Publication Activity Database

    Fischer, Tomáš; Matyska, C.; Heinicke, J.

    2017-01-01

    Vol. 460, February (2017), pp. 60-67 ISSN 0012-821X R&D Projects: GA MŠk LM2010008 Institutional support: RVO:67985530 Keywords: earthquake swarms * fluid triggering * crustal CO2 * fault valve Subject RIV: DC - Seismology, Volcanology, Earth Structure OBOR OECD: Volcanology Impact factor: 4.409, year: 2016

  7. Applications of the gambling score in evaluating earthquake predictions and forecasts

    Science.gov (United States)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, namely the gambling score, for evaluating the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions issued by the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
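
A minimal sketch of the gambling-score idea for binary, alarm-based predictions follows, assuming a fair-odds rule in which a bet of r reputation points placed against a reference probability p0 returns r·(1-p0)/p0 on success and loses r on failure. The exact bookkeeping of the published method may differ; the point is that successfully predicting events the reference model considers unlikely earns a large reward.

```python
# Sketch of a gambling-score evaluation for binary (alarm-based) predictions,
# under a fair-odds rule: betting r points against a reference probability p0
# returns r*(1-p0)/p0 if the predicted event occurs and loses r otherwise.
def gambling_score(predictions, outcomes, reference_probs, bet=1.0):
    """predictions/outcomes: lists of 0/1; reference_probs: reference-model P(event)."""
    score = 0.0
    for pred, obs, p0 in zip(predictions, outcomes, reference_probs):
        if pred == 1:                      # forecaster bets that the event happens
            score += bet * (1.0 - p0) / p0 if obs == 1 else -bet
        else:                              # forecaster bets that it does not happen
            score += bet * p0 / (1.0 - p0) if obs == 0 else -bet
    return score

# usage: the reward is large when a rare event (low p0) is successfully predicted
print(gambling_score([1, 0, 1], [1, 0, 0], [0.05, 0.05, 0.05]))
```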

  8. Damage Level Prediction of Reinforced Concrete Building Based on Earthquake Time History Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Suryanita Reni

    2017-01-01

    Full Text Available Strong earthquake motion can cause building damage if earthquake loading was not considered in the design of the building. This study aims to predict the damage level of a building due to earthquakes using the Artificial Neural Network method. The building model is a reinforced concrete building with ten floors and a story height of 3.6 m. The building model was subjected to earthquake loading based on nine earthquake time history records, each scaled to 0.5 g, 0.75 g, and 1.0 g. The Artificial Neural Networks were designed as 4 architectural models using the MATLAB program. Model 1 used displacement, velocity, and acceleration as input; Model 2 used displacement only; Model 3 used velocity only; and Model 4 used acceleration only. The output of the Neural Networks is the damage level of the building, with the categories Safe (1), Immediate Occupancy (2), Life Safety (3), or Collapse Prevention (4). According to the results, the Neural Network models predict the damage level at a rate between 85% and 95%. Therefore, one solution for analyzing structural responses and damage levels promptly and efficiently when an earthquake occurs is to use an Artificial Neural Network.
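
The sketch below illustrates the kind of mapping the paper describes, from response quantities (displacement, velocity and acceleration, as in Model 1) to a damage class from 1 (Safe) to 4 (Collapse Prevention). It uses scikit-learn in place of the MATLAB networks of the study, and the training data and labeling rule are synthetic assumptions.

```python
# Hypothetical sketch of the ANN damage-level classifier described above
# (scikit-learn stands in for the MATLAB networks of the paper).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# synthetic data standing in for the simulated structural responses
X = rng.uniform([0.0, 0.0, 0.0], [0.5, 2.0, 10.0], size=(200, 3))   # disp (m), vel (m/s), acc (m/s^2)
y = np.digitize(X[:, 0], [0.1, 0.2, 0.35]) + 1                      # toy labeling rule: class 1..4 by displacement

model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
print(model.predict([[0.25, 1.0, 4.0]]))   # predicted damage level for one response triple
```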

  9. Predicting red wolf release success in the southeastern United States

    Science.gov (United States)

    van Manen, Frank T.; Crawford, Barron A.; Clark, Joseph D.

    2000-01-01

    Although the red wolf (Canis rufus) was once found throughout the southeastern United States, indiscriminate killing and habitat destruction reduced its range to a small section of coastal Texas and Louisiana. Wolves trapped from 1973 to 1980 were taken to establish a captive breeding program that was used to repatriate 2 mainland and 3 island red wolf populations. We collected data from 320 red wolf releases in these areas and classified each as a success or failure based on survival and reproductive criteria, and whether recaptures were necessary to resolve conflicts with humans. We evaluated the relations between release success and conditions at the release sites, characteristics of released wolves, and release procedures. Although <44% of the variation in release success was explained, model performance based on jackknife tests indicated a 72-80% correct prediction rate for the 4 operational models we developed. The models indicated that success was associated with human influences on the landscape and the level of wolf habituation to humans prior to release. We applied the models to 31 prospective areas for wolf repatriation and calculated an index of release success for each area. Decision-makers can use these models to objectively rank prospective release areas and compare strengths and weaknesses of each.

  10. Earthquake Prediction Research In Iceland, Applications For Hazard Assessments and Warnings

    Science.gov (United States)

    Stefansson, R.

    Earthquake prediction research in Iceland, applications for hazard assessments and warnings. The first multinational earthquake prediction research project in Iceland was the European Council encouraged SIL project of the Nordic countries, 1988-1995. The path selected for this research was to study the physics of crustal processes leading to earthquakes. It was considered that small earthquakes, down to magnitude zero, were the most significant for this purpose, because of the detailed information which they provide both in time and space. The test area for the project was the earthquake prone region of the South Iceland seismic zone (SISZ). The PRENLAB and PRENLAB-2 projects, 1996-2000, supported by the European Union, were a direct continuation of the SIL project, but with a more multidisciplinary approach. PRENLAB stands for "Earthquake prediction research in a natural laboratory". The basic objective was to advance our understanding in general of where, when and how dangerous earthquake motion might strike. Methods were developed to study crustal processes and conditions, by microearthquake information, by continuous GPS, InSAR, theoretical modelling, fault mapping and paleoseismology. New algorithms were developed for short term warnings. A very useful short term warning was issued twice in the year 2000, one for the sudden start of an eruption of the volcano Hekla on February 26, and the other 25 hours before the second (in a sequence of two) magnitude 6.6 (Ms) earthquake in the South Iceland seismic zone on June 21, with the correct location and approximate size. A formal short term warning, although not going to the public, was also issued before a magnitude 5 earthquake in November 1998. In the presentation it will be shortly described what these warnings were based on. A general hazard assessment was presented in scientific journals 10-15 years ago, assessing within a few kilometers the location of the faults of the two 2000 earthquakes and suggesting

  11. Application of geochemical methods in earthquake prediction in China

    Energy Technology Data Exchange (ETDEWEB)

    Fong-liang, J.; Gui-ru, L.

    1981-05-01

    Several geochemical anomalies were observed before the Haicheng, Longling, Tangshan, and Songpan earthquakes and their strong aftershocks. They included changes in groundwater radon levels; the chemical composition of the groundwater (concentrations of Ca²⁺, Mg²⁺, Cl⁻, SO₄²⁻, and HCO₃⁻ ions); conductivity; and dissolved gases such as H₂, CO₂, etc. In addition, anomalous changes in water color and quality were observed before these large earthquakes. Before some events gases escaped from the surface, and there were reports of "ground odors" being smelled by local residents. The large amount of radon data can be grouped into long-term and short-term anomalies. The long-term anomalies have a radon emission build-up time of a few months to more than a year. The short-term anomalies have durations from a few hours or less to a few months.

  12. Time-predictable model applicability for earthquake occurrence in northeast India and vicinity

    Directory of Open Access Journals (Sweden)

    A. Panthi

    2011-03-01

    Full Text Available Northeast India and its vicinity is one of the most seismically active regions in the world, where a few large and several moderate earthquakes have occurred in the past. In this study the region of northeast India has been considered for an earthquake generation model, using earthquake data as reported by the earthquake catalogues of the National Geophysical Data Centre, the National Earthquake Information Centre (United States Geological Survey) and the book prepared by Gupta et al. (1986), for the period 1906–2008. Events having a surface wave magnitude of Ms ≥ 5.5 were considered for statistical analysis. In this region, nineteen seismogenic sources were identified by the observation of clustering of earthquakes. It is observed that the time interval between two consecutive mainshocks depends upon the preceding mainshock magnitude (Mp) and not on the following mainshock (Mf). This result corroborates the validity of the time-predictable model in northeast India and its adjoining regions. A linear relation between the logarithm of the repeat time (T) of two consecutive events and the magnitude of the preceding mainshock is established in the form log T = cMp + a, where "c" is the positive slope of the line and "a" is a function of the minimum magnitude of the earthquakes considered. The values of the parameters "c" and "a" are estimated to be 0.21 and 0.35 in northeast India and its adjoining regions. The value of c, lower than the average, implies that earthquake occurrence in this region differs from that at plate boundaries. The result derived can be used for long-term seismic hazard estimation in the delineated seismogenic regions.
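
The quoted relation can be turned directly into a repeat-time estimate, as in the sketch below; the repeat time is assumed here to be expressed in years, and the coefficients are the values c = 0.21 and a = 0.35 reported for northeast India.

```python
# Time-predictable recurrence sketch based on the relation quoted above:
# log10(T) = c*Mp + a, with c = 0.21 and a = 0.35 for northeast India.
# The repeat time T is assumed here to be in years (an assumption of this sketch).
def repeat_time(mp, c=0.21, a=0.35):
    """Expected time to the next mainshock after a preceding mainshock of magnitude mp."""
    return 10 ** (c * mp + a)

for mp in (5.5, 6.0, 6.5, 7.0):
    print(f"Mp = {mp:.1f}  ->  T ~ {repeat_time(mp):.0f} yr")
```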

  13. Long-term predictability of regions and dates of strong earthquakes

    Science.gov (United States)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    Results on the long-term predictability of strong earthquakes are discussed. It is shown that dates of earthquakes with M>5.5 could be determined several months in advance of the event. The magnitude and the region of an approaching earthquake could be specified in the time-frame of a month before the event. Determination of the number of M6+ earthquakes expected to occur during the analyzed year is performed using a special sequence diagram of seismic activity for the century time frame. Date analysis could be performed with a lead time of 15-20 years. The data are verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods having different prediction horizons. Determination of days of potential earthquakes with M5.5+ is performed using astronomical data. Earthquakes occur on days of oppositions of Solar System planets (arranged in a single line). Moreover, the strongest earthquakes occur when the vector "Sun-Solar System barycenter" lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance is confirmed by practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease of the minimum daily temperature, an increase of relative humidity, and an abrupt change of atmospheric pressure (RAMES method). The time difference between predicted and actual dates is no more than one day. This indicator is registered 104 days before the earthquake, so it was called Harmonic 104 or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical fact. Also, 104 days is a quarter of a Chandler period, so this fact gives insight into the correlation between the anomalies of Earth orientation

  14. Mechanistic prediction of iodine and cesium release from LWR fuel

    International Nuclear Information System (INIS)

    Rest, J.

    1983-12-01

    A theoretical model (FASTGRASS) has been used for predicting the behavior of fission gas and volatile fission products (VFPs) in UO₂-base fuels during steady-state and transient conditions. This model represents an attempt to develop an efficient predictive capability for the full range of possible reactor operating conditions. Fission products released from the fuel are assumed to reach the fuel surface by successively diffusing (via atomic and gas-bubble mobility) from the grains to grain faces and then to the grain edges, where the fission products are released through a network of interconnected tunnels of fission-gas-induced and fabricated porosity.

  15. Monitoring of the future strong Vrancea events by using the CN formal earthquake prediction algorithm

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Novikova, O.V.; Panza, G.F.; Radulian, M.

    2003-06-01

    The preparation process of the strong subcrustal events originating in the Vrancea region, Romania, is monitored using an intermediate-term medium-range earthquake prediction method - the CN algorithm (Keilis-Borok and Rotwain, 1990). We present the results of the monitoring of the preparation of future strong earthquakes for the time interval from January 1, 1994 (1994.1.1), to January 1, 2003 (2003.1.1), using the updated catalogue of the Romanian local network. The database considered for the CN monitoring of the preparation of future strong earthquakes in Vrancea covers the period from 1966.3.1 to 2003.1.1 and the geographical rectangle 44.8 deg - 48.4 deg N, 25.0 deg - 28.0 deg E. The algorithm correctly identifies, by retrospective prediction, the TIPs (times of increased probability) for all three strong earthquakes (Mo=6.4) that occurred in Vrancea during this period. The cumulative duration of the TIPs represents 26.5% of the total period of time considered (1966.3.1-2003.1.1). The monitoring of current seismicity using the CN algorithm has been carried out since 1994. No strong earthquakes occurred from 1994.1.1 to 2003.1.1, but CN declared an extended false alarm from 1999.5.1 to 2000.11.1. No alarm is currently declared in the region (as of January 1, 2003), as can be seen from the TIP diagram shown. (author)

  16. An Earthquake Prediction System Using The Time Series Analyses of Earthquake Property And Crust Motion

    International Nuclear Information System (INIS)

    Takeda, Fumihide; Takeo, Makoto

    2004-01-01

    We have developed a short-term deterministic earthquake (EQ) forecasting system, similar to those used for typhoons and hurricanes, which has been under test operation at the website http://www.tec21.jp/ since June 2003. We use the focus (hypocenter) and crustal displacement data recently opened to the public by the Japanese seismograph and global positioning system (GPS) networks, respectively. Our system divides the forecasting area into five regional areas of Japan, each of which is about 5 deg. by 5 deg. We have found that it can forecast the focus, date of occurrence and magnitude (M) of an impending EQ (whose M is larger than about 6), all within narrow limits. We have two examples to describe the system. One is the 2003/09/26 EQ of M 8 in the Hokkaido area, which is a hindcast. Another is a successful rollout of the most recent forecast, on the 2004/05/30 EQ of M 6.7 off the coast of the southern Kanto (Tokyo) area.

  17. Predicting Posttraumatic Stress Symptom Prevalence and Local Distribution after an Earthquake with Scarce Data.

    Science.gov (United States)

    Dussaillant, Francisca; Apablaza, Mauricio

    2017-08-01

    After a major earthquake, the assignment of scarce mental health emergency personnel to different geographic areas is crucial to the effective management of the crisis. The scarce information that is available in the aftermath of a disaster may be valuable in helping predict where the populations most in need are located. The objectives of this study were to derive algorithms to predict posttraumatic stress (PTS) symptom prevalence and local distribution after an earthquake, and to test whether there are algorithms that require few input data and are still reasonably predictive. A rich database of PTS symptoms, reported after Chile's 2010 earthquake and tsunami, was used. Several model specifications for the mean and centiles of the distribution of PTS symptoms, together with posttraumatic stress disorder (PTSD) prevalence, were estimated via linear and quantile regressions. The models varied in the set of covariates included. Adjusted R² for the most liberal specifications (in terms of the number of covariates included) ranged from 0.62 to 0.74, depending on the outcome. When including only peak ground acceleration (PGA), poverty rate, and household damage in linear and quadratic form, predictive capacity was still good (adjusted R² from 0.59 to 0.67 was obtained). Information about local poverty, household damage, and PGA can be used as an aid to predict PTS symptom prevalence and local distribution after an earthquake. This can help improve the assignment of mental health personnel to the affected localities. Dussaillant F, Apablaza M. Predicting posttraumatic stress symptom prevalence and local distribution after an earthquake with scarce data. Prehosp Disaster Med. 2017;32(4):357-367.
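
A sketch of the reduced-covariate specification described above follows: local PTS symptom prevalence regressed on PGA, poverty rate, and household damage, each entering in linear and quadratic form (interaction terms are also produced by the feature expansion used here). The data, coefficients and variable scaling below are synthetic assumptions used only to show the shape of the model.

```python
# Sketch of the reduced-covariate specification: predict local PTS symptom
# prevalence from PGA, poverty rate and household damage, with linear and
# quadratic terms (interactions included by PolynomialFeatures).  Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
n = 300
pga, poverty, damage = rng.uniform(0.1, 0.8, n), rng.uniform(0, 0.4, n), rng.uniform(0, 1.0, n)
prevalence = 0.1 + 0.3 * pga + 0.2 * poverty + 0.25 * damage**2 + rng.normal(0, 0.03, n)

X = np.column_stack([pga, poverty, damage])
X_quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
model = LinearRegression().fit(X_quad, prevalence)
print(f"in-sample R2: {model.score(X_quad, prevalence):.2f}")
```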

  18. Radon/helium studies for earthquake prediction N-W Himalaya

    International Nuclear Information System (INIS)

    Virk, H.S.

    1999-01-01

    The paper presents preliminary data from the radon monitoring started in the Himalayan orogenic belt. Radon anomalies are correlated with microseismic activity in the N-W Himalaya. The He/Rn ratio will be used as a predictive tool for earthquakes.

  19. Prediction of organic combined sewer sediment release and transport

    OpenAIRE

    Seco, Raquel Irene; Schellart, Alma Neeltje Antonia; Gómez Valentín, Manuel; Tait, Simon

    2018-01-01

    Accurate predictions of the sediment loads released by sewer overflow discharges are important for protecting vulnerable receiving waters. These predictions are sensitive to the estimated sediment characteristics and to the site conditions of in-pipe deposit formation. Applying them without a detailed analysis and understanding of the initial conditions under which in-sewer deposits were formed normally results in very poor estimates. In this study, in-sewer sedimen...

  20. Can Vrancea earthquakes be accurately predicted from unusual bio-system behavior and seismic-electromagnetic records?

    International Nuclear Information System (INIS)

    Enescu, D.; Chitaru, C.; Enescu, B.D.

    1999-01-01

    The relevance of bio-seismic research for the short-term prediction of strong Vrancea earthquakes is underscored. Unusual animal behavior before and during Vrancea earthquakes is described and illustrated for the individual case of the major earthquake of March 4, 1977. Several hypotheses to account for the uncommon behavior of bio-systems in relation to earthquakes in general, and strong Vrancea earthquakes in particular, are discussed in the second section. It is recalled that promising preliminary results concerning the identification of seismic-electromagnetic precursor signals have been obtained in the Vrancea seismogenic area using special, highly sensitive equipment. The need to correlate bio-seismic and seismic-electromagnetic research is evident. Further investigations are suggested and urgent steps are proposed in order to achieve successful short-term prediction of strong Vrancea earthquakes. (authors)

  1. Roles of Radon-222 and other natural radionuclides in earthquake prediction

    International Nuclear Information System (INIS)

    Smith, A.R.; Wollenberg, H.A.; Mosier, D.F.

    1980-01-01

    The concentration of ²²²Rn in subsurface waters is one of the natural parameters being investigated to help develop the capability to predict destructive earthquakes. Since 1966, scientists in several nations have sought to link radon variations with ongoing seismic activity, primarily through the dilatancy model for earthquake occurrence. Within the scope of these studies, alpha-, beta-, and gamma-radiation detection techniques have been used in both discrete-sampling and continuous-monitoring programs. These measurement techniques are reviewed in terms of instrumentation adapted to seismic-monitoring purposes. A recent Lawrence Berkeley Laboratory study conducted in central California incorporated discrete sampling of wells in the aftershock area of the 1975 Oroville earthquake and continuous monitoring of water radon in a well on the San Andreas Fault. The results presented show short-term radon variations that may be associated with aftershocks, and diurnal changes that may reflect earth tidal forces.

  2. A global earthquake discrimination scheme to optimize ground-motion prediction equation selection

    Science.gov (United States)

    Garcia, Daniel; Wald, David J.; Hearne, Michael

    2012-01-01

    We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
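
The decision logic summarized above can be pictured with a small sketch like the one below, which maps an event's tectonic regime, depth, and interface proximity to a GMPE class. This is an illustration only, not the USGS implementation; the regime labels, depth thresholds, and class names are assumptions of the example.

```python
# Illustrative decision sketch (not the USGS implementation) of how an event's
# tectonic context could map to a GMPE class, following the logic summarized above.
# Region labels and depth thresholds are assumptions for the example.
def select_gmpe_class(regime, depth_km, near_interface=False):
    """regime: 'active_crustal', 'stable_continental' or 'subduction'."""
    if regime == "subduction":
        if near_interface and depth_km < 60:
            return "subduction interface GMPEs"
        return "subduction intraslab GMPEs" if depth_km >= 60 else "upper-plate / outer-rise crustal GMPEs"
    if regime == "stable_continental":
        return "stable continental region GMPEs"
    return "active shallow crustal GMPEs"

print(select_gmpe_class("subduction", depth_km=95))          # -> intraslab class
print(select_gmpe_class("active_crustal", depth_km=10))      # -> active crustal class
```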

  3. CN earthquake prediction algorithm and the monitoring of the future strong Vrancea events

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Radulian, M.; Novikova, O.V.; Panza, G.F.

    2002-01-01

    The strong earthquakes originating at intermediate depth in the Vrancea region (located in the SE corner of the highly bent Carpathian arc) represent one of the most important natural disasters able to induce heavy effects (a high toll of casualties and extensive damage) in the Romanian territory. The occurrence of these earthquakes is irregular, but not infrequent. Their effects are felt over a large territory, from Central Europe to Moscow and from Greece to Scandinavia. The largest cultural and economic center exposed to the seismic risk due to the Vrancea earthquakes is Bucharest. This metropolitan area (230 km² wide) is characterized by the presence of 2.5 million inhabitants (10% of the country's population) and by a considerable number of high-risk structures and infrastructures. The best way to face strong earthquakes is to mitigate the seismic risk by using the two possible complementary approaches represented by (a) the antiseismic design of structures and infrastructures (able to withstand strong earthquakes without significant damage), and (b) strong earthquake prediction (in terms of alarm intervals declared for long-, intermediate- or short-term space- and time-windows). Intermediate-term medium-range earthquake prediction represents the most realistic target to be reached at the present state of knowledge. The alarm declared in this case extends over a time window of about one year or more, and a space window of a few hundred kilometers. In the case of Vrancea events the spatial uncertainty is much smaller, about 100 km. The main measures for the mitigation of the seismic risk allowed by intermediate-term medium-range prediction are: (a) verification of the stability of buildings and infrastructures, and reinforcement measures when required, (b) elaboration of emergency plans of action, (c) scheduling of the main actions required in order to restore the normality of social and economic life after the earthquake. The paper presents the

  4. Understanding and Predicting the Process of Software Maintenance Releases

    Science.gov (United States)

    Basili, Victor; Briand, Lionel; Condon, Steven; Kim, Yong-Mi; Melo, Walcelio L.; Valett, Jon D.

    1996-01-01

    One of the major concerns of any maintenance organization is to understand and estimate the cost of maintenance releases of software systems. Planning the next release so as to maximize the increase in functionality and the improvement in quality is vital to successful maintenance management. The objective of this paper is to present the results of a case study in which an incremental approach was used to better understand the effort distribution of releases and build a predictive effort model for software maintenance releases. This study was conducted in the Flight Dynamics Division (FDD) of NASA Goddard Space Flight Center (GSFC). This paper presents three main results: 1) a predictive effort model developed for the FDD's software maintenance release process; 2) measurement-based lessons learned about the maintenance process in the FDD; and 3) a set of lessons learned about the establishment of a measurement-based software maintenance improvement program. In addition, this study provides insights and guidelines for obtaining similar results in other maintenance organizations.

  5. Real-time numerical shake prediction and updating for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Wei, Yongxiang; Huang, Yandan

    2017-12-01

    Ground motion prediction is important for earthquake early warning systems, because a region's peak ground motion indicates the potential for disaster. In order to predict the peak ground motion quickly and precisely with limited station wave records, we propose a real-time numerical shake prediction and updating method. Our method first predicts the ground motion based on a ground motion prediction equation after P-wave detection at several stations, denoted as the initial prediction. In order to correct the prediction error of the initial prediction, an updating scheme based on real-time simulation of wave propagation is designed. A data assimilation technique is incorporated to predict the distribution of seismic wave energy precisely. Radiative transfer theory and Monte Carlo simulation are used for modeling wave propagation in 2-D space, and the peak ground motion is calculated as quickly as possible. Our method has the potential to predict the shakemap, so that the likely disaster can be anticipated before the real disaster happens. The 2008 Ms 8.0 Wenchuan earthquake is studied as an example to show the validity of the proposed method.

  6. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

    KAUST Repository

    Sawlan, Zaid A

    2012-01-01

    parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while

  7. Intermediate-term earthquake prediction and seismic zoning in Northern Italy

    International Nuclear Information System (INIS)

    Panza, G.F.; Orozova Stanishkova, I.; Costa, G.; Vaccari, F.

    1993-12-01

    The algorithm CN for intermediate-term earthquake prediction has been applied to an area in Northern Italy, chosen according to a recently proposed seismotectonic model. Earthquakes with magnitude ≥ 5.4 occur in the area with significant frequency, and their occurrence is predicted by algorithm CN. Therefore a seismic hazard analysis has been performed using a deterministic procedure based on the computation of complete synthetic seismograms. The results are summarized in a map giving the distribution of peak ground acceleration, but the complete time series are available and can be used by civil engineers in the design of new seismo-resistant constructions and in the retrofitting of existing ones. This risk-reduction action should be intensified in connection with warnings issued on the basis of the forward predictions made by CN. (author). Refs, 7 figs, 1 tab

  8. Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults.

    Science.gov (United States)

    McGuire, Jeffrey J; Boettcher, Margaret S; Jordan, Thomas H

    2005-03-24

    East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults. The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion.

  9. Predicted Attenuation Relation and Observed Ground Motion of Gorkha Nepal Earthquake of 25 April 2015

    Science.gov (United States)

    Singh, R. P.; Ahmad, R.

    2015-12-01

    A comparison of the observed ground motion parameters of the recent Gorkha, Nepal earthquake of 25 April 2015 (Mw 7.8) with the ground motion parameters predicted using existing attenuation relations for the Himalayan region will be presented. The earthquake took about 8000 lives, destroyed thousands of poor-quality buildings, and was felt by millions of people living in Nepal, China, India, Bangladesh, and Bhutan. Knowledge of ground motion parameters is very important in developing seismic codes for seismically prone regions like the Himalaya, for better design of buildings. The ground motion parameters recorded in the recent earthquake and its aftershocks are compared with attenuation relations for the Himalayan region; the predicted ground motion parameters show good correlation with the observed ones. The results will be of great use to civil engineers in updating existing building codes in the Himalayan and surrounding regions, and also for the evaluation of seismic hazards. The results clearly show that only attenuation relations developed for the Himalayan region should be used; attenuation relations based on other regions fail to provide good estimates of the observed ground motion parameters.

  10. Prediction of Global Damage and Reliability Based Upon Sequential Identification and Updating of RC Structures Subject to Earthquakes

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Skjærbæk, P. S.; Köylüoglu, H. U.

    The paper deals with the prediction of global damage and future structural reliability with special emphasis on the sensitivity, bias and uncertainty of these predictions, dependent on the statistically equivalent realizations of the future earthquake. The predictions are based on a modified Clough-Johnston single-degree-of-freedom (SDOF) oscillator with three parameters which are calibrated to fit the displacement response and the damage development in the past earthquake.

  11. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    Science.gov (United States)

    Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2015-12-01

    Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being

  12. CSEP-Japan: The Japanese node of the collaboratory for the study of earthquake predictability

    Science.gov (United States)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2011-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute, the University of Tokyo, joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. A total of 91 earthquake forecast models were submitted to the prospective experiment starting from 1 November 2009. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering an area of Japan including the sea area, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. The experiments of the 1-day, 3-month, 1-year and 3-year forecasting classes were implemented for 92 rounds, 4 rounds, 1 round and 0 rounds (now in progress), respectively. The results of the 3-month class gave us new knowledge concerning statistical forecasting models. All models showed good performance for magnitude forecasting. On the other hand, the observations are hardly consistent in spatial distribution with most models in some cases where many earthquakes occurred at the same spot. Throughout the experiment, it has been clarified that some properties of the CSEP evaluation tests, such as the L-test, show strong correlation with the N-test. We are now developing our own (cyber-)infrastructure to support the forecast experiment as follows. (1) Japanese seismicity has changed since the 2011 Tohoku earthquake. The 3rd call for forecasting models was announced in order to promote model improvement for forecasting earthquakes after this event. So, we provide the Japanese seismicity catalog maintained by JMA for modelers to study how seismicity

  13. A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta

    Science.gov (United States)

    Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.

    2015-12-01

    Estimation of the ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of the seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data is limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated to observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
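
A generic GMPE functional form with trilinear geometrical spreading, of the kind described above, is sketched below: a magnitude-scaled source term, three distance segments with different spreading slopes (the middle one mimicking a Moho-bounce flattening), an anelastic attenuation term, and a site term. All coefficients and hinge distances are placeholders, not the study's calibrated values.

```python
# Sketch of a generic GMPE functional form with trilinear geometrical spreading:
#   ln(Y) = source(M) + geometric spreading(R) - anelastic attenuation + site term.
# All coefficients and hinge distances below are placeholders.
import math

def ln_amp(mag, r_km, hinges=(70.0, 140.0), slopes=(-1.3, +0.2, -0.5),
           c0=-2.0, c1=1.6, kappa=0.004, site=0.0):
    g = 0.0
    r_prev = 1.0
    for hinge, slope in zip((*hinges, float("inf")), slopes):
        r_seg = min(r_km, hinge)
        if r_seg > r_prev:                       # accumulate spreading segment by segment
            g += slope * math.log(r_seg / r_prev)
            r_prev = r_seg
    return c0 + c1 * mag + g - kappa * r_km + site

print(math.exp(ln_amp(4.5, 30.0)))   # relative amplitude at 30 km for an M4.5 event
```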

  14. The ordered network structure and prediction summary for M ≥ 7 earthquakes in Xinjiang region of China

    International Nuclear Information System (INIS)

    Men, Ke-Pei; Zhao, Kai

    2014-01-01

    M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang, China, and its adjacent region since 1800. The main orderly values are 30 a × k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Guided by the information forecasting theory of Wen-Bo Weng and based on previous research results, we combine ordered network structure analysis with complex network technology and focus on the prediction summary of M ≥ 7 earthquakes using the ordered network structure, adding new information to further optimize the network and hence constructing the 2D- and 3D-ordered network structures of M ≥ 7 earthquakes. In this paper, the network structure fully revealed the regularity of seismic activity of M ≥ 7 earthquakes in the study region during the past 210 years. Based on this, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the border of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. At the same time, a new prediction opinion is presented that the next two M ≥ 7 earthquakes will probably occur around 2019-2020 and 2025-2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- and long-term prediction of M ≥ 7 earthquakes.

  15. Earthquake prediction in California using regression algorithms and cloud-based big data infrastructure

    Science.gov (United States)

    Asencio-Cortés, G.; Morales-Esteban, A.; Shang, X.; Martínez-Álvarez, F.

    2018-06-01

    Earthquake magnitude prediction is a challenging problem that has been widely studied during the last decades. Statistical, geophysical and machine learning approaches can be found in the literature, with no particularly satisfactory results. In recent years, powerful computational techniques to analyze big data have emerged, making possible the analysis of massive datasets. These new methods make use of physical resources like cloud-based architectures. California is known for being one of the regions with the highest seismic activity in the world, and many data are available. In this work, the use of several regression algorithms combined with ensemble learning is explored in the context of big data (a 1 GB catalog is used), in order to predict earthquake magnitude within the next seven days. The Apache Spark framework, the H2O library in the R language, and Amazon cloud infrastructure were used, reporting very promising results.
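
The study itself uses the Apache Spark framework and the H2O library in R; the sketch below only illustrates the idea of combining several regressors by simple averaging, using scikit-learn and synthetic seismicity indicators as stand-ins.

```python
# Illustrative stand-in for the ensemble-regression setup described above
# (scikit-learn replaces Spark/H2O; features and data are synthetic).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# hypothetical seismicity indicators for a sliding window (e.g. b-value, event rate, mean magnitude)
X = rng.normal(size=(n, 3))
y = 3.0 + 0.4 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.2, n)   # magnitude of the largest event in the next 7 days

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [RandomForestRegressor(random_state=0), GradientBoostingRegressor(random_state=0)]
preds = np.mean([m.fit(X_tr, y_tr).predict(X_te) for m in models], axis=0)  # simple averaging ensemble
print(f"mean absolute error: {np.abs(preds - y_te).mean():.3f}")
```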

  16. Inelastic spectra to predict period elongation of structures under earthquake loading

    DEFF Research Database (Denmark)

    Katsanos, Evangelos; Sextos, A.G.

    2015-01-01

    Period lengthening, exhibited by structures when subjected to strong ground motions, constitutes an implicit proxy of structural inelasticity and associated damage. However, the reliable prediction of the inelastic period is tedious and a multi-parametric task, which is related to both epistemic ...... for period lengthening as a function of Ry and Tel. These equations may be used in the framework of the earthquake record selection and scaling....

  17. Comparison of Ground Motion Prediction Equations (GMPE) for Chile and Canada With Recent Chilean Megathust Earthquakes

    Science.gov (United States)

    Herrera, C.; Cassidy, J. F.; Dosso, S. E.

    2017-12-01

    Ground shaking assessment allows quantification of the hazards associated with the occurrence of earthquakes. Chile and western Canada are two areas that have experienced, and are susceptible to, imminent large crustal, in-slab and megathrust earthquakes that can affect the population significantly. In this context, we compare the current GMPEs used in the 2015 National Building Code of Canada and the most recent GMPEs calculated for Chile with observed accelerations generated by four recent Chilean megathrust earthquakes (MW ≥ 7.7) that have occurred during the past decade, which is essential to quantify how well current models predict observations of major events. We collected the 3-component waveform data of more than 90 stations from the Centro Sismologico Nacional and the Universidad de Chile, and processed them by removing the trend and applying a band-pass filter. Then, for each station, we obtained the Peak Ground Acceleration (PGA), and by using damped response spectra, we calculated the Pseudo Spectral Acceleration (PSA). Finally, we compared those observations with the most recent Chilean and Canadian GMPEs. Given the lack of geotechnical information for most of the Chilean stations, we also used a new method to obtain VS30 by inverting the H/V ratios using a trans-dimensional Bayesian inversion, which allows us to improve the correction of observations according to soil conditions. As expected, our results show a good fit between observations and the Chilean GMPEs, but we observe that although the shape of the Canadian GMPEs is coherent with the distribution of observations, in general they underpredict the observations for PGA and PSA at shorter periods for most of the considered earthquakes. An example of this can be seen in the attached figure for the case of the 2014 Iquique earthquake. These results have important implications for the hazards associated with large earthquakes, especially for western Canada, where the probability of a

  18. WHY WE CANNOT PREDICT STRONG EARTHQUAKES IN THE EARTH’S CRUST

    Directory of Open Access Journals (Sweden)

    Iosif L. Gufeld

    2011-01-01

    Full Text Available In the past decade, earthquake disasters caused multiple fatalities and significant economic losses and challenged modern civilization. The well-known achievements and growing power of civilization are set back when facing Nature. The question arises: what hinders the solution of the earthquake prediction problem, while long-term and continuous seismic monitoring systems are in place in many regions of the world? For instance, there was no forecast of the Great Japan Earthquake of March 11, 2011, despite the fact that monitoring conditions for its prediction were unique. Its focal zone was 100–200 km away from the monitoring network installed in the area of permanent seismic hazard, which is subject to nonstop and long-term seismic monitoring. Lessons should be learned from our common fiasco in forecasting, taking into account research results obtained during the past 50–60 years. It is now evident that we failed to identify precursors of these earthquakes. Prior to the earthquake occurrence, the observed local anomalies of various fields reflected other processes that were mistakenly viewed as processes of preparation for large-scale faulting. For many years, geotectonic situations were analyzed on the basis of the physics of destruction of laboratory specimens, which was applied to lithospheric conditions. Many researchers realize that such an approach is inaccurate. Nonetheless, persistent attempts are being undertaken, with application of modern computation, to detect anomalies of various fields which may be interpreted as earthquake precursors. In our opinion, such illusory intentions were smashed by the Great Japan Earthquake (Figure 6). It is also obvious that sufficient attention has not yet been given to fundamental studies of seismic processes. This review presents the authors' opinion concerning the origin of the seismic process and strong earthquakes as part of that process. The authors realize that a wide discussion is

  19. Continuous borehole strain and pore pressure in the near field of the 28 September 2004 M 6.0 Parkfield, California, earthquake: Implications for nucleation, fault response, earthquake prediction and tremor

    Science.gov (United States)

    Johnston, M.J.S.; Borcherdt, R.D.; Linde, A.T.; Gladwin, M.T.

    2006-01-01

    Near-field observations of high-precision borehole strain and pore pressure show no indication of coherent accelerating strain or pore pressure during the weeks to seconds before the 28 September 2004 M 6.0 Parkfield earthquake. Minor changes in strain rate did occur at a few sites during the last 24 hr before the earthquake, but these changes are neither significant nor have the form expected for strain during slip coalescence initiating fault failure. Seconds before the event, strain is stable at the 10^-11 level. Final prerupture nucleation slip in the hypocentral region is constrained to have a moment less than 2 × 10^12 N m (M 2.2) and a source size less than 30 m. Ground displacement data indicate similar constraints. Localized rupture nucleation and runaway preclude useful prediction of damaging earthquakes. Coseismic dynamic strains of about 10 microstrain peak-to-peak were superimposed on volumetric strain offsets of about 0.5 microstrain to the northwest of the epicenter and about 0.2 microstrain to the southeast of the epicenter, consistent with right-lateral slip. Observed strain and Global Positioning System (GPS) offsets can be simply fit with 20 cm of slip between 4 and 10 km depth on a 20-km segment of the fault north of Gold Hill (M0 = 7 × 10^17 N m). Variable slip inversion models using GPS data and seismic data indicate similar moments. Observed postseismic strain is 60% to 300% of the coseismic strain, indicating incomplete release of accumulated strain. No measurable change in fault zone compliance preceding or following the earthquake is indicated by the stable earth tidal response. No indications of strain change accompany nonvolcanic tremor events reported prior to and following the earthquake.
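
The quoted slip model can be checked for internal consistency with the standard definition of seismic moment, M0 = μ·A·D, as in the sketch below; the rigidity value is an assumed standard crustal value, and the Hanks-Kanamori relation converts the moment to a moment magnitude.

```python
# Consistency check of the slip model quoted above: a 20 km x 6 km patch
# (4-10 km depth) with 20 cm of slip gives a seismic moment near 7e17 N m.
# Rigidity of 3e10 Pa is an assumed standard crustal value.
import math

def seismic_moment(length_km, width_km, slip_m, rigidity_pa=3.0e10):
    area_m2 = length_km * 1e3 * width_km * 1e3
    return rigidity_pa * area_m2 * slip_m            # N m

def moment_magnitude(m0_nm):
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)   # Hanks-Kanamori

m0 = seismic_moment(20.0, 6.0, 0.20)
print(f"M0 = {m0:.1e} N m, Mw = {moment_magnitude(m0):.1f}")   # ~7.2e17 N m, Mw ~ 5.8
```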

  20. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    Science.gov (United States)

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenching showed that the central Longmen Shan fault zone probably underwent an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.

  1. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    Directory of Open Access Journals (Sweden)

    Junjie Ren

    2013-01-01

    Full Text Available The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rates and paleoseismologic results. Post-seismic trenching showed that the central Longmen Shan fault zone probably underwent an event similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
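
    The recurrence estimate above is, at its core, a moment-budget division: the time needed for the quoted moment rate to re-accumulate one characteristic moment. The sketch below reproduces that arithmetic; the characteristic moment is taken from the standard Mw-M0 relation rather than from the paper's own seismogenic model, so the result only approximates the published 3900 ± 400 yr figure:

        # Moment-budget arithmetic behind a characteristic-earthquake recurrence estimate.
        # The moment rate is taken from the abstract; the characteristic moment is derived
        # here from Mw 7.9 via the Hanks-Kanamori relation, not from the paper's own model.

        moment_rate = 2.7e17                           # N m per year, from the abstract
        m0_characteristic = 10 ** (1.5 * 7.9 + 9.1)    # ~8.9e20 N m for Mw 7.9

        recurrence_years = m0_characteristic / moment_rate
        print(f"{recurrence_years:.0f} years")         # roughly 3300 yr, same order as 3900 ± 400 yr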

  2. Predicting Dynamic Response of Structures under Earthquake Loads Using Logical Analysis of Data

    Directory of Open Access Journals (Sweden)

    Ayman Abd-Elhamed

    2018-04-01

    Full Text Available In this paper, logical analysis of data (LAD) is used to predict the seismic response of building structures employing the captured dynamic responses. In order to prepare the data, computational simulations using a single degree of freedom (SDOF) building model under different ground motion records are carried out. The selected excitation records are real and of different peak ground accelerations (PGA). The sensitivity of the seismic response in terms of displacements of floors to the variation in earthquake characteristics, such as soil class, characteristic period, time step of records, peak ground displacement, and peak ground velocity, has also been considered. The dynamic equation of motion describing the building model and the applied earthquake load are presented and solved incrementally using the Runge-Kutta method. LAD then finds the characteristic patterns that lead to forecasting the seismic response of building structures. The accuracy of LAD is compared to that of an artificial neural network (ANN), since the latter is the best-known machine learning technique. Based on the conducted study, the proposed LAD model has been proven to be an efficient technique to learn, simulate, and blindly predict the dynamic response behaviour of building structures subjected to earthquake loads.
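
    The record describes solving the SDOF equation of motion incrementally with a Runge-Kutta scheme before feeding the responses to the classifier. A minimal sketch of that forward-simulation step follows; the mass, damping, period and the synthetic ground motion are placeholder values, not those of the study:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Hypothetical SDOF parameters (not from the paper): 5% damping, 0.5 s natural period.
        m = 1.0                          # normalized mass
        omega_n = 2 * np.pi / 0.5        # natural circular frequency
        k = m * omega_n ** 2             # stiffness
        c = 2 * 0.05 * m * omega_n       # 5% viscous damping

        # Synthetic ground acceleration stand-in for a real record (m/s^2).
        t_rec = np.linspace(0, 20, 2001)
        a_g = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t_rec) * np.exp(-0.2 * t_rec)

        def eom(t, y):
            """SDOF equation of motion: m*x'' + c*x' + k*x = -m*a_g(t)."""
            x, v = y
            return [v, -(c * v + k * x) / m - np.interp(t, t_rec, a_g)]

        sol = solve_ivp(eom, (0, 20), [0.0, 0.0], method="RK45", max_step=0.01)
        print("peak relative displacement [m]:", np.max(np.abs(sol.y[0])))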

  3. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    International Nuclear Information System (INIS)

    Paolucci, Roberto; Stupazzini, Marco

    2008-01-01

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, which is hard to predict and to put into a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  4. Analysis methods for predicting the behaviour of isolators and formulation of simplified models for use in predicting response of structures to earthquake type input

    International Nuclear Information System (INIS)

    2002-01-01

    This report describes the simplified models for predicting the response of high-damping natural rubber bearings (HDNRB) to earthquake ground motions and benchmark problems for assessing the accuracy of finite element analyses in designing base-isolators. (author)

  5. Use of Kazakh nuclear explosions for testing dilatancy diffusion model of earthquake prediction

    International Nuclear Information System (INIS)

    Srivastava, H.N.

    1979-01-01

    P wave travel time anomalies from Kazakh explosions during the years 1965-1972 were studied with reference to the Jeffreys-Bullen (1952) and Herrin (1968) travel time tables and discussed using the F ratio test at seven stations in Himachal Pradesh. For these events, the temporal and spatial variations of travel time residuals were examined from the point of view of long-term changes in velocity known to precede earthquakes and of local geology. The results show a preference for the Herrin travel time tables at these epicentral distances from the Kazakh explosions. The F ratio test indicated that the variation between sample means of different stations in the network was greater than can be attributed to sampling error. Although the spatial variation of mean residuals (1965-1972) could generally be explained on the basis of the local geology, the temporal variations of such residuals from the Kazakh explosions offer limited application in the testing of the dilatancy model of earthquake prediction. (auth.)

  6. Earthquake prediction using extinct monogenetic volcanoes: A possible new research strategy

    Science.gov (United States)

    Szakács, Alexandru

    2011-04-01

    Volcanoes are extremely effective transmitters of matter, energy and information from the deep Earth towards its surface. Their capacities as information carriers are still far from being fully exploited. Volcanic conduits can be viewed in general as rod-like or sheet-like vertical features with relatively homogeneous composition and structure, crosscutting geological structures of far greater complexity and compositional heterogeneity. Information-carrying signals, such as earthquake precursor signals originating deep below the Earth's surface, are transmitted with much less loss of information through homogeneous vertically extended structures than through the horizontally segmented, heterogeneous lithosphere or crust. Volcanic conduits can thus be viewed as upside-down "antennas" or waveguides which can be used as privileged pathways of any possible earthquake precursor signal. In particular, conduits of monogenetic volcanoes are promising transmitters of deep Earth information to be received and decoded at surface monitoring stations, because of the expected more homogeneous nature of their rock-fill as compared to polygenetic volcanoes. Among monogenetic volcanoes, those with dominantly effusive activity appear as the best candidates for privileged earthquake monitoring sites. In more detail, effusive monogenetic volcanic conduits filled with rocks of primitive parental magma composition, indicating direct ascent from sub-lithospheric magma-generating areas, are the most suitable. Further selection criteria may include the age of the volcanism considered and the presence of mantle xenoliths in surface volcanic products, indicating a direct and straightforward link between the deep lithospheric mantle and the surface through the conduit. Innovative earthquake prediction research strategies can be based and developed on these grounds by considering conduits of selected extinct monogenetic volcanoes and deep trans-crustal fractures as privileged emplacement sites of seismic monitoring stations

  7. State Vector: A New Approach to Prediction of the Failure of Brittle Heterogeneous Media and Large Earthquakes

    Science.gov (United States)

    Yu, Huai-Zhong; Yin, Xiang-Chu; Zhu, Qing-Yong; Yan, Yu-Ding

    2006-12-01

    The concept of the state vector stems from statistical physics, where it is usually used to describe activity patterns of a physical field in a coarse-grained manner. In this paper, we propose an approach in which the state vector is applied to describe quantitatively the damage evolution of brittle heterogeneous systems, and some interesting results are presented, i.e., prior to the macro-fracture of rock specimens and the occurrence of a strong earthquake, the evolutions of the four relevant scalar time series derived from the state vectors changed anomalously. As retrospective studies, some prominent large earthquakes that occurred in the Chinese mainland (e.g., the M 7.4 Haicheng earthquake on February 4, 1975, and the M 7.8 Tangshan earthquake on July 28, 1976) were investigated. Results show considerable promise that the time-dependent state vectors could serve as a kind of precursor to predict earthquakes.

  8. Real-time 3-D space numerical shake prediction for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, the wave is assumed to propagate on the 2-D surface of the earth in these methods. In fact, since the seismic wave propagates in the 3-D sphere of the earth, the 2-D modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate the real-time ground motion precisely, and overprediction is alleviated when using the 3-D space model.

  9. An Integrated and Interdisciplinary Model for Predicting the Risk of Injury and Death in Future Earthquakes.

    Science.gov (United States)

    Shapira, Stav; Novack, Lena; Bar-Dayan, Yaron; Aharonson-Daniel, Limor

    2016-01-01

    A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of at-risk populations were found to be more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to the occurrence of an earthquake could lead to a possible decrease in the expected number of casualties.
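
    The integration step described above amounts to combining a baseline probability from the structural loss model with odds ratios for human-related risk factors on the log-odds (logistic) scale. The sketch below shows that arithmetic with hypothetical numbers; the actual odds ratios come from the paper's meta-analysis and are not reproduced here:

        def adjust_probability(p_baseline, odds_ratios):
            """Combine a baseline casualty probability with odds ratios for risk factors
            that apply to a sub-population, on the logistic (log-odds) scale."""
            odds = p_baseline / (1.0 - p_baseline)
            for or_ in odds_ratios:
                odds *= or_
            return odds / (1.0 + odds)

        # Hypothetical values for illustration only: 2% baseline probability of severe
        # injury from the structural model, with odds ratios for old age and low
        # socioeconomic status (numbers not taken from the paper).
        print(adjust_probability(0.02, [1.8, 1.3]))   # ~0.046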

  10. Prediction of strong ground motion based on scaling law of earthquake

    International Nuclear Information System (INIS)

    Kamae, Katsuhiro; Irikura, Kojiro; Fukuchi, Yasunaga.

    1991-01-01

    In order to predict strong ground motion more practically, it is important to study how to use a semi-empirical method when no appropriate observation records of actual small events are available as empirical Green's functions. We propose a prediction procedure using artificially simulated small ground motions as substitutes for the actual motions. First, we simulate small-event motion by means of the stochastic simulation method proposed by Boore (1983), considering path effects such as attenuation and broadening of the waveform envelope empirically in the target region. Finally, we attempt to predict the strong ground motion due to a future large earthquake (M 7, Δ = 13 km) using the same summation procedure as the empirical Green's function method. We obtained the result that the characteristics of the synthetic motion using the M 5 motion were in good agreement with those obtained by the empirical Green's function method. (author)
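
    A schematic version of the first step, stochastic simulation of a small-event motion in the spirit of Boore (1983), is sketched below: windowed Gaussian noise whose spectrum is shaped by an omega-squared source term and a simple high-frequency diminution factor. All parameter values are illustrative; the paper's empirical path and envelope corrections are not reproduced:

        import numpy as np

        dt, n = 0.01, 4096
        t = np.arange(n) * dt
        fc = 2.0                 # source corner frequency (Hz), hypothetical for an M~5 event
        kappa = 0.03             # near-site attenuation parameter (s), hypothetical

        rng = np.random.default_rng(0)
        window = t * np.exp(-t / 2.0)            # simple envelope shaping
        window /= window.max()
        x = rng.standard_normal(n) * window

        # Shape the spectrum: omega-squared acceleration source * exp(-pi*kappa*f) diminution
        freq = np.fft.rfftfreq(n, dt)
        spec = np.fft.rfft(x)
        shape = (freq ** 2 / (1.0 + (freq / fc) ** 2)) * np.exp(-np.pi * kappa * freq)
        acc = np.fft.irfft(spec * shape, n)

        print("peak 'acceleration' (arbitrary units):", np.abs(acc).max())

    Synthetic small-event motions generated this way can then be delayed, scaled and summed exactly as in the empirical Green's function method to synthesize the target large event.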

  11. Predicting speech release from masking through spatial separation in distance

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Dau, Torsten

    2014-01-01

    of spatial release from masking (SRM) where the masker is moved, on-axis, away from the target. Two binaural models, which use the conventional audio signal-to-noise ratio (SNR) in the decision metric, and two monaural models, using a decision metric based on the SNR in the envelope domain (SNRenv), were...... considered. The predictions were compared to data from Westermann et al. [2013, POMA, 19, 050156] in conditions where the target was located 0.5 m in front of the listener and the masker was presented at a distance of 0.5, 2, 5 or 10 m in front of the listener. The data showed an SRM of 10 dB when moving...... the masker from a distance of 0.5 m to a distance of 10 m. The long-term monaural model based on the SNRenv metric was able to account for most of the SRM data, whereas the models that used the audio SNR did not predict any SRM, even when they included an equalization-cancellation-like process. The short

  12. Earthquake Forecasting Methodology Catalogue - A collection and comparison of the state-of-the-art in earthquake forecasting and prediction methodologies

    Science.gov (United States)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2015-04-01

    Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The different methods have been categorized into time-independent, time-dependent and hybrid methods, the last group representing methods in which data other than historical earthquake statistics have been used. It is necessary to distinguish in this way between pure statistical approaches, where historical earthquake data represent the only direct data source, and algorithms which incorporate further information, e.g. spatial data of fault distributions, or which incorporate physical models like static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account to identify methods which can be applied e.g. in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g. for the determination of the completeness magnitude, or whether the modified Omori law was used or not. Target temporal scales are identified as well as the publication history. All these different aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and to give an overview of the state-of-the-art.

  13. The ordered network structure of M {>=} 6 strong earthquakes and its prediction in the Jiangsu-South Yellow Sea region

    Energy Technology Data Exchange (ETDEWEB)

    Men, Ke-Pei [Nanjing Univ. of Information Science and Technology (China). College of Mathematics and Statistics; Cui, Lei [California Univ., Santa Barbara, CA (United States). Applied Probability and Statistics Dept.

    2013-05-15

    The Jiangsu-South Yellow Sea region is one of the key seismic monitoring defence areas in the eastern part of China. Since 1846, M ≥ 6 strong earthquakes have shown an obvious commensurability and orderliness in this region. The main orderly values are 74–75 a, 57–58 a, 11–12 a, and 5–6 a, wherein 74–75 a and 57–58 a play an outstanding predictive role. According to the information prediction theory of Wen-Bo Weng, we conceived the M ≥ 6 strong earthquake ordered network structure for the South Yellow Sea and the whole region. Based on this, we analyzed and discussed the variation of seismicity in detail and also made a trend prediction of M ≥ 6 strong earthquakes in the future. The results showed that since 1998 the region has entered a new quiet episode, which may continue until about 2042; the first M ≥ 6 strong earthquake in the next active episode will probably occur around 2053, most likely in the sea area of the South Yellow Sea; the second and the third ones, or a strong earthquake swarm, will probably occur around 2058 and 2070. (orig.)

  14. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Science.gov (United States)

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

    The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log (Distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for Santa Clara Formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
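
    The two empirical relations quoted above can be combined into a small site calculator. The sketch below simply re-evaluates the coefficients as given in the record for a hypothetical site; the distance and amplification values are illustrative, and the result is on the intensity scale used by the authors:

        import math

        # Re-evaluation of the two empirical relations quoted in the record; the coefficients
        # and increments are taken directly from the abstract, not derived independently.
        def predicted_intensity(distance_km, ahsa):
            base = 2.69 - 1.90 * math.log10(distance_km)   # Franciscan Formation sites
            increment = 0.27 + 2.70 * math.log10(ahsa)     # geologic-unit correction
            return base + increment

        # Example: a hypothetical site 10 km from the fault with a spectral amplification of 4
        print(predicted_intensity(10.0, 4.0))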

  15. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Energy Technology Data Exchange (ETDEWEB)

    Borcherdt, R.D.; Gibbs, J.F.

    1975-01-01

    The intensity data for the California earthquake of Apr 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan formation is intensity = 2.69 - 1.90 log (distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the average horizontal spectral amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is intensity increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan formation, 0.64 for the Great Valley sequence, 0.82 for Santa Clara formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.

  16. Moment Magnitudes and Local Magnitudes for Small Earthquakes: Implications for Ground-Motion Prediction and b-values

    Science.gov (United States)

    Baltay, A.; Hanks, T. C.; Vernon, F.

    2016-12-01

    We illustrate two essential consequences of the systematic difference between moment magnitude and local magnitude for small earthquakes, illuminating the underlying earthquake physics. Moment magnitude, M ∝ 2/3 log M0, is uniformly valid for all earthquake sizes [Hanks and Kanamori, 1979]. However, the relationship between local magnitude ML and moment is itself magnitude dependent: for moderate events ML and M track one another, while for small earthquakes ML scales differently with moment because their recorded ground motions are band-limited by fmax rather than by the source corner frequency. Just as importantly, if this relation is overlooked, prediction of large-magnitude ground motion from small earthquakes will be misguided. We also consider the effect of this magnitude-scale difference on the b-value. The oft-cited b-value of 1 should hold for small magnitudes, given M. Use of ML necessitates b = 2/3 for the same data set; use of mixed, or unknown, magnitudes complicates the matter further. This is of particular import when estimating the rate of large earthquakes when one has limited data on their recurrence, as is the case for induced earthquakes in the central US.
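
    The b-value consequence can be illustrated numerically: if small-earthquake local magnitudes stretch the moment-magnitude scale by a factor of 3/2, a catalogue with b = 1 in M yields an apparent b of 2/3 in ML. The sketch below assumes that 3/2 scaling, as implied by the record, and uses the Aki maximum-likelihood estimator on a synthetic catalogue:

        import numpy as np

        rng = np.random.default_rng(1)
        b_true, m_min = 1.0, 0.0
        beta = b_true * np.log(10.0)
        m = m_min + rng.exponential(1.0 / beta, size=200_000)   # Gutenberg-Richter sample, b = 1

        def b_value(mags, m_c):
            """Aki (1965) maximum-likelihood b-value estimate above completeness m_c."""
            sel = mags[mags >= m_c]
            return np.log10(np.e) / (sel.mean() - m_c)

        ml = 1.5 * m          # assumed small-event local magnitudes (constant offset omitted)
        print("b from moment magnitude:", round(b_value(m, 0.0), 2))    # ~1.0
        print("b from local magnitude: ", round(b_value(ml, 0.0), 2))   # ~0.67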

  17. Prediction of HAMR Debris Population Distribution Released from GEO Space

    Science.gov (United States)

    Rosengren, A.; Scheeres, D.

    2012-09-01

    The high area-to-mass ratio (HAMR) debris population is thought to have origins in the GEO region. Many of these objects are uncharacterized with apparent area-to-mass ratios of up to 30 meters squared per kilogram. The orbits of HAMR objects are highly perturbed due to the combined effect of solar radiation pressure (SRP), anomalies of the Earth gravitational field, and third-body gravitational interactions induced by the Sun and the Moon. A sound understanding of their nature, orbital evolution, and possible origin is critical for space situational awareness. The study of the orbital evolution of HAMR objects, taking into account both short-period and long-period terms, requires numerical integration of the precise set of differential equations, and the investigation of a broad range of possible parameter values. However, such computations become very costly when continuously applied over a period of several decades, as is necessary in the case of HAMR debris. It therefore seems reasonable to investigate the equations that govern the long-term behavior of orbits; such equations can be derived by the method of averaging. We have validated a semi-analytical averaged theory of HAMR object orbit evolution against high precision numerical integrations, and are able to capture the extreme dynamical behaviors reported for these objects. This new averaged model, explicitly given in terms of the eccentricity and angular momentum vectors, is several hundred times faster to numerically integrate than the non-averaged Newtonian counterpart, and provides a very accurate description of the long-term behavior. Using this model, it is possible to make predictions of how a population of HAMR objects, released into GEO orbit, will evolve over time. Our earlier analyses revealed that the population would have a range of orbits much different than circular GEO. Their orbits will suffer a sub-yearly oscillation in the eccentricity and inclination evolutions, and a longer-term drift

  18. Drug release kinetic analysis and prediction of release data via polymer molecular weight in sustained release diltiazem matrices.

    Science.gov (United States)

    Adibkia, K; Ghanbarzadeh, S; Mohammadi, G; Khiavi, H Z; Sabzevari, A; Barzegar-Jalali, M

    2014-03-01

    This study was conducted to investigate the effects of HPMC (K4M and K100M) as well as tragacanth on the drug release rate of diltiazem (DLTZ) from matrix tablets prepared by the direct compression method. The mechanism of drug transport through the matrices was studied by fitting the release data to 10 kinetic models. Three model-independent parameters, i.e., mean dissolution time (MDT), mean release rate (MRR) and release rate efficacy (RE), as well as 5 time-point approaches, were established to compare the dissolution profiles. To find a correlation between the fraction of drug released and the polymer's molecular weight, the dissolution data were fitted to two proposed equations. All polymers could sustain drug release up to 10 h. The release data were fitted best by the Peppas and Higuchi square root kinetic models, considering the squared correlation coefficient and mean percent error (MPE). RE and MRR decreased when the polymer to drug ratio was increased. Conversely, t60% increased with increasing polymer/drug ratio. The fractions of drug released from the formulations prepared with tragacanth were greater than those from formulations using the same amount of HPMC K4M and HPMC K100M. Preparation of DLTZ matrices applying HPMC K4M, HPMC K100M and tragacanth could effectively extend the drug release. © Georg Thieme Verlag KG Stuttgart · New York.
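
    Fitting release profiles to the Higuchi (Q = k t^1/2) and Korsmeyer-Peppas (Q = k t^n) models, two of the kinetic models named in the record, reduces to nonlinear least squares. The sketch below uses hypothetical dissolution data, not the study's measurements:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical dissolution data: time in hours, cumulative fraction released.
        t = np.array([0.5, 1, 2, 3, 4, 6, 8, 10], dtype=float)
        q = np.array([0.12, 0.18, 0.27, 0.33, 0.39, 0.49, 0.57, 0.63])

        def higuchi(t, k):
            return k * np.sqrt(t)

        def peppas(t, k, n):
            return k * t ** n

        for name, f, p0 in [("Higuchi", higuchi, [0.2]), ("Korsmeyer-Peppas", peppas, [0.2, 0.5])]:
            popt, _ = curve_fit(f, t, q, p0=p0)
            r2 = 1 - np.sum((q - f(t, *popt)) ** 2) / np.sum((q - q.mean()) ** 2)
            print(name, np.round(popt, 3), "R^2 =", round(r2, 4))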

  19. The potential of continuous, local atomic clock measurements for earthquake prediction and volcanology

    Directory of Open Access Journals (Sweden)

    Bondarescu Mihai

    2015-01-01

    Full Text Available Modern optical atomic clocks, along with the optical fiber technology currently being developed, can measure the geoid, which is the equipotential surface that extends the mean sea level on continents, to a precision that competes with existing technology. In this proceeding, we point out that atomic clocks have the potential not only to map the sea level surface on continents, but also to look at variations of the geoid as a function of time with unprecedented timing resolution. The local time series of the geoid has a plethora of applications. These include potential improvements in the prediction of earthquakes and volcanic eruptions, and closer monitoring of ground uplift in areas where hydraulic fracturing is performed.
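
    The sensitivity behind clock-based geoid monitoring follows from the gravitational redshift: a vertical displacement dh of the ground (and clock) changes the potential by g·dh and shifts the clock's fractional frequency by g·dh/c². A one-line check, illustrative only:

        g = 9.81            # m/s^2
        c = 299_792_458.0   # m/s

        def fractional_shift(dh_m):
            """Fractional clock frequency shift for a vertical displacement dh (metres)."""
            return g * dh_m / c ** 2

        print(fractional_shift(0.01))   # 1 cm of uplift -> ~1.1e-18, near the precision of the best optical clocks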

  20. Earthquake Prediction Analysis Based on Empirical Seismic Rate: The M8 Algorithm

    International Nuclear Information System (INIS)

    Molchan, G.; Romashkova, L.

    2010-07-01

    The quality of space-time earthquake prediction is usually characterized by a two-dimensional error diagram (n, τ), where n is the rate of failures-to-predict and τ is the normalized measure of space-time alarm. The most reasonable space measure for analysis of a prediction strategy is the rate of target events λ(dg) in a sub-area dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M ≥ 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw ≥ 5.5, 1977-2004, and the magnitude range of target events 8.0 ≤ M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm. (author)
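
    The error-diagram quantities defined above are straightforward to compute once alarms and target events are tabulated. The sketch below evaluates n, τ and H for a hypothetical alarm set, with the space-time measure weighted by a reference event rate as the record suggests:

        import numpy as np

        def error_diagram(alarm_mask, targets_in_alarm, n_targets, space_time_weight):
            """Compute the (n, tau) error-diagram point and H = 1 - (n + tau).

            alarm_mask        : boolean array over space-time cells, True where an alarm is declared
            targets_in_alarm  : number of target events falling inside declared alarms
            n_targets         : total number of target events
            space_time_weight : weight of each cell (e.g. the normalized target-event rate lambda(dg))
            """
            n = 1.0 - targets_in_alarm / n_targets                    # rate of failures-to-predict
            tau = np.sum(space_time_weight[alarm_mask]) / np.sum(space_time_weight)
            return n, tau, 1.0 - (n + tau)

        # Hypothetical example: 10 cells weighted by a reference event-rate measure,
        # alarms declared in 3 of them, 4 of 5 target events captured.
        w = np.array([0.05, 0.1, 0.2, 0.05, 0.1, 0.1, 0.15, 0.1, 0.1, 0.05])
        alarms = np.array([False, True, True, False, False, False, True, False, False, False])
        print(error_diagram(alarms, 4, 5, w))   # n = 0.2, tau = 0.45, H = 0.35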

  1. Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2010-12-01

    The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.

  2. The USGS plan for short-term prediction of the anticipated Parkfield earthquake

    Science.gov (United States)

    Bakun, W.H.

    1988-01-01

    Aside from the goal of better understanding the Parkfield earthquake cycle, it is the intention of the U.S. Geological Survey to attempt to issue a warning shortly before the anticipated earthquake. Although short-term earthquake warnings are not yet generally feasible, the wealth of information available for the previous significant Parkfield earthquakes suggests that if the next earthquake follows the pattern of "characteristic" Parkfield shocks, such a warning might be possible. Focusing on earthquake precursors reported for the previous "characteristic" shocks, particularly the 1934 and 1966 events, the USGS developed a plan in late 1985 on which to base earthquake warnings for Parkfield and has assisted State, county, and local officials in the Parkfield area to prepare a coordinated, reasonable response to a warning, should one be issued.

  3. Fossil intermediate-depth earthquakes in subducting slabs linked to differential stress release

    Science.gov (United States)

    Scambelluri, Marco; Pennacchioni, Giorgio; Gilio, Mattia; Bestmann, Michel; Plümper, Oliver; Nestola, Fabrizio

    2017-12-01

    The cause of intermediate-depth (50-300 km) seismicity in subduction zones is uncertain. It is typically attributed either to rock embrittlement associated with fluid pressurization, or to thermal runaway instabilities. Here we document glassy pseudotachylyte fault rocks—the products of frictional melting during coseismic faulting—in the Lanzo Massif ophiolite in the Italian Western Alps. These pseudotachylytes formed at subduction-zone depths of 60-70 km in poorly hydrated to dry oceanic gabbro and mantle peridotite. This rock suite is a fossil analogue to an oceanic lithospheric mantle that undergoes present-day subduction. The pseudotachylytes locally preserve high-pressure minerals that indicate an intermediate-depth seismic environment. These pseudotachylytes are important because they are hosted in a near-anhydrous lithosphere free of coeval ductile deformation, which excludes an origin by dehydration embrittlement or thermal runaway processes. Instead, our observations indicate that seismicity in cold subducting slabs can be explained by the release of differential stresses accumulated in strong dry metastable rocks.

  4. Predictive property models for use in design of controlled release of pesticides

    DEFF Research Database (Denmark)

    Suné, Nuria Muro; Gani, Rafiqul; Bell, G.

    2005-01-01

    A model capable of predicting the release of an Active Ingredient (AI) from a specific device would be very useful in the field of pesticide controlled release technology for design purposes. For the release of an AI from a microcapsule a mathematical model is briefly presented here, as an introd...

  5. Prediction of Global and Localized Damage and Future Reliability for RC Structures subject to Earthquakes

    DEFF Research Database (Denmark)

    Köyluoglu, H.U.; Nielsen, Søren R.K.; Cakmak, A.S.

    1997-01-01

    the arrival of the first earthquake from non-destructive vibration tests or via structural analysis. The previous excitation and displacement response time series is employed for the identification of the instantaneous softening using an ARMA model. The hysteresis parameters are updated after each earthquake....... The proposed model is next generalized for the MDOF system. Using the adapted models for the structure and the global damage state, the global damage in a future earthquake can then be estimated when a suitable earthquake model is applied. The performance of the model is illustrated on RC frames which were...

  6. Prediction of Global and Localized Damage and Future Reliability for RC Structures subject to Earthquakes

    DEFF Research Database (Denmark)

    Köyluoglu, H.U.; Nielsen, Søren R.K.; Cakmak, A.S.

    1994-01-01

    the arrival of the first earthquake from non-destructive vibration tests or via structural analysis. The previous excitation and displacement response time series is employed for the identification of the instantaneous softening using an ARMA model. The hysteresis parameters are updated after each earthquake....... The proposed model is next generalized for the MDOF system. Using the adapted models for the structure and the global damage state, the global damage in a future earthquake can then be estimated when a suitable earthquake model is applied. The performance of the model is illustrated on RC frames which were...

  7. Predicting the distribution of contamination from a chlorinated hydrocarbon release

    Energy Technology Data Exchange (ETDEWEB)

    Lupo, M.J. [K.W. Brown Environmental Services, College Station, TX (United States); Moridis, G.J. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)

    1995-03-01

    The T2VOC model with the T2CG1 conjugate gradient package was used to simulate the motion of a dense chlorinated hydrocarbon plume released from an industrial plant. The release involved thousands of kilograms of trichloroethylene (TCE) and other chemicals that were disposed of onsite over a period of nearly twenty years. After the disposal practice ceased, an elongated plume was discovered. Because much of the plume underlies a developed area, it was of interest to study the migration history of the plume to determine the distribution of the contamination.

  8. Estimated airborne release of plutonium from the 102 Building at the General Electric Vallecitos Nuclear Center, Vallecitos, California, as a result of postulated damage from severe wind and earthquake hazard

    International Nuclear Information System (INIS)

    Mishima, J.; Ayer, J.E.; Hays, I.D.

    1980-12-01

    This report estimates the potential airborne releases of plutonium as a consequence of various severities of earthquake and wind hazard postulated for the 102 Building at the General Electric Vallecitos Nuclear Center in California. The releases are based on damage scenarios developed by other specialists. The hazard severities presented range up to a nominal velocity of 230 mph for wind hazard and are in excess of 0.8 g linear acceleration for earthquakes. The consequences of thrust faulting are considered. The approaches and factors used to estimate the releases are discussed. Release estimates range from 0.003 to 3 g Pu.

  9. Predicted Liquefaction in the Greater Oakland and Northern Santa Clara Valley Areas for a Repeat of the 1868 Hayward Earthquake

    Science.gov (United States)

    Holzer, T. L.; Noce, T. E.; Bennett, M. J.

    2008-12-01

    Probabilities of surface manifestations of liquefaction due to a repeat of the 1868 (M6.7-7.0) earthquake on the southern segment of the Hayward Fault were calculated for two areas along the margin of San Francisco Bay, California: greater Oakland and the northern Santa Clara Valley. Liquefaction is predicted to be more common in the greater Oakland area than in the northern Santa Clara Valley owing to the presence of 57 km2 of susceptible sandy artificial fill. Most of the fills were placed into San Francisco Bay during the first half of the 20th century to build military bases, port facilities, and shoreline communities like Alameda and Bay Farm Island. Probabilities of liquefaction in the area underlain by this sandy artificial fill range from 0.2 to ~0.5 for a M7.0 earthquake, and decrease to 0.1 to ~0.4 for a M6.7 earthquake. In the greater Oakland area, liquefaction probabilities generally are less than 0.05 for Holocene alluvial fan deposits, which underlie most of the remaining flat-lying urban area. In the northern Santa Clara Valley for a M7.0 earthquake on the Hayward Fault and an assumed water-table depth of 1.5 m (the historically shallowest water level), liquefaction probabilities range from 0.1 to 0.2 along Coyote and Guadalupe Creeks, but are less than 0.05 elsewhere. For a M6.7 earthquake, probabilities are greater than 0.1 along Coyote Creek but decrease along Guadalupe Creek to less than 0.1. Areas with high probabilities in the Santa Clara Valley are underlain by latest Holocene alluvial fan levee deposits where liquefaction and lateral spreading occurred during large earthquakes in 1868 and 1906. The liquefaction scenario maps were created with ArcGIS ModelBuilder. Peak ground accelerations first were computed with the new Boore and Atkinson NGA attenuation relation (2008, Earthquake Spectra, 24:1, p. 99-138), using VS30 to account for local site response. Spatial liquefaction probabilities were then estimated using the predicted ground motions

  10. The East Aegean Sea strong earthquake sequence of October–November 2005: lessons learned for earthquake prediction from foreshocks

    Directory of Open Access Journals (Sweden)

    G. A. Papadopoulos

    2006-01-01

    Full Text Available The seismic sequence of October–November 2005 in the Samos area, East Aegean Sea, was studied with the aim of showing how it is possible to establish criteria for (a) the rapid recognition of both the ongoing foreshock activity and the mainshock, and (b) the rapid discrimination between the foreshock and aftershock phases of activity. It has been shown that before the mainshock of 20 October 2005, foreshock activity is not recognizable in the standard earthquake catalogue. However, a detailed examination of the records at the SMG station, which is the closest to the activated area, revealed that hundreds of small shocks not listed in the standard catalogue were recorded in the time interval from 12 October 2005 up to 21 November 2005. The production of reliable relations between seismic signal duration and duration magnitude for earthquakes included in the standard catalogue made it possible to use signal durations in SMG records and to determine duration magnitudes for 2054 small shocks not included in the standard catalogue. In this way a new catalogue with magnitude determination for 3027 events was obtained, while the standard catalogue contains 1025 events. At least 55 of them occurred from 12 October 2005 up to the occurrence of the two strong foreshocks of 17 October 2005. This implies that foreshock activity developed a few days before the strong shocks of 17 October 2005 but escaped recognition by the routine procedure of seismic analysis. The onset of the foreshock phase of activity is recognizable by the significant increase of the mean seismicity rate, which increased exponentially with time. According to the least-squares approach, the b-value of the magnitude-frequency relation dropped significantly during the foreshock activity with respect to the b-value prevailing in the declustered background seismicity. However, the maximum likelihood approach does not indicate such a drop of b. The b-value found for the aftershocks that

  11. Protracted releases: inferring source terms and predicting dispersal

    International Nuclear Information System (INIS)

    Vamanu, D.V.

    1988-02-01

    Analytical solutions are given to the transport-diffusion equation for archetypal atmospheric protracted releases featuring fronts of initiation, culminations, and tails of extinction. The interplay of the fitting parameters ensures that the model accommodates a wide typology of events, approaching in the extremes the instantaneous puff of the Lagrangian models and the continuous stack emission of the Gaussian models, respectively. (author)

  12. Ground Motion Prediction for Great Interplate Earthquakes in Kanto Basin Considering Variation of Source Parameters

    Science.gov (United States)

    Sekiguchi, H.; Yoshimi, M.; Horikawa, H.

    2011-12-01

    Broadband ground motions are estimated in the Kanto sedimentary basin, which contains the Tokyo metropolitan area, for anticipated great interplate earthquakes along the surrounding plate boundaries. Possible scenarios of great earthquakes along the Sagami trough are modeled by combining characteristic properties of the source area with adequate variation in source parameters, in order to evaluate the possible ground motion variation due to the next Kanto earthquake. South of the rupture area of the 2011 Tohoku earthquake along the Japan trench, we consider a possible M8 earthquake. The ground motions are computed with a four-step hybrid technique. We first calculate low-frequency ground motions at the engineering basement. We then calculate higher-frequency ground motions at the same position, and combine the lower- and higher-frequency motions using a matched filter. We finally calculate ground motions at the surface by computing the response of the alluvium-diluvium layers to the combined motions at the engineering basement.
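
    The matched-filter step of the hybrid technique, combining the deterministic low-frequency and stochastic high-frequency motions at the engineering basement, can be sketched as a complementary pair of filters around a crossover frequency. The crossover frequency, filter order and stand-in traces below are illustrative choices, not those of the study:

        import numpy as np
        from scipy.signal import butter, filtfilt

        dt = 0.01
        t = np.arange(0, 40, dt)
        lf_trace = np.sin(2 * np.pi * 0.3 * t) * np.exp(-0.05 * t)         # stand-in LF synthetic
        hf_trace = np.random.default_rng(0).standard_normal(t.size) * 0.2  # stand-in HF synthetic

        f_cross, nyq = 1.0, 0.5 / dt
        b_lo, a_lo = butter(4, f_cross / nyq, btype="low")
        b_hi, a_hi = butter(4, f_cross / nyq, btype="high")

        # Low-pass the LF trace, high-pass the HF trace, and sum to form the broadband motion.
        broadband = filtfilt(b_lo, a_lo, lf_trace) + filtfilt(b_hi, a_hi, hf_trace)
        print("broadband peak amplitude:", np.abs(broadband).max())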

  13. The bayesian probabilistic prediction of the next earthquake in the ometepec segment of the mexican subduction zone

    Science.gov (United States)

    Ferraes, Sergio G.

    1992-06-01

    A predictive equation to estimate the interoccurrence time (τ) of the next earthquake (M ≥ 6) in the Ometepec segment is presented, based on Bayes' theorem and the Gaussian process. Bayes' theorem is used to relate the Gaussian process to both a log-normal distribution of recurrence times (τ) and a log-normal distribution of magnitudes (M) (Nishenko and Buland, 1987; Lomnitz, 1964). We constructed two new random variables X = ln M and Y = ln τ with normal marginal densities, and based on the Gaussian process model we assume that their joint density is normal. Using this information, we determine the Bayesian conditional probability. Finally, a predictive equation is derived, based on the criterion of maximization of the Bayesian conditional probability. The model forecasts the next interoccurrence time, conditional on the magnitude of the last event. Realistic estimates of future damaging earthquakes are based on relocated historical earthquakes. However, at the present time there is a controversy between Nishenko-Singh and González-Ruíz-McNally concerning the rupturing process of the 1907 earthquake. We use our Bayesian analysis to examine and discuss this very important controversy. To clarify the full significance of the analysis, we put forward the results using two catalogues: (1) the Ometepec catalogue without the 1907 earthquake (González-Ruíz-McNally), and (2) the Ometepec catalogue including the 1907 earthquake (Nishenko-Singh). The comparison of the prediction errors reveals that for the Nishenko-Singh catalogue the errors are considerably smaller than the average error for the González-Ruíz-McNally catalogue of relocated events. Finally, using the Nishenko-Singh catalogue, which locates the 1907 event inside the Ometepec segment, we conclude that the next expected damaging earthquake (M ≥ 6.0) will occur approximately within the next time interval τ = 11.82 years from the last event (which occurred on July 2, 1984), or equivalently will
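
    The core of the forecast is the conditional density of Y = ln τ given X = ln M under a bivariate normal model. A minimal sketch follows, with placeholder catalogue values rather than the Ometepec data; the point forecast is taken at the maximum of the conditional density:

        import numpy as np

        # Placeholder catalogue: magnitudes of past events and the intervals that followed them (years).
        lnM = np.log(np.array([6.2, 6.8, 6.4, 7.0, 6.5, 6.9]))
        lnT = np.log(np.array([9.0, 16.0, 11.0, 18.0, 12.0, 17.0]))

        mu_x, mu_y = lnM.mean(), lnT.mean()
        sx, sy = lnM.std(ddof=1), lnT.std(ddof=1)
        rho = np.corrcoef(lnM, lnT)[0, 1]

        def forecast_tau(last_magnitude):
            """Maximum of the conditional normal density of Y = ln(tau) given X = ln(M)."""
            y_cond = mu_y + rho * (sy / sx) * (np.log(last_magnitude) - mu_x)
            return np.exp(y_cond)

        print(forecast_tau(6.6), "years to the next M >= 6 event (illustrative data only)")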

  14. Earthquakes: hydrogeochemical precursors

    Science.gov (United States)

    Ingebritsen, Steven E.; Manga, Michael

    2014-01-01

    Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

  15. Release Early, Release Often: Predicting Change in Versioned Knowledge Organization Systems on the Web

    OpenAIRE

    Meroño-Peñuela, Albert; Guéret, Christophe; Schlobach, Stefan

    2015-01-01

    The Semantic Web is built on top of Knowledge Organization Systems (KOS) (vocabularies, ontologies, concept schemes) that provide structured, interoperable and distributed access to Linked Data on the Web. The maintenance of these KOS over time has produced a number of KOS version chains: subsequent unique version identifiers to unique states of a KOS. However, the release of new KOS versions poses challenges to both KOS publishers and users. For publishers, updating a KOS is a knowledge int...

  16. Ground water and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ts' ai, T H

    1977-11-01

    Chinese folk wisdom has long seen a relationship between ground water and earthquakes. Before an earthquake there is often an unusual change in the ground water level and volume of flow. Changes in the amount of particulate matter in ground water as well as changes in color, bubbling, gas emission, and noises and geysers are also often observed before earthquakes. Analysis of these features can help predict earthquakes. Other factors unrelated to earthquakes can cause some of these changes, too. As a first step it is necessary to find sites which are sensitive to changes in ground stress to be used as sensor points for predicting earthquakes. The necessary features are described. Recording of seismic waves of earthquake aftershocks is also an important part of earthquake predictions.

  17. Intermediate-term medium-range earthquake prediction algorithm M8: A new spatially stabilized application in Italy

    International Nuclear Information System (INIS)

    Romashkova, L.L.; Kossobokov, V.G.; Peresan, A.; Panza, G.F.

    2001-12-01

    A series of experiments, based on the intermediate-term earthquake prediction algorithm M8, has been performed for the retrospective simulation of forward predictions in the Italian territory, with the aim of designing an experimental routine for real-time predictions. These experiments evidenced two main difficulties for the application of M8 in Italy. The first one is due to the fact that regional catalogues are usually limited in space. The second one concerns a certain arbitrariness and instability with respect to the positioning of the circles of investigation. Here we design a new scheme for the application of the algorithm M8, which is less subjective and less sensitive to the position of the circles of investigation. To perform this test, we consider a recent revision of the Italian catalogue, named UCI2001, composed of CCI1996, NEIC and ALPOR data for the period 1900-1985 and updated with NEIC data thereafter; this reduces the spatial heterogeneity of the data at the boundaries of Italy. The new variant of the M8 algorithm application reduces the number of spurious alarms and increases the reliability of predictions. As a result, three out of four earthquakes with magnitude Mmax larger than 6.0 are predicted in the retrospective simulation of the forward prediction during the period 1972-2001, with a space-time volume of alarms comparable to that obtained with the non-stabilized variant of the M8 algorithm in Italy. (author)

  18. On a report that the 2012 M 6.0 earthquake in Italy was predicted after seeing an unusual cloud formation

    Science.gov (United States)

    Thomas, J.N.; Masci, F; Love, Jeffrey J.

    2015-01-01

    Several recently published reports have suggested that semi-stationary linear-cloud formations might be causally precursory to earthquakes. We examine the report of Guangmeng and Jie (2013), who claim to have predicted the 2012 M 6.0 earthquake in the Po Valley of northern Italy after seeing a satellite photograph (a digital image) showing a linear-cloud formation over the eastern Apennine Mountains of central Italy. From inspection of 4 years of satellite images we find numerous examples of linear-cloud formations over Italy. A simple test shows no obvious statistical relationship between the occurrence of these cloud formations and earthquakes that occurred in and around Italy. All of the linear-cloud formations we have identified in satellite images, including that which Guangmeng and Jie (2013) claim to have used to predict the 2012 earthquake, appear to be orographic – formed by the interaction of moisture-laden wind flowing over mountains. Guangmeng and Jie (2013) have not clearly stated how linear-cloud formations can be used to predict the size, location, and time of an earthquake, and they have not published an account of all of their predictions (including any unsuccessful predictions). We are skeptical of the validity of the claim by Guangmeng and Jie (2013) that they have managed to predict any earthquakes.

  19. Design and Optimization of a Telemetric system for appliance in earthquake prediction

    Science.gov (United States)

    Bogdos, G.; Tassoulas, E.; Vereses, A.; Papapanagiotou, A.; Filippi, K.; Koulouras, G.; Nomicos, C.

    2009-04-01

    This project's aim is to design a telemetric system able to collect data from a digitizer, transform the data into an appropriate form, and transfer them to a central system where on-line data processing takes place. On-line mathematical processing (fractal analysis) of pre-seismic electromagnetic signals and instant display may lead to safe earthquake prediction methodologies. Ad-hoc connections and heterogeneous topologies form the core network, while wired and wireless links cooperate for accurate and on-time transmission. The data are considered very sensitive, so transmission needs to be instant. All stations are situated in rural places in order to prevent electromagnetic interference; this requires continuous monitoring and the provision of backup data links. The central stations collect the data of every station and allocate them properly in a predefined database. Special software is designed to process the incoming data mathematically and export them graphically. The development work included digitizer design, workstation software design, a transmission protocol study and simulation in OPNET, database programming, mathematical data processing and software development for graphical representation. The whole package was tested under laboratory conditions and then in real conditions. The main interest for the scientific community lies in the prospect of this platform eventually being implemented and installed across the Greek countryside on a large scale. The platform is designed in such a way that data mining techniques and mathematical processing are possible and any extension can be adapted. The main specialization of this project is that these mechanisms and mathematical transformations can be applied to live data. This can help rapid exploitation of the real meaning of the measured and stored data. The primary intention of this study is to help and simplify the analysis process

  20. CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning

    Science.gov (United States)

    Bose, Maren; Graves, Robert; Gill, David; Callaghan, Scott; Maechling, Phillip J.

    2014-01-01

    Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (< 0.5 Hz) simulations of more than 415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a 'proof of concept', being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (∼20 per cent) of CyberShake simulations, but can explain MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least 'moderate', 'strong' or 'very strong' shaking in the Los Angeles (LA) basin. These thresholds are used to construct a simple and robust EEW algorithm: to
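
    The regression step, learning site-specific intensity from a handful of source parameters with support vector regression, can be sketched as below. The feature names follow the abstract (hypocentre, magnitude, rupture ratio); the training data are random placeholders rather than CyberShake simulations, and the scikit-learn pipeline is an assumed tool choice, not the authors':

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = np.column_stack([
            rng.uniform(-119, -117, 500),   # hypocentre longitude
            rng.uniform(33, 35, 500),       # hypocentre latitude
            rng.uniform(5, 20, 500),        # hypocentre depth (km)
            rng.uniform(6.5, 8.5, 500),     # magnitude
            rng.uniform(0, 1, 500),         # rupture ratio (uni- vs bilateral propagation)
        ])
        # Fake target intensities, loosely tied to magnitude, standing in for simulated MMI.
        y = 2.0 + 0.9 * X[:, 3] + rng.normal(0, 0.3, 500)

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X, y)
        print(model.predict([[-118.2, 34.1, 12.0, 7.4, 0.5]]))   # predicted intensity at one test site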

  1. Comparison of fission product release predictions using PARFUME with results from the AGR-1 irradiation experiment

    International Nuclear Information System (INIS)

    Collin, Blaise P.; Petti, David A.; Demkowicz, Paul A.; Maki, John T.

    2014-01-01

    The PARFUME (PARticle FUel ModEl) code was used to predict fission product release from tristructural isotropic (TRISO) coated fuel particles and compacts during the first irradiation experiment (AGR-1) of the Advanced Gas Reactor Fuel Development and Qualification program. The PARFUME model for the AGR-1 experiment used the fuel compact volume average temperature for each of the 620 days of irradiation to calculate the release of fission products silver, cesium, and strontium from a representative particle for a select number of AGR-1 compacts. Post-irradiation examination (PIE) measurements provided data on release of fission products from fuel compacts and fuel particles, and retention of fission products in the compacts outside of the silicon carbide (SiC) layer. PARFUME-predicted fractional release of these fission products was determined and compared to the PIE measurements. Results show an overall over-prediction of the fractional release of cesium by PARFUME. For particles with failed SiC layers, the over-prediction is by a factor of about two, corresponding to an over-estimation of the diffusivity in uranium oxycarbide (UCO) by a factor of about 100. For intact particles, whose release is much lower, the over-prediction is by an average of about an order of magnitude, which could additionally be attributed to an over-estimated diffusivity in SiC by about 30%. The release of strontium from intact particles is also over-estimated by PARFUME, which also points towards an over-estimated diffusivity of strontium in either SiC or UCO, or possibly both. The measured strontium fractional release from intact particles varied considerably from compact to compact, making it difficult to assess the effective over-estimation of the diffusivities. Furthermore, the release of strontium from particles with failed SiC is difficult to observe experimentally due to the release from intact particles, preventing any conclusions to be made on the accuracy or validity of the

  2. Debris-flows scale predictions based on basin spatial parameters calculated from Remote Sensing images in Wenchuan earthquake area

    International Nuclear Information System (INIS)

    Zhang, Huaizhen; Chi, Tianhe; Liu, Tianyue; Wang, Wei; Yang, Lina; Zhao, Yuan; Shao, Jing; Yao, Xiaojing; Fan, Jianrong

    2014-01-01

    Debris flow is a common hazard in the Wenchuan earthquake area. Collapse and Landslide Regions (CLR) caused by earthquakes can be located from remote sensing images, and CLR are the direct material source regions for debris flow. The Spatial Distribution of Collapse and Landslide Regions (SDCLR) strongly impacts debris-flow formation. To depict SDCLR, we referred to Strahler's hypsometric analysis method and developed three functional models that describe SDCLR quantitatively, mainly relative to altitude, the basin mouth and the main debris-flow gullies. We used the integrals of these functions as spatial parameters of SDCLR, and these parameters were employed in debris-flow scale prediction. Group-occurring debris flows triggered by the rainstorm of 24 September 2008 in Beichuan County, Sichuan Province, China, were selected to build the empirical equations for debris-flow scale prediction. Given the existing data, only runout-zone parameters (maximum runout distance L and lateral width B) were estimated in this paper. The results indicate that the predictions were more accurate when the spatial parameters were used. Accordingly, we suggest that spatial parameters of SDCLR be considered in debris-flow scale prediction, and we propose several strategies for future debris-flow prevention.
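
    A minimal sketch of one spatial parameter of the kind described above: the integral of a Strahler-type hypsometric curve for the collapse and landslide regions, i.e. the fraction of CLR area lying above each relative altitude, integrated from the basin floor to its top. The exact functional models of the paper are not reproduced, and the altitude values are illustrative.

      import numpy as np

      def clr_hypsometric_integral(clr_altitudes, basin_min, basin_max, n=200):
          # Integral over relative altitude of the fraction of CLR area lying above it.
          rel = (np.asarray(clr_altitudes, dtype=float) - basin_min) / (basin_max - basin_min)
          h = np.linspace(0.0, 1.0, n)
          frac_above = np.array([(rel >= hi).mean() for hi in h])
          return frac_above.mean()     # Riemann-sum approximation of the integral on [0, 1]

      # Example: CLR pixel altitudes (m) taken from a remote-sensing classification
      altitudes = np.array([1450.0, 1600.0, 1720.0, 1980.0, 2100.0])
      print(round(clr_hypsometric_integral(altitudes, basin_min=1200.0, basin_max=2400.0), 2))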

  3. Earthquake forecasting and warning

    Energy Technology Data Exchange (ETDEWEB)

    Rikitake, T.

    1983-01-01

    This review briefly describes two other books on the same subject either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, and scientific findings of precursory phenomena are included. A summary of circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all magnitudes ≥ 7.0) in China and the 1978 Izu-Oshima earthquake in Japan is presented. This book fails to comprehensively summarize recent advances in earthquake prediction research.

  4. AN EFFECTIVE HYBRID SUPPORT VECTOR REGRESSION WITH CHAOS-EMBEDDED BIOGEOGRAPHY-BASED OPTIMIZATION STRATEGY FOR PREDICTION OF EARTHQUAKE-TRIGGERED SLOPE DEFORMATIONS

    Directory of Open Access Journals (Sweden)

    A. A. Heidari

    2015-12-01

    Full Text Available Earthquakes can pose severe hazards to natural slopes and land infrastructure. One of the chief consequences of earthquakes can be land sliding, instigated by sustained shaking. In this research, an efficient procedure is proposed to assist the prediction of earthquake-induced slope displacements (EIDS). A new hybrid SVM-CBBO strategy is implemented to predict EIDS. For this purpose, first, a chaos paradigm is combined with the initialization of BBO to enhance the diversification and intensification capacity of the conventional BBO optimizer. Then, chaotic BBO is developed as the searching scheme to investigate the best values of the SVR parameters. The paper demonstrates how effective the new computing approach is in predicting EIDS. The outcomes affirm that the SVR-BBO strategy with chaos can be employed effectively as a predictive tool for evaluating EIDS.
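
    A simplified sketch of the chaotic-initialization idea described above: a logistic map generates the initial habitats (candidate C and gamma values) that a BBO-style search would then evolve, and each habitat is scored by cross-validated SVR performance. Only the initialization and scoring steps are shown; the migration and mutation operators of the full CBBO algorithm are omitted, and the data set is synthetic.

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=1)

      def logistic_map(x0, n, r=4.0):
          # chaotic logistic map, used here only to seed the initial population
          xs, x = [], x0
          for _ in range(n):
              x = r * x * (1.0 - x)
              xs.append(x)
          return np.array(xs)

      n_habitats = 10
      u = logistic_map(0.37, 2 * n_habitats).reshape(n_habitats, 2)   # values in (0, 1)
      C_values = 10.0 ** (u[:, 0] * 4.0 - 1.0)       # map to C in [0.1, 1000]
      gamma_values = 10.0 ** (u[:, 1] * 4.0 - 3.0)   # map to gamma in [0.001, 10]

      # score each initial habitat; a BBO loop would then migrate/mutate these habitats
      scores = [cross_val_score(SVR(C=c, gamma=g), X, y, cv=3).mean()
                for c, g in zip(C_values, gamma_values)]
      best = int(np.argmax(scores))
      print("best initial habitat:", C_values[best], gamma_values[best], scores[best])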

  5. Comparison of silver release predictions using PARFUME with results from the AGR-2 irradiation experiment

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise P.; Demkowicz, Paul A.; Baldwin, Charles A.; Harp, Jason M.; Hunn, John D.

    2016-11-01

    The PARFUME (PARticle FUel ModEl) code was used to predict silver release from tristructural isotropic (TRISO) coated fuel particles and compacts during the second irradiation experiment (AGR-2) of the Advanced Gas Reactor Fuel Development and Qualification program. The PARFUME model for the AGR-2 experiment used the fuel compact volume average temperature for each of the 559 days of irradiation to calculate the release of fission product silver from a representative particle for a select number of AGR-2 compacts and individual fuel particles containing either mixed uranium carbide/oxide (UCO) or 100% uranium dioxide (UO2) kernels. Post-irradiation examination (PIE) measurements were performed to provide data on release of silver from these compacts and individual fuel particles. The available experimental fractional releases of silver were compared to their corresponding PARFUME predictions. Preliminary comparisons show that PARFUME under-predicts the PIE results in UCO compacts and is in reasonable agreement with experimental data for UO2 compacts. The accuracy of PARFUME predictions is impacted by the code limitations in the modeling of the temporal and spatial distributions of the temperature across the compacts. Nevertheless, the comparisons on silver release lie within the same order of magnitude.

  6. Unusual Animal Behavior Preceding the 2011 Earthquake off the Pacific Coast of Tohoku, Japan: A Way to Predict the Approach of Large Earthquakes

    Directory of Open Access Journals (Sweden)

    Hiroyuki Yamauchi

    2014-04-01

    Full Text Available Unusual animal behaviors (UABs) have been observed before large earthquakes (EQs); however, their mechanisms are unclear. While information on UABs has been gathered after many EQs, few studies have focused on the proportion of animals showing UABs or on specific behaviors prior to EQs. On 11 March 2011, an EQ (Mw 9.0) occurred in Japan, which left about twenty thousand people dead or missing. We surveyed UABs of pets preceding this EQ using a questionnaire. Additionally, we explored whether dairy cow milk yields varied before this EQ in particular locations. In the results, 236 of 1,259 dog owners and 115 of 703 cat owners observed UABs in their pets, with restless behavior being the most prominent change in both species. Most UABs occurred within one day of the EQ. The UABs showed a precursory relationship with epicentral distance. Interestingly, cow milk yields in a milking facility within 340 km of the epicenter decreased significantly about one week before the EQ. However, cows in facilities farther away showed no significant decreases. Since both the pets’ behavior and the dairy cows’ milk yields were affected prior to the EQ, with careful observation they could contribute to EQ predictions.

  7. Implications of next generation attenuation ground motion prediction equations for site coefficients used in earthquake resistant design

    Science.gov (United States)

    Borcherdt, Roger D.

    2014-01-01

    Proposals are developed to update Tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures published as American Society of Civil Engineers Structural Engineering Institute standard 7-10 (ASCE/SEI 7–10). The updates are mean next generation attenuation (NGA) site coefficients inferred directly from the four NGA ground motion prediction equations used to derive the maximum considered earthquake response maps adopted in ASCE/SEI 7–10. Proposals include the recommendation to use straight-line interpolation to infer site coefficients at intermediate values of VS30 (average shear velocity to 30-m depth). The NGA coefficients are shown to agree well with adopted site coefficients at low levels of input motion (0.1 g) and those observed from the Loma Prieta earthquake. For higher levels of input motion, the majority of the adopted values are within the 95% epistemic-uncertainty limits implied by the NGA estimates, with the exceptions being the mid-period site coefficient, Fv, for site class D and the short-period coefficient, Fa, for site class C, both of which are slightly less than the corresponding 95% limit. The NGA database shows that the median VS30 value of 913 m/s for site class B is more typical than 760 m/s as a value to characterize firm to hard rock sites as the uniform ground condition for future maximum considered earthquake response ground motion estimates. Future updates of NGA ground motion prediction equations can be incorporated easily into future adjustments of adopted site coefficients using procedures presented herein.
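
    A small sketch of the straight-line interpolation recommended above: inferring a site coefficient at an intermediate VS30 from tabulated values at the class boundaries. The table below is illustrative only and is not the proposed ASCE/SEI 7 update.

      import numpy as np

      # illustrative VS30 break points (m/s) and placeholder short-period coefficients Fa
      vs30_table = np.array([180.0, 260.0, 360.0, 490.0, 760.0])
      fa_table   = np.array([1.6,   1.4,   1.2,   1.1,   1.0])

      def site_coefficient(vs30):
          # np.interp performs straight-line interpolation and clamps outside the table
          return np.interp(vs30, vs30_table, fa_table)

      print(site_coefficient(300.0))   # Fa at an intermediate VS30 of 300 m/s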

  8. Predictions of PuO2 and tracer compound release from ISV melts

    International Nuclear Information System (INIS)

    Cronenberg, A.W.; Callow, R.A.

    1992-04-01

    Two field tests were conducted at the Idaho National Engineering Laboratory (INEL) to assess in situ vitrification (ISV) suitability for long-term stabilization of buried radioactive waste. Both tests contained rare-earth oxide tracers (Dy2O3, Yb2O3, and Tb4O7) to simulate the presence of plutonium in the form of PuO2. In the first test, Intermediate Field Test (IFT)-1, approximately 4% release of tracer material occurred during soil melting and associated off-gassing, while essentially nil release was observed for the second experiment (IFT-2), for which off-gassing was much reduced. This report presents an evaluation of the IFT test data in terms of governing release processes. Prediction of tracer release during ISV melting centered on an assessment of three potential transport mechanisms: (a) tracer diffusion through the stagnant pool, (b) tracer transport by convective currents, and (c) tracer carry-off by escaping gas bubbles. Analysis indicates that tracer release by escaping gas is the dominant release mechanism, which is consistent with video records of gas bubble escape from the ISV melt surface. Quantitative mass transport predictions were also made for the IFT-1 test conditions, indicating similarity between the 4% release data and calculational results at viscosities of ∼ poise and tracer diffusivities of ∼10⁻⁶ cm²/s. Since PuO2 has chemical and transport (diffusivity) properties similar to those of the rare-earth tracers used in the IFT experiments, comparable release of PuO2 is predicted for similar off-gassing conditions. Reduced off-gassing during ISV would thus be expected to improve the overall retention of heavy oxides within vitrified soil

  9. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    Science.gov (United States)

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467

  10. Uncertainty estimates for predictions of the impact of breeder-reactor radionuclide releases

    International Nuclear Information System (INIS)

    Miller, C.W.; Little, C.A.

    1982-01-01

    This paper summarizes estimates, compiled in a larger report, of the uncertainty associated with models and parameters used to assess the impact on man of radionuclide releases to the environment by breeder reactor facilities. These estimates indicate that, for many sites, generic models and representative parameter values may reasonably be used to calculate doses from annual average radionuclide releases when these calculated doses are on the order of one-tenth or less of a relevant dose limit. For short-term, accidental releases, the uncertainty in the dose calculations may be much larger than an order of magnitude. As a result, it may be necessary to incorporate site-specific information into the dose calculation under such circumstances. However, even using site-specific information, inherent natural variability within human receptors and the uncertainties in the dose conversion factor will likely result in an overall uncertainty of greater than an order of magnitude for predictions of dose following short-term releases

  11. Computer-aided and predictive models for design of controlled release of pesticides

    DEFF Research Database (Denmark)

    Suné, Nuria Muro; Gani, Rafiqul

    2004-01-01

    In the field of pesticide controlled-release technology, a computer-based model that can predict the delivery of the Active Ingredient (AI) from fabricated units is important for purposes of product design and marketing. A model for the release of an AI from a microcapsule device is presented in this paper, together with a specific case study application to highlight its scope and significance. The paper also addresses the need for predictive models and proposes a computer-aided modelling framework for achieving it through the development and introduction of reliable and predictive constitutive models. A group-contribution based model for one of the constitutive variables (AI solubility in polymers) is presented together with examples of application and validation.

  12. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    Science.gov (United States)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two week long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Both the main shock and the Mw=7.7 aftershock did not rupture to the trench and left most of the seismic gap unbroken, leaving the possibility of a future large earthquake in the region.

  13. Predicting patient exposure to nickel released from cardiovascular devices using multi-scale modeling.

    Science.gov (United States)

    Saylor, David M; Craven, Brent A; Chandrasekar, Vaishnavi; Simon, David D; Brown, Ronald P; Sussman, Eric M

    2018-04-01

    Many cardiovascular device alloys contain nickel, which, if released in sufficient quantities, can lead to adverse health effects. However, in-vivo nickel release from implanted devices and subsequent biodistribution of nickel ions to local tissues and systemic circulation are not well understood. To address this uncertainty, we have developed a multi-scale (material, tissue, and system) biokinetic model. The model links nickel release from an implanted cardiovascular device to concentrations in peri-implant tissue, as well as in serum and urine, which can be readily monitored. The model was parameterized for a specific cardiovascular implant, nitinol septal occluders, using in-vitro nickel release test results, studies of ex-vivo uptake into heart tissue, and in-vivo and clinical measurements from the literature. Our results show that the model accurately predicts nickel concentrations in peri-implant tissue in an animal model and in serum and urine of septal occluder patients. The congruity of the model with these data suggests it may provide useful insight to establish nickel exposure limits and interpret biomonitoring data. Finally, we use the model to predict local and systemic nickel exposure due to passive release from nitinol devices produced using a wide range of manufacturing processes, as well as general relationships between release rate and exposure. These relationships suggest that peri-implant tissue and serum levels of nickel will remain below 5 μg/g and 10 μg/l, respectively, in patients who have received implanted nitinol cardiovascular devices provided the rate of nickel release per device surface area does not exceed 0.074 μg/(cm² d) and is less than 32 μg/d in total. The uncertainty in whether in-vitro tests used to evaluate metal ion release from medical products are representative of clinical environments is one of the largest roadblocks to establishing the associated patient risk. We have developed and validated a multi
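
    A small sketch applying the release-rate thresholds quoted above to a hypothetical device: both the per-area release rate and the total daily release must stay below the stated limits. The device area and measured rate are made-up inputs.

      def within_reported_limits(release_rate_ug_per_cm2_day, device_area_cm2):
          # Both conditions quoted in the abstract must hold: the per-area rate and
          # the total daily release (rate times exposed surface area).
          total_ug_per_day = release_rate_ug_per_cm2_day * device_area_cm2
          return release_rate_ug_per_cm2_day <= 0.074 and total_ug_per_day < 32.0

      # hypothetical device: 20 cm2 of exposed nitinol releasing 0.05 ug/(cm2 d)
      print(within_reported_limits(release_rate_ug_per_cm2_day=0.05, device_area_cm2=20.0))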

  14. News about newspaper advertisers: To what extent can corporate advertising budgets predict editorial uptake and coverage of corporate press releases?

    OpenAIRE

    Lischka, Juliane A; Stressig, J; Bünzli, F

    2016-01-01

    News value theory aims to predict a story’s chance of being selected for publication based on news factors and ascribed news values. News values can also predict the coverage of corporate press releases. For news decisions, a newspaper’s revenue model may force editors to consider whether the source of a press release is an advertising client, despite the ‘separation of church and state’. In addition, for business journalism, corporate press releases have become an increasingly important news...

  15. Prediction and Validation of Heat Release Direct Injection Diesel Engine Using Multi-Zone Model

    Science.gov (United States)

    Anang Nugroho, Bagus; Sugiarto, Bambang; Prawoto; Shalahuddin, Lukman

    2014-04-01

    The objective of this study is to develop a simulation model capable of predicting the heat release of diesel combustion accurately and in efficient computation time. A multi-zone packet model has been applied to solve the combustion phenomena inside the diesel cylinder. The model formulations are presented first, and the numerical results are then validated on a single-cylinder direct injection diesel engine at various engine speeds and injection timings. The model was found to be promising for fulfilling the objective above.

  16. Computer program for prediction of the deposition of material released from fixed and rotary wing aircraft

    Science.gov (United States)

    Teske, M. E.

    1984-01-01

    This is a user manual for the computer code "AGDISP" (AGricultural DISPersal), which has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.

  17. Earthquakes and Earthquake Engineering. LC Science Tracer Bullet.

    Science.gov (United States)

    Buydos, John F., Comp.

    An earthquake is a shaking of the ground resulting from a disturbance in the earth's interior. Seismology is the (1) study of earthquakes; (2) origin, propagation, and energy of seismic phenomena; (3) prediction of these phenomena; and (4) investigation of the structure of the earth. Earthquake engineering or engineering seismology includes the…

  18. An interdisciplinary approach to study Pre-Earthquake processes

    Science.gov (United States)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.

    2017-12-01

    We will summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large magnitude earthquakes in Asia and Europe have shown the importance of these various studies in the search for earthquake precursors, either for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and analyses of seismic records (foreshocks/aftershocks), geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could make an impact on our further understanding of the physics of earthquakes and the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will be subsequently published in a new AGU/Wiley volume. This book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical and historical) involved in this important field of research and will bring this knowledge and awareness to a broader geosciences community.

  19. Can mine tremors be predicted? Observational studies of earthquake nucleation, triggering and rupture in South African mines

    CSIR Research Space (South Africa)

    Durrheim, RJ

    2012-05-01

    Full Text Available Earthquakes, and the tsunamis and landslides they trigger, pose a serious risk to people living close to plate boundaries, and a lesser but still significant risk to inhabitants of stable continental regions where destructive earthquakes are rare... of experiments that seek to identify reliable precursors of damaging seismic events.

  20. An approach to estimating radiological risk of offsite release from a design basis earthquake for the Process Experimental Pilot Plant (PREPP)

    International Nuclear Information System (INIS)

    Lucero, V.; Meale, B.M.; Reny, D.A.; Brown, A.N.

    1990-09-01

    In compliance with Department of Energy (DOE) Order 6430.1A, a seismic analysis was performed on DOE's Process Experimental Pilot Plant (PREPP), a facility for processing low-level and transuranic (TRU) waste. Because no hazard curves were available for the Idaho National Engineering Laboratory (INEL), DOE guidelines were used to estimate the frequency for the specified design-basis earthquake (DBE). A dynamic structural analysis of the building was performed, using the DBE parameters, followed by a probabilistic risk assessment (PRA). For the PRA, a functional organization of the facility equipment was effected so that top events for a representative event tree model could be determined. Building response spectra (calculated from the structural analysis), in conjunction with generic fragility data, were used to generate fragility curves for the PREPP equipment. Using these curves, failure probabilities for each top event were calculated. These probabilities were integrated into the event tree model, and accident sequences and respective probabilities were calculated through quantification. By combining the sequences failure probabilities with a transport analysis of the estimated airborne source term from a DBE, onsite and offsite consequences were calculated. The results of the comprehensive analysis substantiated the ability of the PREPP facility to withstand a DBE with negligible consequence (i.e., estimated release was within personnel and environmental dose guidelines). 57 refs., 19 figs., 20 tabs
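
    A hedged sketch of one step described above, evaluating a component failure probability from a fragility curve at a computed building-response level. A lognormal fragility (median capacity and logarithmic standard deviation) is a common generic form; the actual PREPP fragility data and response spectra are not reproduced here.

      from math import log
      from statistics import NormalDist

      def fragility_failure_probability(demand_g, median_capacity_g, beta):
          # P(failure | demand) for a lognormal fragility curve with median capacity
          # median_capacity_g and logarithmic standard deviation beta.
          return NormalDist().cdf(log(demand_g / median_capacity_g) / beta)

      # Example: a 0.6 g spectral demand against a component with 1.2 g median capacity
      print(round(fragility_failure_probability(demand_g=0.6, median_capacity_g=1.2, beta=0.4), 3))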

  1. An approach to estimating radiological risk of offsite release from a design basis earthquake for the Process Experimental Pilot Plant (PREPP)

    Energy Technology Data Exchange (ETDEWEB)

    Lucero, V.; Meale, B.M.; Reny, D.A.; Brown, A.N.

    1990-09-01

    In compliance with Department of Energy (DOE) Order 6430.1A, a seismic analysis was performed on DOE's Process Experimental Pilot Plant (PREPP), a facility for processing low-level and transuranic (TRU) waste. Because no hazard curves were available for the Idaho National Engineering Laboratory (INEL), DOE guidelines were used to estimate the frequency for the specified design-basis earthquake (DBE). A dynamic structural analysis of the building was performed, using the DBE parameters, followed by a probabilistic risk assessment (PRA). For the PRA, a functional organization of the facility equipment was effected so that top events for a representative event tree model could be determined. Building response spectra (calculated from the structural analysis), in conjunction with generic fragility data, were used to generate fragility curves for the PREPP equipment. Using these curves, failure probabilities for each top event were calculated. These probabilities were integrated into the event tree model, and accident sequences and respective probabilities were calculated through quantification. By combining the sequences failure probabilities with a transport analysis of the estimated airborne source term from a DBE, onsite and offsite consequences were calculated. The results of the comprehensive analysis substantiated the ability of the PREPP facility to withstand a DBE with negligible consequence (i.e., estimated release was within personnel and environmental dose guidelines). 57 refs., 19 figs., 20 tabs.

  2. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews

    1994-06-01

    Full Text Available Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: (1) regeneration of stress irregularities in an earthquake; (2) the concentration of stress drop in an earthquake into asperities; (3) starting and stopping of earthquake slip at fault junctions; and (4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the
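
    As a quick check of the quoted ratio, assuming a typical crustal density of about 2700 kg m⁻³ and a shear modulus of about 3×10¹⁰ Pa (values not stated in the abstract):

      P \approx \rho g h \approx 2700\,\mathrm{kg\,m^{-3}} \times 9.8\,\mathrm{m\,s^{-2}} \times 10^{4}\,\mathrm{m} \approx 2.6\times10^{8}\,\mathrm{Pa},
      \qquad
      \frac{P}{\mu} \approx \frac{2.6\times10^{8}\,\mathrm{Pa}}{3\times10^{10}\,\mathrm{Pa}} \approx 0.01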

  3. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    Science.gov (United States)

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

    Parkfield's regularly occurring M6 mainshocks, about every 25 years, have over two decades stoked seismologists' hopes to successfully predict an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as counter example, which did release a significant portion of the stress along its fault segment and yields a substantial change in b-values.

  4. Impact of rainstorm and runoff modeling on predicted consequences of atmospheric releases from nuclear reactor accidents

    International Nuclear Information System (INIS)

    Ritchie, L.T.; Brown, W.D.; Wayland, J.R.

    1980-05-01

    A general temperate latitude cyclonic rainstorm model is presented which describes the effects of washout and runoff on consequences of atmospheric releases of radioactive material from potential nuclear reactor accidents. The model treats the temporal and spatial variability of precipitation processes. Predicted air and ground concentrations of radioactive material and resultant health consequences for the new model are compared to those of the original WASH-1400 model under invariant meteorological conditions and for realistic weather events using observed meteorological sequences. For a specific accident under a particular set of meteorological conditions, the new model can give significantly different results from those predicted by the WASH-1400 model, but the aggregate consequences produced for a large number of meteorological conditions are similar

  5. A 30-year history of earthquake crisis communication in California and lessons for the future

    Science.gov (United States)

    Jones, L.

    2015-12-01

    The first statement from the US Geological Survey to the California Office of Emergency Services quantifying the probability of a possible future earthquake was made in October 1985 about the probability (approximately 5%) that a M4.7 earthquake located directly beneath the Coronado Bay Bridge in San Diego would be a foreshock to a larger earthquake. In the next 30 years, publication of aftershock advisories have become routine and formal statements about the probability of a larger event have been developed in collaboration with the California Earthquake Prediction Evaluation Council (CEPEC) and sent to CalOES more than a dozen times. Most of these were subsequently released to the public. These communications have spanned a variety of approaches, with and without quantification of the probabilities, and using different ways to express the spatial extent and the magnitude distribution of possible future events. The USGS is re-examining its approach to aftershock probability statements and to operational earthquake forecasting with the goal of creating pre-vetted automated statements that can be released quickly after significant earthquakes. All of the previous formal advisories were written during the earthquake crisis. The time to create and release a statement became shorter with experience from the first public advisory (to the 1988 Lake Elsman earthquake) that was released 18 hours after the triggering event, but was never completed in less than 2 hours. As was done for the Parkfield experiment, the process will be reviewed by CEPEC and NEPEC (National Earthquake Prediction Evaluation Council) so the statements can be sent to the public automatically. This talk will review the advisories, the variations in wording and the public response and compare this with social science research about successful crisis communication, to create recommendations for future advisories

  6. NGA-West 2 Equations for predicting PGA, PGV, and 5%-Damped PSA for shallow crustal earthquakes

    Science.gov (United States)

    Boore, David M.; Stewart, Jon P.; Seyhan, Emel; Atkinson, Gail M.

    2013-01-01

    We provide ground-motion prediction equations for computing medians and standard deviations of average horizontal component intensity measures (IMs) for shallow crustal earthquakes in active tectonic regions. The equations were derived from a global database with M 3.0–7.9 events. We derived equations for the primary M- and distance-dependence of the IMs after fixing the VS30-based nonlinear site term from a parallel NGA-West 2 study. We then evaluated additional effects using mixed effects residuals analysis, which revealed no trends with source depth over the M range of interest, indistinct Class 1 and 2 event IMs, and basin depth effects that increase and decrease long-period IMs for depths larger and smaller, respectively, than means from regional VS30-depth relations. Our aleatory variability model captures decreasing between-event variability with M, as well as within-event variability that increases or decreases with M depending on period, increases with distance, and decreases for soft sites.

  7. NGA-West2 equations for predicting vertical-component PGA, PGV, and 5%-damped PSA from shallow crustal earthquakes

    Science.gov (United States)

    Stewart, Jonathan P.; Boore, David M.; Seyhan, Emel; Atkinson, Gail M.

    2016-01-01

    We present ground motion prediction equations (GMPEs) for computing natural log means and standard deviations of vertical-component intensity measures (IMs) for shallow crustal earthquakes in active tectonic regions. The equations were derived from a global database with M 3.0–7.9 events. The functions are similar to those for our horizontal GMPEs. We derive equations for the primary M- and distance-dependence of peak acceleration, peak velocity, and 5%-damped pseudo-spectral accelerations at oscillator periods between 0.01–10 s. We observe pronounced M-dependent geometric spreading and region-dependent anelastic attenuation for high-frequency IMs. We do not observe significant region-dependence in site amplification. Aleatory uncertainty is found to decrease with increasing magnitude; within-event variability is independent of distance. Compared to our horizontal-component GMPEs, attenuation rates are broadly comparable (somewhat slower geometric spreading, faster apparent anelastic attenuation), VS30-scaling is reduced, nonlinear site response is much weaker, within-event variability is comparable, and between-event variability is greater.

  8. Earthquake Early Warning: User Education and Designing Effective Messages

    Science.gov (United States)

    Burkett, E. R.; Sellnow, D. D.; Jones, L.; Sellnow, T. L.

    2014-12-01

    The U.S. Geological Survey (USGS) and partners are transitioning from test-user trials of a demonstration earthquake early warning system (ShakeAlert) to deciding and preparing how to implement the release of earthquake early warning information, alert messages, and products to the public and other stakeholders. An earthquake early warning system uses seismic station networks to rapidly gather information about an occurring earthquake and send notifications to user devices ahead of the arrival of potentially damaging ground shaking at their locations. Earthquake early warning alerts can thereby allow time for actions to protect lives and property before arrival of damaging shaking, if users are properly educated on how to use and react to such notifications. A collaboration team of risk communications researchers and earth scientists is researching the effectiveness of a chosen subset of potential earthquake early warning interface designs and messages, which could be displayed on a device such as a smartphone. Preliminary results indicate, for instance, that users prefer alerts that include 1) a map to relate their location to the earthquake and 2) instructions for what to do in response to the expected level of shaking. A number of important factors must be considered to design a message that will promote appropriate self-protective behavior. While users prefer to see a map, how much information can be processed in limited time? Are graphical representations of wavefronts helpful or confusing? The most important factor to promote a helpful response is the predicted earthquake intensity, or how strong the expected shaking will be at the user's location. Unlike Japanese users of early warning, few Californians are familiar with the earthquake intensity scale, so we are exploring how differentiating instructions between intensity levels (e.g., "Be aware" for lower shaking levels and "Drop, cover, hold on" at high levels) can be paired with self-directed supplemental

  9. Discussion of the design of satellite-laser measurement stations in the eastern Mediterranean under the geological aspect. Contribution to the earthquake prediction research by the Wegener Group and to NASA's Crustal Dynamics Project

    Science.gov (United States)

    Paluska, A.; Pavoni, N.

    1983-01-01

    Research conducted for determining the location of stations for measuring crustal dynamics and predicting earthquakes is discussed. Procedural aspects, the extraregional kinematic tendencies, and regional tectonic deformation mechanisms are described.

  10. The GIS and analysis of earthquake damage distribution of the 1303 Hongtong M=8 earthquake

    Science.gov (United States)

    Gao, Meng-Tan; Jin, Xue-Shen; An, Wei-Ping; Lü, Xiao-Jian

    2004-07-01

    A geographic information system of the 1303 Hongtong M=8 earthquake has been established. Using the spatial analysis functions of GIS, the spatial distribution characteristics of damage and the isoseismals of the earthquake are studied. By comparison with the standard earthquake intensity attenuation relationship, anomalous damage distributions of the earthquake are identified, and the relationship of these anomalies with tectonics, site conditions and basins is analyzed. The influence on ground motion of the earthquake source and of underground structures near the source is also studied. The implications of the anomalous damage distribution for seismic zonation, earthquake-resistant design, earthquake prediction and earthquake emergency response are discussed.

  11. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, N.L.D., E-mail: nelson.luiz@ctmsp.mar.mil.br [Centro Tecnologico da Marinha (CTM/SP), Sao Paulo, SP (Brazil); Rochedo, E.R.R., E-mail: elainerochedo@gmail.com [Instituto de Radiprotecao e Dosimetria (lRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Mazzilli, B.P., E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses in relation to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fit to the obtained habit data. The first option corresponds to the way the data were used for the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by ICRP to estimate doses to the so-called 'representative person'. (author)

  12. Predicting Ascospore Release of Monilinia vaccinii-corymbosi of Blueberry with Machine Learning.

    Science.gov (United States)

    Harteveld, Dalphy O C; Grant, Michael R; Pscheidt, Jay W; Peever, Tobin L

    2017-11-01

    Mummy berry, caused by Monilinia vaccinii-corymbosi, causes economic losses of highbush blueberry in the U.S. Pacific Northwest (PNW). Apothecia develop from mummified berries overwintering on soil surfaces and produce ascospores that infect tissue emerging from floral and vegetative buds. Disease control currently relies on fungicides applied on a calendar basis rather than on inoculum availability. To establish a prediction model for ascospore release, apothecial development was tracked in three fields, one in western Oregon and two in northwestern Washington in 2015 and 2016. Air and soil temperature, precipitation, soil moisture, leaf wetness, relative humidity and solar radiation were monitored using in-field weather stations and Washington State University's AgWeatherNet stations. Four modeling approaches were compared: logistic regression, multivariate adaptive regression splines, artificial neural networks, and random forest. A supervised learning approach was used, with the data split into training (70%) and testing (30%) sets. The importance of environmental factors was calculated for each model separately. Soil temperature, soil moisture, and solar radiation were identified as the most important factors influencing ascospore release. Random forest models, with 78% accuracy, showed the best performance compared with the other models. The results of this research help PNW blueberry growers to optimize fungicide use and reduce production costs.
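
    An illustrative sketch of the best-performing approach above: a random-forest classifier trained on a 70/30 split of weather features to predict whether ascospores are released on a given day, with feature importances reported afterwards. The data frame below is a synthetic stand-in for the monitored weather variables, not the study data.

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      # synthetic stand-in for daily weather observations from in-field stations
      rng = np.random.default_rng(42)
      n = 600
      df = pd.DataFrame({
          "soil_temp": rng.normal(10.0, 4.0, n),
          "soil_moisture": rng.uniform(0.1, 0.5, n),
          "solar_radiation": rng.uniform(0.0, 30.0, n),
          "air_temp": rng.normal(12.0, 5.0, n),
          "leaf_wetness": rng.uniform(0.0, 1.0, n),
      })
      # toy rule standing in for observed apothecial activity (0/1 target)
      df["ascospore_release"] = ((df.soil_temp > 8) & (df.soil_moisture > 0.25)).astype(int)

      features = ["soil_temp", "soil_moisture", "solar_radiation", "air_temp", "leaf_wetness"]
      X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["ascospore_release"],
                                                test_size=0.3, random_state=1)   # 70/30 split
      clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)
      print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 2))
      print(dict(zip(features, clf.feature_importances_.round(2))))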

  13. Critical groups vs. representative person: dose calculations due to predicted releases from USEXA

    International Nuclear Information System (INIS)

    Ferreira, N.L.D.; Rochedo, E.R.R.; Mazzilli, B.P.

    2013-01-01

    The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses in relation to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fit to the obtained habit data. The first option corresponds to the way the data were used for the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by ICRP to estimate doses to the so-called 'representative person'. (author)

  14. Muscle enzyme release does not predict muscle function impairment after triathlon.

    Science.gov (United States)

    Margaritis, I; Tessier, F; Verdera, F; Bermon, S; Marconnet, P

    1999-06-01

    We sought to determine the effects of a long distance triathlon (4 km swim, 120 km bike-ride, and 30 km run) on the four-day kinetics of the biochemical markers of muscle damage, and whether they were quantitatively linked with muscle function impairment and soreness. Data were collected from 2 days before until 4 days after the completion of the race. Twelve triathletes performed the triathlon and five did not. Maximal voluntary contraction (MVC), muscle soreness (DOMS) and total serum CK, CK-MB, LDH, AST and ALT activities were assessed. Significant changes after triathlon completion were found for all muscle damage indirect markers over time (p triathlon. Long distance triathlon race caused muscle damage, but extent, as well as muscle recovery cannot be evaluated by the magnitude of changes in serum enzyme activities. Muscle enzyme release cannot be used to predict the magnitude of the muscle function impairment caused by muscle damage.

  15. The severity of an earthquake

    Science.gov (United States)

    ,

    1997-01-01

    The severity of an earthquake can be expressed in terms of both intensity and magnitude. However, the two terms are quite different, and they are often confused. Intensity is based on the observed effects of ground shaking on people, buildings, and natural features. It varies from place to place within the disturbed region depending on the location of the observer with respect to the earthquake epicenter. Magnitude is related to the amount of seismic energy released at the hypocenter of the earthquake. It is based on the amplitude of the earthquake waves recorded on instruments

  16. Retrospective stress-forecasting of earthquakes

    Science.gov (United States)

    Gao, Yuan; Crampin, Stuart

    2015-04-01

    Observations of changes in azimuthally varying shear-wave splitting (SWS) above swarms of small earthquakes monitor stress-induced changes to the stress-aligned vertical microcracks pervading the upper crust, lower crust, and uppermost ~400km of the mantle. (The microcracks are intergranular films of hydrolysed melt in the mantle.) Earthquakes release stress, and an appropriate amount of stress for the relevant magnitude must accumulate before each event. Iceland is on an extension of the Mid-Atlantic Ridge, where two transform zones, uniquely run onshore. These onshore transform zones provide semi-continuous swarms of small earthquakes, which are the only place worldwide where SWS can be routinely monitored. Elsewhere SWS must be monitored above temporally-active occasional swarms of small earthquakes, or in infrequent SKS and other teleseismic reflections from the mantle. Observations of changes in SWS time-delays are attributed to stress-induced changes in crack aspect-ratios allowing stress-accumulation and stress-relaxation to be identified. Monitoring SWS in SW Iceland in 1988, stress-accumulation before an impending earthquake was recognised and emails were exchanged between the University of Edinburgh (EU) and the Iceland Meteorological Office (IMO). On 10th November 1988, EU emailed IMO that a M5 earthquake could occur soon on a seismically-active fault plane where seismicity was still continuing following a M5.1 earthquake six-months earlier. Three-days later, IMO emailed EU that a M5 earthquake had just occurred on the specified fault-plane. We suggest this is a successful earthquake stress-forecast, where we refer to the procedure as stress-forecasting earthquakes as opposed to predicting or forecasting to emphasise the different formalism. Lack of funds has prevented us monitoring SWS on Iceland seismograms, however, we have identified similar characteristic behaviour of SWS time-delays above swarms of small earthquakes which have enabled us to

  17. Statistical physics approach to earthquake occurrence and forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, Lucilla de [Department of Industrial and Information Engineering, Second University of Naples, Aversa (CE) (Italy); Godano, Cataldo [Department of Mathematics and Physics, Second University of Naples, Caserta (Italy); Grasso, Jean Robert [ISTerre, IRD-CNRS-OSUG, University of Grenoble, Saint Martin d’Héres (France); Lippiello, Eugenio, E-mail: eugenio.lippiello@unina2.it [Department of Mathematics and Physics, Second University of Naples, Caserta (Italy)

    2016-04-25

    There is striking evidence that the dynamics of the Earth crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods as scaling laws, universality, fractal dimension, renormalization group, to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space–time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description for the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for
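
    As a minimal illustration of one of the phenomenological laws mentioned above, the sketch below estimates the Gutenberg-Richter b-value of a catalogue with the standard maximum-likelihood (Aki) estimator, b = log10(e) / (mean(M) - Mc), for events above a completeness magnitude Mc. The magnitudes are synthetic and the estimator choice is an assumption made for illustration, not part of the review.

      import numpy as np

      def b_value(magnitudes, mc):
          m = np.asarray(magnitudes, dtype=float)
          m = m[m >= mc]
          # Aki (1965) maximum-likelihood estimator; a binned catalogue would also
          # subtract half the magnitude bin width from mc (Utsu's correction).
          return np.log10(np.e) / (m.mean() - mc)

      # synthetic catalogue with a true b-value of about 1 above Mc = 2.0
      rng = np.random.default_rng(42)
      magnitudes = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)
      print(round(b_value(magnitudes, mc=2.0), 2))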

  18. Estimated airborne release of radionuclides from the Battelle Memorial Institute Columbus Laboratories JN-1b building at the West Jefferson site as a result of postulated damage from severe wind and earthquake hazard

    International Nuclear Information System (INIS)

    Mishima, J.; Ayer, J.E.

    1981-11-01

    The potential airborne releases of radionuclides (source terms) that could result from wind and earthquake damage are estimated for the Battelle Memorial Institute Columbus Laboratories JN-1b Building at the West Jefferson site in Ohio. The estimated source terms are based on the damage to barriers containing the radionuclides, the inventory of radionuclides at risk, and the fraction of the inventory made airborne as a result of the loss of containment. In an attempt to provide a realistic range of potential source terms that include most of the normal operating conditions, a best estimate bounded by upper and lower limits is calculated by combining the upper-bound, best-estimate, and lower-bound inventories-at-risk with an airborne release factor (upper-bound, best-estimate, and lower-bound if possible) for the situation. The factors used to evaluate the fractional airborne release of materials and the exchange rates between enclosed and exterior atmospheres are discussed. The postulated damage and source terms are discussed for wind and earthquake hazard scenarios in order of their increasing severity

  19. Methods for prediction of strong earthquake ground motion. Final technical report, October 1, 1976--September 30, 1977

    International Nuclear Information System (INIS)

    Trifunac, M.D.

    1977-09-01

    The purpose of this report is to summarize the results of the work on characterization of strong earthquake ground motion. The objective of this effort has been to initiate presentation of simple yet detailed methodology for characterization of strong earthquake ground motion for use in licensing and evaluation of operating Nuclear Power Plants. This report will emphasize the simplicity of the methodology by presenting only the end results in a format that may be useful for the development of the site specific criteria in seismic risk analysis, for work on the development of modern standards and regulatory guides, and for re-evaluation of the existing power plant sites

  20. PRECURSORS OF EARTHQUAKES: VLF SIGNALS-IONOSPHERE RELATION

    Directory of Open Access Journals (Sweden)

    Mustafa ULAS

    2013-01-01

    Full Text Available A lot of people die because of earthquakes every year. It is therefore crucial to predict earthquakes a reasonable time before they happen. This paper presents recent information published in the literature about precursors of earthquakes. The relationships between earthquakes and the ionosphere are targeted to guide new research towards novel prediction methods.

  1. Earthquakes: no danger for deep underground nuclear waste repositories

    International Nuclear Information System (INIS)

    2010-03-01

    On the Earth, the continental plates are steadily moving. Principally at the plate boundaries, such shifts produce stresses which are released in the form of earthquakes. The higher the built-up energy, the more violent the shaking. Earthquakes have accompanied mankind since ancient times and unsettle the population. To date, nobody is able to predict where and when they will take place, but there are regions of the Earth where, due to their geological situation, the occurrence of earthquakes is more probable than elsewhere. The impact of a very strong earthquake on structures at the Earth's surface depends on several factors. Besides the ground structure, the density of buildings, construction style and materials used play an important role. Construction-related technical measures can improve the safety of buildings and, together with correct behaviour of the people concerned, save many lives. Earthquakes are well known in Switzerland, where the stresses are due to the collision of the African and European continental plates that created the Alps. The impact of an earthquake is more limited underground than at the Earth's surface. There is no danger for deep underground repositories

  2. Predicting release and transport of pesticides from a granular formulation during unsaturated diffusion in porous media

    DEFF Research Database (Denmark)

    Paradelo Pérez, Marcos; Soto-Gómez, Diego; Pérez-Rodrígez, Paula

    2014-01-01

    The release and transport of active ingredients (AIs) from controlled-release formulations (CRFs) have the potential to reduce groundwater pesticide pollution. These formulations have a major effect on the release rate and subsequent transport to groundwater. Therefore the influence of CRFs should be...

  3. Gas and Dust Phenomena of Mega-earthquakes and the Cause

    Science.gov (United States)

    Yue, Z.

    2013-12-01

    A mega-earthquake suddenly releases a large to extremely large amount of kinetic energy within a few tens to two hundred seconds and over distances of ten to hundreds of kilometers in the Earth's crust and on the ground surface. It also generates seismic waves that can be received globally and co-seismic ground damage such as co-seismic ruptures and landslides. However, such vast, dramatic and devastating kinetic actions in the Earth's crustal rocks and in the ground soils cannot be known or predicted by people a few weeks, days, hours, or minutes before they happen. Although seismologists can develop and use seismometers to report the locations and magnitudes of earthquakes within minutes of their occurrence, they cannot predict earthquakes at present. Therefore, damaging earthquakes have caused, and will continue to cause, huge disasters, fatalities and injuries to human beings. This problem may indicate that it is necessary to re-examine the cause of mega-earthquakes in addition to the conventional cause of active fault elastic rebounding. In the last ten years, many mega-earthquakes occurred in China and around the Pacific Ocean and caused many casualties and devastating disasters to the environment. The author will give a brief review of the impacts of the mega-earthquakes that happened in recent years. He will then present many gas- and dust-related phenomena associated with the sudden occurrences of these mega-earthquakes. They include the 2001 Kunlunshan Earthquake M8.1, the 2008 Wenchuan Earthquake M8.0 and the 2010 Yushu Earthquake M7.1 in China, the 2010 Haiti Earthquake M7.0, the 2010 Mexicali Earthquake M7.2, the 2010 Chile Earthquake M8.8, the 2011 Christchurch Earthquake M6.3 and the 2011 Japan Earthquake M9.0 around the Pacific Ocean. He will discuss the cause of these gas and dust related phenomena. He will use these phenomena and their common cause to show that the earthquakes were caused by the rapid migration and expansion of highly compressed and

  4. Predicting the release of metals from ombrotrophic peat due to drought-induced acidification

    Energy Technology Data Exchange (ETDEWEB)

    Tipping, E.; Smith, E.J.; Lawlor, A.J.; Hughes, S.; Stevens, P.A

    2003-05-01

    Metals stored in peats can be remobilised by sulphuric acid, generated by the drought-induced oxidation of reduced sulphur. - Ombrotrophic peats in northern England and Scotland, close to industrial areas, have substantial contents of potentially toxic metals (Al, Ni, Cu, Zn, Cd and Pb) and of pollutant sulphur, all derived from atmospheric deposition. The peat sulphur, ordinarily in reduced form, may be converted to sulphuric acid under drought conditions, due to the entry of oxygen into the peats. The consequent lowering of soil solution pH is predicted to cause the release of metals held on ligand sites of the peat organic matter. The purpose of the present study was to explore, by simulation modelling, the extent of the metal response. Chemical variables (elemental composition, pH, metal contents) were measured for samples of ombrotrophic peats from three locations. Water extracts of the peats, and samples of local surface water, were also analysed, for pH, dissolved organic carbon (DOC) and metals. Metal release from peats due to acidification was demonstrated experimentally, and could be accounted for reasonably well using a speciation code (WHAM/Model VI). These data, together with information on metal and S deposition, and meteorology, were used to construct a simple description of peat hydrochemistry, based on WHAM/Model VI, that takes into account ion-binding by humic substances (assumed to be the 'active' constituents of the peat with respect to ion-binding). The model was used to simulate steady state situations that approximated the observed soil pH, metal pools and dissolved metal concentrations. Then, drought conditions were imposed, to generate increased concentrations of H{sub 2}SO{sub 4}, in line with those observed during the drought of 1995. The model calculations suggest that the pH will decrease from the initial steady state value of 4.3 to 3.3-3.6 during rewetting periods following droughts, depending upon assumptions about the

  5. Impact of atmospheric release in stable night meteorological conditions; can emergency models predict dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Connan, O.; Hebert, D.; Solier, L.; Voiseux, C.; Lamotte, M.; Laguionie, P.; Maro, D.; Thomas, L. [IRSN/PRP-ENV/SERIS/LRC (France)

    2014-07-01

    Atmospheric dispersion of pollutants or radionuclides in stratified meteorological conditions, i.e. especially when weather conditions are very stable, mainly at night, is still poorly understood and not well handled by operational atmospheric dispersion models. However, correctly predicting the dispersion of a radioactive plume, and estimating the radiological consequences for the population, following an unplanned atmospheric release of radionuclides are crucial steps in an emergency response. To better understand dispersion in these special weather conditions, IRSN performed a series of 22 air sampling campaigns between 2010 and 2013 in the vicinity of the La Hague nuclear reprocessing plant (AREVA - NC, France), at distances between 200 m and 3000 m from the facility. Krypton-85 (85Kr), a beta- and gamma-emitting radionuclide released during the reprocessing of spent nuclear fuel, was used as a non-reactive tracer of radioactive plumes. Experimental campaigns were carried out in stable or very stable conditions (classes E or F according to the Pasquill classification) 18 times, and in neutral conditions (class D according to the Pasquill classification) 4 times. During each campaign, real-time Krypton-85 measurements were made to locate the plume around the plant, and integrated samples (30 min) were then collected in bags perpendicular to the assumed wind direction axis. After measurement by gamma spectrometry, we estimated, where possible, the point of impact and the width of the plume. The objective was to estimate the horizontal dispersion (width) of the plume at ground level as a function of distance and to calculate atmospheric transfer coefficients. In a second step, the objective was to draw conclusions on the use of common models and on their uncertainties. The results will be presented in terms of impact in the near-field. They will be compared with data obtained in previous years in neutral atmospheric conditions, and finally the results will be confronted with
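    As a point of reference for how such atmospheric transfer coefficients are conventionally estimated, the sketch below is a minimal Gaussian plume calculation, not the IRSN analysis: it uses one common set of open-country (Briggs-type) approximations for the horizontal and vertical spread parameters for Pasquill classes D and F and evaluates the ground-level, plume-axis transfer coefficient (concentration divided by release rate) at the campaign distances; the release height and wind speed are hypothetical placeholders.

        import numpy as np

        # Open-country (Briggs-type) dispersion coefficients, sigma in metres, x in metres.
        # One common parameterization; exact values differ between references.
        def sigma_y(x, pasquill):
            return {"D": 0.08 * x / np.sqrt(1 + 0.0001 * x),
                    "F": 0.04 * x / np.sqrt(1 + 0.0001 * x)}[pasquill]

        def sigma_z(x, pasquill):
            return {"D": 0.06 * x / np.sqrt(1 + 0.0015 * x),
                    "F": 0.016 * x / np.sqrt(1 + 0.0003 * x)}[pasquill]

        def axial_atc(x, u, h, pasquill):
            """Ground-level, plume-axis atmospheric transfer coefficient C/Q (s/m3)
            for a continuous point release at effective height h (m) and wind speed u (m/s)."""
            sy, sz = sigma_y(x, pasquill), sigma_z(x, pasquill)
            return np.exp(-h ** 2 / (2 * sz ** 2)) / (np.pi * u * sy * sz)

        distances = np.array([200.0, 500.0, 1000.0, 2000.0, 3000.0])   # m, as in the campaigns
        for cls in ("D", "F"):
            atc = axial_atc(distances, u=3.0, h=10.0, pasquill=cls)    # hypothetical u and h
            print(cls, ["%.1e" % v for v in atc])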

  6. Extreme value distribution of earthquake magnitude

    Science.gov (United States)

    Zi, Jun Gan; Tung, C. C.

    1983-07-01

    Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Adler-Lomnitz and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
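    As an illustration of the general approach (not the authors' specific model), if individual magnitudes above a completeness threshold follow a Gutenberg-Richter exponential law and event counts are Poissonian, the distribution of the maximum magnitude in an interval has the closed form F_max(m) = exp(-N exp(-beta (m - m0))). The sketch below simulates synthetic catalogue years, extracts annual maxima, and compares their empirical distribution with that derived form; all parameter values are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        # Gutenberg-Richter magnitudes above completeness m0: F(m) = 1 - exp(-beta (m - m0))
        m0, b = 4.0, 1.0
        beta = b * np.log(10)
        rate = 100.0          # mean number of events per year above m0 (placeholder)

        def annual_maxima(n_years):
            """Largest magnitude observed in each synthetic catalogue year."""
            out = []
            for _ in range(n_years):
                n = max(rng.poisson(rate), 1)
                out.append((m0 + rng.exponential(1 / beta, n)).max())
            return np.array(out)

        maxima = np.sort(annual_maxima(77))                        # ~1904-1980 span
        ecdf = np.arange(1, maxima.size + 1) / (maxima.size + 1)
        derived = np.exp(-rate * np.exp(-beta * (maxima - m0)))    # derived CDF of the maximum
        print("mean |ECDF - derived CDF| =", round(np.abs(ecdf - derived).mean(), 3))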

  7. Do earthquakes exhibit self-organized criticality?

    International Nuclear Information System (INIS)

    Yang Xiaosong; Ma Jin; Du Shuming

    2004-01-01

    If earthquakes are phenomena of self-organized criticality (SOC), statistical characteristics of the earthquake time series should be invariant after the sequence of events in an earthquake catalog is randomly rearranged. In this Letter we argue that earthquakes are unlikely to be phenomena of SOC, because our analysis of the Southern California Earthquake Catalog shows that the first-return-time probability P_M(T) is apparently changed after the time series is rearranged. This suggests that SOC theory should not be used to oppose efforts at earthquake prediction
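    A minimal sketch of this kind of test, assuming a catalogue given simply as occurrence times and magnitudes: the first-return-time statistics for events above a magnitude threshold M are computed before and after the sequence of events is randomly rearranged (magnitudes reassigned to occurrence times). The catalogue below is synthetic; in the study itself the Southern California Earthquake Catalog plays this role.

        import numpy as np

        rng = np.random.default_rng(1)

        def return_times(times, mags, M):
            """Waiting times between successive events with magnitude >= M."""
            t = np.sort(times[mags >= M])
            return np.diff(t)

        # Synthetic clustered catalogue: background events plus a short burst of smaller
        # aftershocks after each M >= 5 event, so magnitudes and times are correlated.
        times, mags = [], []
        t = 0.0
        for _ in range(3000):
            t += rng.exponential(1.0)
            m = 2.0 + rng.exponential(1 / np.log(10))
            times.append(t); mags.append(m)
            if m >= 5.0:
                for dt in np.cumsum(rng.exponential(0.01, 20)):
                    times.append(t + dt); mags.append(2.0 + rng.exponential(0.25))
        times, mags = np.array(times), np.array(mags)

        # Randomly rearrange the sequence of events.
        mags_shuffled = rng.permutation(mags)

        for label, m in (("original  ", mags), ("rearranged", mags_shuffled)):
            rt = return_times(times, m, M=3.5)
            print(label, "mean return time %.2f, coefficient of variation %.2f"
                  % (rt.mean(), rt.std() / rt.mean()))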

  8. Resource loss, self-efficacy, and family support predict posttraumatic stress symptoms: a 3-year study of earthquake survivors.

    Science.gov (United States)

    Warner, Lisa Marie; Gutiérrez-Doña, Benicio; Villegas Angulo, Maricela; Schwarzer, Ralf

    2015-01-01

    Social support and self-efficacy are regarded as coping resources that may facilitate readjustment after traumatic events. The 2009 Cinchona earthquake in Costa Rica serves as an example for such an event to study resources to prevent subsequent severity of posttraumatic stress symptoms. At Time 1 (1-6 months after the earthquake in 2009), N=200 survivors were interviewed, assessing resource loss, received family support, and posttraumatic stress response. At Time 2 in 2012, severity of posttraumatic stress symptoms and general self-efficacy beliefs were assessed. Regression analyses estimated the severity of posttraumatic stress symptoms accounted for by all variables. Moderator and mediator models were examined to understand the interplay of received family support and self-efficacy with posttraumatic stress symptoms. Baseline posttraumatic stress symptoms and resource loss (T1) accounted for significant but small amounts of the variance in the severity of posttraumatic stress symptoms (T2). The main effects of self-efficacy (T2) and social support (T1) were negligible, but social support buffered resource loss, indicating that only less supported survivors were affected by resource loss. Self-efficacy at T2 moderated the support-stress relationship, indicating that low levels of self-efficacy could be compensated by higher levels of family support. Receiving family support at T1 enabled survivors to feel self-efficacious, underlining the enabling hypothesis. Receiving social support from relatives shortly after an earthquake was found to be an important coping resource, as it alleviated the association between resource loss and the severity of posttraumatic stress response, compensated for deficits of self-efficacy, and enabled self-efficacy, which was in turn associated with more adaptive adjustment 3 years after the earthquake.

  9. BIODOSE: a code for predicting the dose to man from radionuclides released from underground nuclear waste repositories

    International Nuclear Information System (INIS)

    Bonner, N.A.; Ng, Y.C.

    1980-03-01

    The BIODOSE computer program simulates the environmental transport of radionuclides released to surface water and predicts the resulting dosage to humans. This report describes the program and discusses its use in the evaluation of nuclear waste repositories. The methods used to estimate dose are examined critically, and the most important parameters in each stage of the calculations are identified as an aid in planning for measurements in the field. Dose predictions from releases of nuclear waste to a large northwestern river (the baseline river) are presented to point out the nuclides, compartments and pathways that contribute most to the hazard as a function of waste storage time. Predictions for five other water systems are presented to identify the most important system parameters that determine the concentrations of individual nuclides in compartments and the resultant dose. The uncertainties in the biological parameters for dose prediction are identified, and changes in current values are suggested. Various ways of reporting dose estimates for radiological safety assessments are discussed. Additional work needed to improve the dose predictions from BIODOSE and specific areas and steps to improve our capabilities to assess the environmental transport of nuclides released from nuclear waste repositories and the resultant dose to man are suggested

  10. Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation.

    Science.gov (United States)

    Saddoris, Michael P; Cacciapaglia, Fabio; Wightman, R Mark; Carelli, Regina M

    2015-08-19

    Mesolimbic dopamine (DA) is phasically released during appetitive behaviors, though there is substantive disagreement about the specific purpose of these DA signals. For example, prediction error (PE) models suggest a role of learning, while incentive salience (IS) models argue that the DA signal imbues stimuli with value and thereby stimulates motivated behavior. However, within the nucleus accumbens (NAc) patterns of DA release can strikingly differ between subregions, and as such, it is possible that these patterns differentially contribute to aspects of PE and IS. To assess this, we measured DA release in subregions of the NAc during a behavioral task that spatiotemporally separated sequential goal-directed stimuli. Electrochemical methods were used to measure subsecond NAc dopamine release in the core and shell during a well learned instrumental chain schedule in which rats were trained to press one lever (seeking; SL) to gain access to a second lever (taking; TL) linked with food delivery, and again during extinction. In the core, phasic DA release was greatest following initial SL presentation, but minimal for the subsequent TL and reward events. In contrast, phasic shell DA showed robust release at all task events. Signaling decreased between the beginning and end of sessions in the shell, but not core. During extinction, peak DA release in the core showed a graded decrease for the SL and pauses in release during omitted expected rewards, whereas shell DA release decreased predominantly during the TL. These release dynamics suggest parallel DA signals capable of supporting distinct theories of appetitive behavior. Dopamine signaling in the brain is important for a variety of cognitive functions, such as learning and motivation. Typically, it is assumed that a single dopamine signal is sufficient to support these cognitive functions, though competing theories disagree on how dopamine contributes to reward-based behaviors. Here, we have found that real

  11. The Fusion of Financial Analysis and Seismology: Statistical Methods from Financial Market Analysis Applied to Earthquake Data

    Science.gov (United States)

    Ohyanagi, S.; Dileonardo, C.

    2013-12-01

    As a natural phenomenon, earthquake occurrence is difficult to predict. Statistical analysis of earthquake data was performed using candlestick chart and Bollinger Band methods. These statistical methods, commonly used in the financial world to analyze market trends, were tested against earthquake data. Earthquakes above Mw 4.0 located off the shore of Sanriku (37.75°N ~ 41.00°N, 143.00°E ~ 144.50°E) from February 1973 to May 2013 were selected for analysis. Two specific patterns in earthquake occurrence were recognized through the analysis. One is a spreading of the candlesticks prior to the occurrence of events greater than Mw 6.0. A second pattern shows convergence in the Bollinger Band, which implies a positive or negative change in the trend of earthquakes. Both patterns match general models for the buildup and release of strain through the earthquake cycle, and agree with the characteristics of both the candlestick chart and the Bollinger Band analysis. These results show there is a high correlation between patterns in earthquake occurrence and trend analysis by these two statistical methods. The results of this study support the appropriateness of applying these financial analysis methods to the analysis of earthquake occurrence.
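    A minimal sketch of the two techniques named above, applied to a magnitude series with pandas; the synthetic catalogue, the yearly binning, and the 20-period window are placeholders rather than the authors' data or settings. The candlestick "spread" is taken here as the high-low range of magnitudes in each bin, and the Bollinger Bands are the rolling mean of the bin closes plus or minus two rolling standard deviations, whose narrowing corresponds to the "convergence" described.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)

        # Placeholder catalogue (real use: Mw >= 4.0 events off Sanriku, Feb 1973 - May 2013).
        days = np.sort(rng.uniform(0, 40 * 365.25, 2000))
        idx = pd.to_datetime("1973-02-01") + pd.to_timedelta(days, unit="D")
        cat = pd.Series(4.0 + rng.exponential(0.5, size=2000), index=idx, name="mag")

        # Candlestick per calendar year: open/high/low/close of magnitude; spread = high - low.
        grp = cat.groupby(cat.index.year)
        ohlc = grp.agg(open="first", high="max", low="min", close="last")
        ohlc["spread"] = ohlc["high"] - ohlc["low"]

        # Bollinger Bands on the yearly close (20-period rolling window, +/- 2 sigma).
        window = 20
        mid = ohlc["close"].rolling(window).mean()
        sd = ohlc["close"].rolling(window).std()
        ohlc["bb_upper"], ohlc["bb_lower"] = mid + 2 * sd, mid - 2 * sd
        ohlc["bb_width"] = 4 * sd                      # band width; narrowing = convergence

        print(ohlc.tail())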

  12. Earthquake clustering in modern seismicity and its relationship with strong historical earthquakes around Beijing, China

    Science.gov (United States)

    Wang, Jian; Main, Ian G.; Musson, Roger M. W.

    2017-11-01

    Beijing, China's capital city, is located in a typical intraplate seismic belt, with relatively high-quality instrumental catalogue data available since 1970. The Chinese historical earthquake catalogue contains six strong historical earthquakes of Ms ≥ 6 around Beijing, the earliest in 294 AD. This poses a significant potential hazard to one of the most densely populated and economically active parts of China. In some intraplate areas, persistent clusters of events associated with historical events can occur over centuries, for example, the ongoing sequence in the New Madrid zone of the eastern US. Here we examine the evidence for such persistent clusters around Beijing. We introduce a metric known as the 'seismic density index' that quantifies the degree of clustering of seismic energy release. For a given map location, this multi-dimensional index depends on the number of events, their magnitudes, and the distances to the locations of the surrounding population of earthquakes. We apply the index to modern instrumental catalogue data between 1970 and 2014, and identify six clear candidate zones. We then compare these locations to earthquake epicentre and seismic intensity data for the six largest historical earthquakes. Each candidate zone contains one of the six historical events, and the location of peak intensity is within 5 km or so of the reported epicentre in five of these cases. In one case, the great Ms 8 earthquake of 1679, the peak is closer to the area of strongest shaking (Intensity XI or more) than the reported epicentre. The present-day event rates are similar to those predicted by the modified Omori law, but there is no evidence of ongoing decay in event rates. Accordingly, the index is more likely to be picking out the location of persistent weaknesses in the lithosphere. Our results imply that zones of high seismic density index could be used in principle to indicate the location of unrecorded historical or palaeoseismic events, in China and
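    The exact functional form of the index is not given in the record, so the sketch below shows one plausible realization of the idea, clearly an assumption rather than the authors' definition: a kernel-smoothed map of seismic energy release in which each event contributes an energy proxy 10^(1.5 M) weighted by a Gaussian function of its distance from the map location.

        import numpy as np

        def seismic_density_index(grid_xy, ev_xy, ev_mag, bandwidth_km=10.0):
            """Kernel-smoothed density of seismic energy release at each grid point (relative units).
            One plausible realization of a 'seismic density index'; not the authors' definition."""
            energy = 10.0 ** (1.5 * ev_mag)                        # energy/moment proxy per event
            d = np.linalg.norm(grid_xy[:, None, :] - ev_xy[None, :, :], axis=2)
            weights = np.exp(-0.5 * (d / bandwidth_km) ** 2)       # Gaussian distance kernel
            return (weights * energy).sum(axis=1)

        rng = np.random.default_rng(3)
        events = rng.uniform(0, 100, size=(500, 2))                # hypothetical epicentres, km
        mags = 2.0 + rng.exponential(1 / np.log(10), 500)          # hypothetical magnitudes
        xs, ys = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
        grid = np.column_stack([xs.ravel(), ys.ravel()])
        index = seismic_density_index(grid, events, mags)
        print("highest-index grid point:", grid[np.argmax(index)])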

  13. Estimated airborne release of plutonium from Atomics International's Nuclear Materials Development Facility in the Santa Susana site, California, as a result of postulated damage from severe wind and earthquake hazard

    International Nuclear Information System (INIS)

    Mishima, J.; Ayer, J.E.

    1981-09-01

    The potential mass of airborne releases of plutonium (source term) that could result from wind and seismic damage is estimated for the Atomics International Company's Nuclear Materials Development Facility (NMDF) at the Santa Susana site in California. The postulated source terms will be useful as the basis for estimating the potential dose to the maximum exposed individual by inhalation and to the total population living within a prescribed radius of the site. The respirable fraction of airborne particles is thus the principal concern. The estimated source terms are based on the damage ratio and the potential airborne releases if all enclosures suffer particular levels of damage. In an attempt to provide a realistic range of potential source terms that covers most normal processing conditions, a best estimate bounded by upper and lower limits is provided. The range of source terms is calculated by combining high, best-estimate, and low damage ratios, based on the fraction of enclosures suffering crush or perforation, with the airborne release from enclosures based upon upper-limit, average, and lower-limit inventories of dispersible materials at risk. Two throughput levels are considered. The factors used to evaluate the fractional airborne release of materials and the exchange rates between enclosed and exterior atmospheres are discussed. The postulated damage and source terms are discussed for wind and earthquake hazard scenarios in order of their increasing severity
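    The bounding arithmetic described reduces to a product of factors; the sketch below is a generic illustration of that calculation (inventory at risk x damage ratio x airborne release fraction x respirable fraction), with every numerical value a placeholder rather than NMDF data.

        def source_term(mar_g, damage_ratio, arf, respirable_fraction):
            """Respirable airborne source term (g) = material at risk x DR x ARF x RF."""
            return mar_g * damage_ratio * arf * respirable_fraction

        # Placeholder lower / best-estimate / upper values for each factor (not facility data).
        mar = {"lower": 50.0, "best": 200.0, "upper": 500.0}    # dispersible inventory at risk, g
        dr  = {"lower": 0.05, "best": 0.2,   "upper": 1.0}      # fraction of enclosures damaged
        arf = {"lower": 1e-4, "best": 1e-3,  "upper": 1e-2}     # airborne release fraction
        rf  = {"lower": 0.1,  "best": 0.5,   "upper": 1.0}      # respirable fraction

        for level in ("lower", "best", "upper"):
            st = source_term(mar[level], dr[level], arf[level], rf[level])
            print("%5s estimate: %.3g g respirable airborne" % (level, st))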

  14. Gas release during salt-well pumping: Model predictions and laboratory validation studies for soluble and insoluble gases

    International Nuclear Information System (INIS)

    Peurrung, L.M.; Caley, S.M.; Gauglitz, P.A.

    1997-08-01

    The Hanford Site has 149 single-shell tanks (SSTs) containing radioactive wastes that are complex mixes of radioactive and chemical products. Of these, 67 are known or suspected to have leaked liquid from the tanks into the surrounding soil. Salt-well pumping, or interim stabilization, is a well-established operation for removing drainable interstitial liquid from SSTs. The overall objective of this ongoing study is to develop a quantitative understanding of the release rates and cumulative releases of flammable gases from SSTs as a result of salt-well pumping. The current study is an extension of the previous work reported by Peurrung et al. (1996). The first objective of this current study was to conduct laboratory experiments to quantify the release of soluble and insoluble gases. The second was to determine experimentally the role of characteristic waste heterogeneities on the gas release rates. The third objective was to evaluate and validate the computer model STOMP (Subsurface Transport over Multiple Phases) used by Peurrung et al. (1996) to predict the release of both soluble (typically ammonia) and insoluble gases (typically hydrogen) during and after salt-well pumping. The fourth and final objective of the current study was to predict the gas release behavior for a range of typical tank conditions and actual tank geometry. In these models, the authors seek to include all the pertinent salt-well pumping operational parameters and a realistic range of physical properties of the SST wastes. For predicting actual tank behavior, two-dimensional (2-D) simulations were performed with a representative 2-D tank geometry

  15. Earthquake precursory events around epicenters and local active faults; the cases of two inland earthquakes in Iran

    Science.gov (United States)

    Valizadeh Alvan, H.; Mansor, S.; Haydari Azad, F.

    2012-12-01

    The possibility of earthquake prediction in the frame of several days to a few minutes before its occurrence has recently stirred interest among researchers. Scientists believe that the new theories and explanations of the mechanism of this natural phenomenon are trustworthy and can be the basis of future prediction efforts. During the last thirty years, experimental research has identified some pre-earthquake events which are now recognized as confirmed warning signs (precursors) of past known earthquakes. With the advances in in-situ measurement devices and data analysis capabilities and the emergence of satellite-based data collectors, monitoring the earth's surface is now a regular activity. Data providers are supplying researchers from all over the world with high quality and validated imagery and non-imagery data. Surface Latent Heat Flux (SLHF), or the amount of energy exchange in the form of water vapor between the earth's surface and the atmosphere, has frequently been reported as an earthquake precursor during the past years. The stress accumulated in the earth's crust during the preparation phase of earthquakes is said to be the main cause of temperature anomalies weeks to days before the main event and subsequent shakes. Chemical and physical interactions in the presence of underground water lead to higher water evaporation prior to inland earthquakes. On the other hand, the leak of radon gas that occurs as rocks break during earthquake preparation causes the formation of airborne ions and higher Air Temperature (AT) prior to the main event. Although co-analysis of direct and indirect observations of precursory events is considered a promising method for future successful earthquake prediction, without proper and thorough knowledge about the geological setting, atmospheric factors and geodynamics of earthquake-prone regions we will not be able to identify anomalies due to seismic activity in the earth's crust. Active faulting is a key factor in identification of the

  16. New streams and springs after the 2014 Mw6.0 South Napa earthquake.

    Science.gov (United States)

    Wang, Chi-Yuen; Manga, Michael

    2015-07-09

    Many streams and springs, which were dry or nearly dry before the 2014 Mw6.0 South Napa earthquake, started to flow after the earthquake. A United States Geological Survey stream gauge also registered a coseismic increase in discharge. Public interest was heightened by a state of extreme drought in California. Since the new flows were not contaminated by pre-existing surface water, their composition allowed unambiguous identification of their origin. Following the earthquake we repeatedly surveyed the new flows, collecting data to test hypotheses about their origin. We show that the new flows originated from groundwater in nearby mountains released by the earthquake. The estimated total amount of new water is ~10^6 m^3, about 1/40 of the annual water use in the Napa-Sonoma area. Our model also makes a testable prediction of a post-seismic decrease of seismic velocity in the shallow crust of the affected region.

  17. The threat of silent earthquakes

    Science.gov (United States)

    Cervelli, Peter

    2004-01-01

    Not all earthquakes shake the ground. The so-called silent types are forcing scientists to rethink their understanding of the way quake-prone faults behave. In rare instances, silent earthquakes that occur along the flanks of seaside volcanoes may cascade into monstrous landslides that crash into the sea and trigger towering tsunamis. Silent earthquakes that take place within fault zones created by one tectonic plate diving under another may increase the chance of ground-shaking shocks. In other locations, however, silent slip may decrease the likelihood of destructive quakes, because it releases stress along faults that might otherwise be ready to snap.

  18. Analog earthquakes

    International Nuclear Information System (INIS)

    Hofmann, R.B.

    1995-01-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository

  19. Multi-component observation in deep boreholes, and its applications to earthquake prediction research and rock mechanics

    International Nuclear Information System (INIS)

    Ishii, Hiroshi

    2014-01-01

    The Tono Research Institute of Earthquake Science (TRIES) has developed a multicomponent instrument that can be operated in deep boreholes (e.g., those one km in depth). It is equipped with stress meters, strain meters, tilt meters, seismometers, magnetometers, and thermometers; in addition, these sensors can be arbitrarily combined. The stress meters, which were developed recently, can observe stress and strain; in the future, data obtained from these sensors will offer new information on seismology and rock mechanics. A typical probe is 12 cm in diameter, 7.8 m in total length, and 290 kg in total weight. It consists of many instruments connected in tandem. (authors)

  20. Deployable Plume and Aerosol Release Prediction and Tracking System. Nuclear Non-Proliferation Task 1. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Kleppe, John; Norris, William; Etezadi, Mehdi

    2006-07-19

    This contract was awarded in response to a proposal in which a deployable plume and aerosol release prediction and tracking system would be designed, fabricated, and tested. The system would gather real-time atmospheric data and input it into a real-time atmospheric model that could be used for plume prediction and tracking. The system would be able to be quickly deployed by aircraft to points of interest or positioned for deployment by vehicles. The system would provide three-dimensional (u, v, and w) wind vector data, inversion height measurements, surface wind information, classical weather station data, and solar radiation. The on-board real-time computer model would provide predictions of the behavior of plumes and released aerosols.

  1. Mechanistic prediction of fission product release under normal and accident conditions: key uncertainties that need better resolution

    International Nuclear Information System (INIS)

    Rest, J.

    1983-09-01

    A theoretical model has been used for predicting the behavior of fission gas and volatile fission products (VFPs) in UO2-based fuels during steady-state and transient conditions. This model represents an attempt to develop an efficient predictive capability for the full range of possible reactor operating conditions. Fission products released from the fuel are assumed to reach the fuel surface by successively diffusing (via atomic and gas-bubble mobility) from the grains to grain faces and then to the grain edges, where the fission products are released through a network of interconnected tunnels of fission-gas-induced and fabricated porosity. The model provides for a multi-region calculation and uses only one size class to characterize a distribution of fission gas bubbles

  2. Accuracy of the paracetamol-aminotransferase multiplication product to predict hepatotoxicity in modified-release paracetamol overdose.

    Science.gov (United States)

    Wong, Anselm; Sivilotti, Marco L A; Graudins, Andis

    2017-06-01

    The paracetamol-aminotransferase multiplication product (APAP × ALT) is a risk predictor of hepatotoxicity that is somewhat independent of time and type of ingestion. However, its accuracy following ingestion of modified-release formulations is not known, as the product has been derived and validated after immediate-release paracetamol overdoses. The aim of this retrospective cohort study was to evaluate the accuracy of the multiplication product to predict hepatotoxicity in a cohort of patients with modified-release paracetamol overdose. We assessed all patients with modified-release paracetamol overdose presenting to our hospital network from October 2009 to July 2016. Ingestion of a modified-release formulation was identified by patient self-report or retrieval of the original container. Hepatotoxicity was defined as peak alanine aminotransferase ≥1000 IU/L, and acute liver injury (ALI) as a doubling of baseline ALT to more than 50 IU/L. Of 1989 paracetamol overdose presentations, we identified 73 modified-release paracetamol exposures treated with acetylcysteine. Five patients developed hepatotoxicity, including one who received acetylcysteine within eight hours of an acute ingestion. No patient with an initial multiplication product below the lower cut-point developed hepatotoxicity. In modified-release paracetamol overdose treated with acetylcysteine, the paracetamol-aminotransferase multiplication product demonstrated similar accuracy and temporal profile to previous reports involving mostly immediate-release formulations. Above a cut-point of 10,000 mg/L × IU/L, it was very strongly associated with the development of acute liver injury and hepatotoxicity, especially when calculated more than eight hours post-ingestion. When below 1500 mg/L × IU/L the likelihood of developing hepatotoxicity was very low. Persistently high serial multiplication product calculations were associated with the greatest risk of hepatotoxicity.
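    The calculation itself is a simple product of two laboratory values; the sketch below applies the cut-points quoted above (below 1,500 mg/L x IU/L, very low likelihood of hepatotoxicity; above 10,000 mg/L x IU/L, strongly associated with liver injury). The patient values are hypothetical and the sketch is illustrative only, not clinical guidance.

        def multiplication_product(paracetamol_mg_per_l, alt_iu_per_l):
            """Paracetamol-aminotransferase multiplication product, in mg/L x IU/L."""
            return paracetamol_mg_per_l * alt_iu_per_l

        def risk_band(product):
            # Cut-points taken from the abstract; illustrative classification only.
            if product < 1500:
                return "very low likelihood of hepatotoxicity"
            if product > 10000:
                return "strongly associated with acute liver injury / hepatotoxicity"
            return "intermediate - interpret with serial measurements"

        for apap, alt in [(30.0, 40.0), (80.0, 60.0), (150.0, 900.0)]:   # hypothetical patients
            p = multiplication_product(apap, alt)
            print("APAP x ALT = %8.0f mg/L x IU/L -> %s" % (p, risk_band(p)))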

  3. Earthquake recurrence and magnitude and seismic deformation of the northwestern Okhotsk plate, northeast Russia

    Science.gov (United States)

    Hindle, D.; Mackey, K.

    2011-02-01

    Recorded seismicity from the northwestern Okhotsk plate, northeast Asia, is currently insufficient to account for the predicted slip rates along its boundaries due to plate tectonics. However, the magnitude-frequency relationship for earthquakes from the region suggests that larger earthquakes are possible in the future and that events of ~Mw 7.5, which should occur every ~100-350 years, would account for almost all the slip of the plate along its boundaries due to Eurasia-North America convergence. We use models for seismic slip distribution along the bounding faults of Okhotsk to conclude that relatively little aseismic strain release is occurring and that larger future earthquakes are likely in the region. Our models broadly support the idea of a single Okhotsk plate, with the large majority of tectonic strain released along its boundaries.
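    The comparison rests on a standard seismic moment budget: the moment accumulation rate on a boundary fault is the shear modulus times the locked fault area times the slip rate, and the recurrence interval of an earthquake of a given Mw is the moment of that event (Hanks-Kanamori scaling) divided by the accumulation rate. The sketch below is an order-of-magnitude illustration with hypothetical fault dimensions and slip rate, not the authors' slip-distribution models.

        import numpy as np

        MU = 3.0e10  # crustal shear modulus, Pa

        def moment_from_mw(mw):
            """Hanks-Kanamori scaling: M0 (N m) = 10^(1.5 Mw + 9.05)."""
            return 10 ** (1.5 * mw + 9.05)

        def recurrence_years(mw, fault_length_km, locking_depth_km, slip_rate_mm_per_yr):
            """Years needed to accumulate the moment of one Mw event on a fully coupled fault."""
            area = fault_length_km * 1e3 * locking_depth_km * 1e3          # m^2
            moment_rate = MU * area * slip_rate_mm_per_yr * 1e-3           # N m per year
            return moment_from_mw(mw) / moment_rate

        # Hypothetical boundary segment: 1000 km long, 15 km locking depth, 3 mm/yr slip rate.
        print("Mw 7.5 recurrence ~ %.0f years" % recurrence_years(7.5, 1000.0, 15.0, 3.0))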

  4. Ethanol Exposure History and Alcoholic Reward Differentially Alter Dopamine Release in the Nucleus Accumbens to a Reward-Predictive Cue.

    Science.gov (United States)

    Fiorenza, Amanda M; Shnitko, Tatiana A; Sullivan, Kaitlin M; Vemuru, Sudheer R; Gomez-A, Alexander; Esaki, Julie Y; Boettiger, Charlotte A; Da Cunha, Claudio; Robinson, Donita L

    2018-06-01

    Conditioned stimuli (CS) that predict reward delivery acquire the ability to induce phasic dopamine release in the nucleus accumbens (NAc). This dopamine release may facilitate conditioned approach behavior, which often manifests as approach to the site of reward delivery (called "goal-tracking") or to the CS itself (called "sign-tracking"). Previous research has linked sign-tracking in particular to impulsivity and drug self-administration, and addictive drugs may promote the expression of sign-tracking. Ethanol (EtOH) acutely promotes phasic release of dopamine in the accumbens, but it is unknown whether an alcoholic reward alters dopamine release to a CS. We hypothesized that Pavlovian conditioning with an alcoholic reward would increase dopamine release triggered by the CS and subsequent sign-tracking behavior. Moreover, we predicted that chronic intermittent EtOH (CIE) exposure would promote sign-tracking while acute administration of naltrexone (NTX) would reduce it. Rats received 14 doses of EtOH (3 to 5 g/kg, intragastric) or water followed by 6 days of Pavlovian conditioning training. Rewards were a chocolate solution with or without 10% (w/v) alcohol. We used fast-scan cyclic voltammetry to measure phasic dopamine release in the NAc core in response to the CS and the rewards. We also determined the effect of NTX (1 mg/kg, subcutaneous) on conditioned approach. Both CIE and alcoholic reward, individually but not together, were associated with greater dopamine release to the CS than control conditions. However, this increase in dopamine release was not linked to greater sign-tracking, as both CIE and alcoholic reward shifted conditioned approach from sign-tracking behavior to goal-tracking behavior. However, they both also increased sensitivity to NTX, which reduced goal-tracking behavior. While a history of EtOH exposure or alcoholic reward enhanced dopamine release to a CS, they did not promote sign-tracking under the current conditions. These findings are

  5. Gas release during salt well pumping: model predictions and comparisons to laboratory experiments

    International Nuclear Information System (INIS)

    Peurrung, L.M.; Caley, S.M.; Bian, E.Y.; Gauglitz, P.A.

    1996-09-01

    The Hanford Site has 149 single-shell tanks (SSTs) containing radioactive wastes that are complex mixes of radioactive and chemical products. Some of these wastes are known to generate mixtures of flammable gases, including hydrogen, nitrous oxide, and ammonia. Nineteen of these SSTs have been placed on the Flammable Gas Watch List (FGWL) because they are known or suspected, in all but one case, to retain these flammable gases. Salt well pumping to remove the interstitial liquid from SSTs is expected to cause the release of much of the retained gas, posing a number of safety concerns. Research at the Pacific Northwest National Laboratory (PNNL) has sought to quantify the release of flammable gases during salt well pumping operations. This study is being conducted for Westinghouse Hanford Company as part of the PNNL Flammable Gas Project. Understanding and quantifying the physical mechanisms and waste properties that govern gas release during salt well pumping will help to resolve the associated safety issues

  6. Modeling, Forecasting and Mitigating Extreme Earthquakes

    Science.gov (United States)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  7. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
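    A minimal sketch of the kind of comparison described, assuming earthquake counts per time interval are already available as a list: Poisson and negative-binomial distributions are fitted (here by the method of moments, for brevity) and their skewness and kurtosis are compared with the empirical values. The counts below are synthetic placeholders for numbers taken from a catalogue.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Placeholder annual earthquake counts (overdispersed, as clustered catalogues tend to be).
        counts = rng.negative_binomial(n=5, p=0.2, size=40)

        mean, var = counts.mean(), counts.var(ddof=1)

        lam = mean                               # Poisson fit: lambda = sample mean
        r = mean ** 2 / (var - mean)             # NBD moment fit (assumes var > mean): var = mu + mu^2 / r
        p = r / (r + mean)

        emp_skew, emp_kurt = stats.skew(counts), stats.kurtosis(counts)   # excess kurtosis
        poi_skew, poi_kurt = stats.poisson.stats(lam, moments="sk")
        nbd_skew, nbd_kurt = stats.nbinom.stats(r, p, moments="sk")

        print("empirical   skew %5.2f  kurtosis %5.2f" % (emp_skew, emp_kurt))
        print("Poisson     skew %5.2f  kurtosis %5.2f" % (poi_skew, poi_kurt))
        print("neg. binom. skew %5.2f  kurtosis %5.2f" % (nbd_skew, nbd_kurt))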

  8. GRSIS program to predict fission gas release and swelling behavior of metallic fast reactor fuel

    International Nuclear Information System (INIS)

    Lee, Chan Bock; Lee, Byung Ho; Nam, Cheol; Sohn, Dong Seong

    1999-03-01

    A mechanistic model of fission gas release and swelling for the U-(Pu)-Zr metallic fuel in the fast reactor, GRSIS (Gas Release and Swelling in ISotropic fuel matrix), was developed. Fission gas bubbles are assumed to nucleate isotropically from the gas atoms in the metallic fuel matrix, since they can nucleate at both the grain boundaries and the phase boundaries, which are randomly distributed inside the grain. Bubbles can grow to a larger size by gas diffusion and coalescence with other bubbles, so they are classified into three classes depending upon their sizes. When bubble swelling reaches the threshold value, bubbles become interconnected with each other, forming open channels to the external free space (the open bubbles), and the fission gases inside the interconnected open bubbles are then released instantaneously. During the irradiation, fission gases are released through the open bubbles. The GRSIS model can take into account fuel gap closure by fuel bubble swelling. When the fuel gap is closed by fuel swelling, the contact pressure between fuel and cladding in relation to the bubble swelling and temperature is calculated. The GRSIS model was validated by comparison with the irradiation test results of U-(Pu)-Zr fuels at ANL as well as by parametric studies of the key variables in the model. (author). 13 refs., 1 tab., 22 figs

  9. GRSIS program to predict fission gas release and swelling behavior of metallic fast reactor fuel

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chan Bock; Lee, Byung Ho; Nam, Cheol; Sohn, Dong Seong

    1999-03-01

    A mechanistic model of fission gas release and swelling for the U-(Pu)-Zr metallic fuel in the fast reactor, GRSIS (Gas Release and Swelling in ISotropic fuel matrix), was developed. Fission gas bubbles are assumed to nucleate isotropically from the gas atoms in the metallic fuel matrix, since they can nucleate at both the grain boundaries and the phase boundaries, which are randomly distributed inside the grain. Bubbles can grow to a larger size by gas diffusion and coalescence with other bubbles, so they are classified into three classes depending upon their sizes. When bubble swelling reaches the threshold value, bubbles become interconnected with each other, forming open channels to the external free space (the open bubbles), and the fission gases inside the interconnected open bubbles are then released instantaneously. During the irradiation, fission gases are released through the open bubbles. The GRSIS model can take into account fuel gap closure by fuel bubble swelling. When the fuel gap is closed by fuel swelling, the contact pressure between fuel and cladding in relation to the bubble swelling and temperature is calculated. The GRSIS model was validated by comparison with the irradiation test results of U-(Pu)-Zr fuels at ANL as well as by parametric studies of the key variables in the model. (author). 13 refs., 1 tab., 22 figs.

  10. Refresher Course on Physics of Earthquakes -98 ...

    Indian Academy of Sciences (India)

    The objective of this course is to help teachers gain an understanding of the earthquake phenomenon and the physical processes involved in its genesis, as well as of the earthquake waves which propagate the energy released by the earthquake rupture outward from the source. The Course will begin with mathematical ...

  11. Physics of Earthquake Rupture Propagation

    Science.gov (United States)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  12. Predicting the liquefaction phenomena from shear velocity profiling: Empirical approach to 6.3 Mw, May 2006 Yogyakarta earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Hartantyo, Eddy, E-mail: hartantyo@ugm.ac.id [PhD student, Physics Department, FMIPA, UGM. Sekip Utara Yogyakarta 55281 Indonesia (Indonesia); Brotopuspito, Kirbani S.; Sismanto; Waluyo [Geophysics Laboratory, FMIPA, Universitas Gadjah Mada, Sekip Utara Yogyakarta 55281 (Indonesia)

    2015-04-24

    Liquefaction phenomena were reported after a shocking 6.5Mw earthquake hit Yogyakarta province in the morning of 27 May 2006. Several researchers have reported the damage, casualties, and soil failure due to the quake, including mapping and analyzing the liquefaction phenomena, most of them based on the SPT test. This study tries to map liquefaction susceptibility by means of shear velocity profiling using modified Multichannel Analysis of Surface Waves (MASW). This paper is a preliminary report using only several measured MASW points. The study built an 8-channel seismic data logger with 4.5 Hz geophones for this purpose. Several different offsets were used to record the high and low frequencies of surface waves. The phase-velocity diagrams were stacked in the frequency domain rather than in the time domain, for clearer and easier dispersion curve picking. All codes were implemented in Matlab. From these procedures, a shear velocity profile was obtained beneath each geophone spread. By mapping the minimum depth of the shallow water table, calculating PGA with soil classification, using an empirical formula for saturated soil weight from the shear velocity profile, and calculating CRR and CSR at every depth, the liquefaction characteristics can be identified in every layer. From the acquired data, a liquefaction potential at some depths below the water table was obtained.
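    The final CRR/CSR step described above can be sketched as follows (a simplified illustration, not the authors' Matlab workflow): the cyclic stress ratio follows the Seed-Idriss simplified procedure, and the cyclic resistance ratio is estimated from overburden-corrected shear-wave velocity using the Andrus-Stokoe clean-sand relation; the depths, unit weight, Vs values, water table and PGA below are placeholders.

        import numpy as np

        PA = 100.0  # atmospheric pressure, kPa

        def csr(a_max_g, sigma_v, sigma_v_eff, depth_m):
            """Seed-Idriss simplified cyclic stress ratio."""
            rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
            return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

        def crr_from_vs(vs, sigma_v_eff, vs1_star=215.0):
            """Andrus-Stokoe (2000) cyclic resistance ratio from overburden-corrected Vs (clean sand)."""
            vs1 = min(vs * (PA / sigma_v_eff) ** 0.25, vs1_star - 0.1)
            return 0.022 * (vs1 / 100.0) ** 2 + 2.8 * (1.0 / (vs1_star - vs1) - 1.0 / vs1_star)

        # Placeholder profile: unit weight 18 kN/m3, water table at 2 m depth, PGA 0.3 g.
        gamma, gamma_w, water_table, a_max_g = 18.0, 9.81, 2.0, 0.3
        for depth, vs in [(3.0, 140.0), (6.0, 170.0), (9.0, 210.0)]:       # depth (m), Vs (m/s)
            sigma_v = gamma * depth
            sigma_v_eff = sigma_v - gamma_w * max(depth - water_table, 0.0)
            fs = crr_from_vs(vs, sigma_v_eff) / csr(a_max_g, sigma_v, sigma_v_eff, depth)
            verdict = "liquefiable" if fs < 1.0 else "not liquefiable"
            print("z = %.0f m, Vs = %.0f m/s, factor of safety = %.2f -> %s" % (depth, vs, fs, verdict))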

  13. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  14. The mechanism of earthquake

    Science.gov (United States)

    Lu, Kunquan; Cao, Zexian; Hou, Meiying; Jiang, Zehui; Shen, Rong; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    earthquakes and deep-focus earthquakes are the energy release caused by the slip or flow of rocks following a jamming-unjamming transition. (4) The energetics and impending precursors of earthquakes: The energy of an earthquake is the kinetic energy released from the jamming-unjamming transition. Calculation shows that the kinetic energy of seismic rock sliding is comparable with the total work required for the rocks' shear failure and for overcoming frictional resistance. There is thus no heat-flow paradox. Meanwhile, some valuable seismic precursors are likely to be identified by observing the accumulation of additional tectonic forces, local geological changes, as well as the effects of rock state changes, etc.

  15. Predicting Mineral N Release during Decomposition of Organic Wastes in Soil by Use of the SOILNNO Model

    International Nuclear Information System (INIS)

    Sogn, T.A.; Haugen, L.E.

    2011-01-01

    In order to predict the mineral N release associated with the use of organic waste as fertilizer in agricultural plant production, the adequacy of the SOILNNO model has been evaluated. The original idea was that the model, calibrated to data from simple incubation experiments, could predict the mineral N release from organic waste products used as N fertilizer on agricultural land. First, the model was calibrated to mineral N data obtained in a laboratory experiment where different organic wastes were added to soil and incubated at 15 °C for 8 weeks. Secondly, the calibrated model was tested using NO3-leaching data from soil columns with barley growing in 4 different soil types, with added organic waste, exposed to natural climatic conditions during three growing seasons. The SOILNNO model reproduced relatively well the NO3 leaching from some of the soils included in the outdoor experiment, but failed to reproduce others. Use of the calibrated model often led to underestimation of the observed NO3 leaching. To achieve a satisfactory simulation of the NO3 leaching, recalibration of the model had to be carried out. Thus, SOILNNO calibrated to data from simple incubation experiments in the laboratory could not directly be used as a tool to predict the N leaching following organic waste application in more natural agronomic plant production systems. The results emphasised the need for site- and system-specific data for model calibration before using a model for predictive purposes related to the fertilizer N value of organic wastes applied to agricultural land.

  16. Hierarchical Theoretical Methods for Understanding and Predicting Anisotropic Thermal Transport Release in Rocket Propellant Formulations

    Science.gov (United States)

    2016-12-08

    Formalism and constitutive theory are used to describe reactants and their products in a way that reflects the underlying material mechanics and physics; calculations are carried out for energetic materials to predict thermo-mechanical and transport properties, phase diagrams, and interfacial structure. Mesoscopic models

  17. Prediction of Fission Product Release during the LOFC Experiments at the HTTR

    International Nuclear Information System (INIS)

    Shi, D.; Xhonneux, A.; Verfondern, K.; Ueta, S.; Allelein, H.-J.

    2014-01-01

    Demonstration tests were conducted using the High Temperature Engineering Test Reactor (HTTR) in Oarai, Japan, to confirm the safety of HTGR technologies and to verify that the expected physical phenomena occur under given conditions. As part of the OECD-directed LOFC ("loss of forced cooling") project, a series of three tests at the HTTR has been planned, with tripping of all gas circulators while deactivating all reactor reactivity control so as to disallow reactor scram due to the abnormal reduction of primary coolant flow rate. The tests fall into the category of anticipated transients without scram (ATWS), with the occurrence of reactor recriticality. They serve the important purpose of providing a valuable database for the validation of computer models regarding neutronics, heat transfer and fluid dynamics, fuel performance, and fission product transport and release behavior in HTGRs. The Source Term Analysis Code System (STACY) is a new code development at the Research Center Jülich encompassing the original verified and validated computer models for simulating fission product transport and release. For verification of the modernized and extended version, it was assured that results obtained with the original tools could be reproduced. One of the new features of STACY is its ability to also treat fuel compacts of (full) cylindrical or annular shape and a complete prismatic block reactor core, respectively, provided sufficient input data are available. The paper describes the new STACY tool and presents the results of fission product behavior in the HTTR core under the LOFC test conditions. Calculations are based on time-dependent neutronics and fluid dynamics results obtained with the Serpent and MGT models. (author)

  18. A mobile dose prediction system based on artificial neural networks for NPP emergencies with radioactive material releases

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Claudio M.N.A.; Schirru, Roberto; Gomes, Kelcio J.; Cunha, José Luiz, E-mail: cmnap@ien.gov.br, E-mail: schirru@lmp.ufrj.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Coordenacao dos Programas de Pos-Graduacao em Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2017-11-01

    This work presents the approach of a mobile dose prediction system for NPP emergencies with nuclear material release. The objective is to provide extra support to field teams' decisions when plant information systems are not available. However, predicting doses due to atmospheric dispersion of radionuclides generally requires the execution of complex and computationally intensive physical models. In order to allow such predictions to be made using limited computational resources such as mobile phones, the use of artificial neural networks (ANN) previously trained (offline) with data generated by precise simulations using the NPP atmospheric dispersion system is proposed. Typical situations for each postulated accident and respective source terms, as well as a wide range of meteorological conditions, have been considered. As a first step, several ANN architectures have been investigated in order to evaluate their ability for dose prediction in hypothetical scenarios in the vicinity of the CNAAA Brazilian NPP, in Angra dos Reis, Brazil. As a result, good generalization and a correlation coefficient of 0.99 were achieved for a validation data set (untrained patterns). Then, selected ANNs have been coded in the Java programming language to run as an Android application aimed at plotting the spatial dose distribution on a map. In this paper, the general architecture of the proposed system is described; numerical results and comparisons between investigated ANN architectures are discussed; the performance and limitations of running the application on a commercial mobile phone are evaluated; and possible improvements and future works are pointed out. (author)
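    A minimal sketch of the overall scheme (offline training on precomputed dispersion results, then inexpensive evaluation on a device), using scikit-learn rather than the Java implementation described; the feature set, the synthetic training targets and the network size are hypothetical placeholders, and the real system is trained on simulations from the plant's atmospheric dispersion system.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)

        # Hypothetical offline training set: [wind speed (m/s), wind direction (deg),
        # stability class (1..6), release rate (Bq/s), receptor distance (km)] -> log10(dose rate).
        n = 5000
        X = np.column_stack([
            rng.uniform(1, 15, n),
            rng.uniform(0, 360, n),
            rng.integers(1, 7, n),
            rng.uniform(1e8, 1e12, n),
            rng.uniform(0.2, 20, n),
        ])
        # Synthetic plume-like target, standing in for doses from the dispersion simulations.
        y = np.log10(X[:, 3]) - 2 * np.log10(X[:, 4] + 0.1) - 0.2 * X[:, 0] + 0.3 * X[:, 2]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
        model.fit(X_tr, y_tr)
        print("validation R^2:", round(model.score(X_te, y_te), 3))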

  19. Development of a mobile dose prediction system based on artificial neural networks for NPP emergencies with radioactive material releases

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Schirru, Roberto; Gomes, Kelcio J.; Cunha, José L.

    2017-01-01

    This work presents the approach of a mobile dose prediction system for NPP emergencies with nuclear material release. The objective is to provide extra support to field teams' decisions when plant information systems are not available. However, predicting doses due to atmospheric dispersion of radionuclides generally requires the execution of complex and computationally intensive physical models. In order to allow such predictions to be made using limited computational resources such as mobile phones, the use of artificial neural networks (ANN) previously trained (offline) with data generated by precise simulations using the NPP atmospheric dispersion system is proposed. Typical situations for each postulated accident and respective source terms, as well as a wide range of meteorological conditions, have been considered. As a first step, several ANN architectures have been investigated in order to evaluate their ability for dose prediction in hypothetical scenarios in the vicinity of the CNAAA Brazilian NPP, in Angra dos Reis, Brazil. As a result, good generalization and a correlation coefficient of 0.99 were achieved for a validation data set (untrained patterns). Then, selected ANNs have been coded in the Java programming language to run as an Android application aimed at plotting the spatial dose distribution on a map. In this paper, the general architecture of the proposed system is described; numerical results and comparisons between investigated ANN architectures are discussed; the performance and limitations of running the application on a commercial mobile phone are evaluated; and possible improvements and future works are pointed out.

  20. A mobile dose prediction system based on artificial neural networks for NPP emergencies with radioactive material releases

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Schirru, Roberto; Gomes, Kelcio J.; Cunha, José Luiz

    2017-01-01

    This work presents the approach of a mobile dose prediction system for NPP emergencies involving releases of nuclear material. The objective is to provide extra support to field teams' decisions when plant information systems are not available. However, predicting doses due to atmospheric dispersion of radionuclides generally requires the execution of complex and computationally intensive physical models. To allow such predictions to be made with limited computational resources such as mobile phones, the use of artificial neural networks (ANN), previously trained (offline) with data generated by precise simulations using the NPP atmospheric dispersion system, is proposed. Typical situations for each postulated accident and their respective source terms, as well as a wide range of meteorological conditions, have been considered. As a first step, several ANN architectures were investigated in order to evaluate their ability to predict doses in hypothetical scenarios in the vicinity of the CNAAA Brazilian NPP, in Angra dos Reis, Brazil. As a result, good generalization and a correlation coefficient of 0.99 were achieved for a validation data set (untrained patterns). Then, selected ANNs were coded in the Java programming language to run as an Android application that plots the spatial dose distribution on a map. In this paper, the general architecture of the proposed system is described; numerical results and comparisons between the investigated ANN architectures are discussed; the performance and limitations of running the application on a commercial mobile phone are evaluated; and possible improvements and future work are pointed out. (author)

  1. Predicting lift-off of major self-heating releases under the influence of a building

    International Nuclear Information System (INIS)

    Benodekar, R.W.; Goddard, A.J.H.; Gosman, A.D.

    1985-01-01

    Turbulent flow around a bluff body poses many problems with respect to both numerical and turbulence modelling. The review of numerical studies made indicates the need for refined numerical treatment such as use of higher order differencing schemes. The 2-D predictions made in the present study represent preparatory work aimed in this direction. The 2-D solution procedure described employs a differencing scheme which simultaneously satisfies the requirements of low numerical diffusion and positivity of coefficients. The effects of turbulence are modelled by a variant of the k-epsilon turbulence model incorporating curvature corrections. The governing equations are solved by an efficient pressure-implicit split-operator algorithm known as PISO. Predictions have been compared with experimental data and previous calculations
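
    For reference, the standard (uncorrected) k-epsilon transport equations take roughly the following form; the study itself uses a curvature-corrected variant whose extra terms are not given in the abstract, so the constants quoted here are only the commonly used defaults.

```latex
% Standard k-epsilon transport equations, shown only as a reference point;
% the paper uses a curvature-corrected variant.
\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho u_j k)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]
  + P_k - \rho\varepsilon,
\qquad
\frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho u_j \varepsilon)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k},
\qquad
\mu_t = \rho C_\mu \frac{k^2}{\varepsilon}
```

    with the usual constants C_mu = 0.09, C_1eps = 1.44, C_2eps = 1.92, sigma_k = 1.0 and sigma_eps = 1.3.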

  2. Earthquake forewarning in the Cascadia region

    Science.gov (United States)

    Gomberg, Joan S.; Atwater, Brian F.; Beeler, Nicholas M.; Bodin, Paul; Davis, Earl; Frankel, Arthur; Hayes, Gavin P.; McConnell, Laura; Melbourne, Tim; Oppenheimer, David H.; Parrish, John G.; Roeloffs, Evelyn A.; Rogers, Gary D.; Sherrod, Brian; Vidale, John; Walsh, Timothy J.; Weaver, Craig S.; Whitmore, Paul M.

    2015-08-10

    This report, prepared for the National Earthquake Prediction Evaluation Council (NEPEC), is intended as a step toward improving communications about earthquake hazards between information providers and users who coordinate emergency-response activities in the Cascadia region of the Pacific Northwest. NEPEC charged a subcommittee of scientists with writing this report about forewarnings of increased probabilities of a damaging earthquake. We begin by clarifying some terminology; a “prediction” refers to a deterministic statement that a particular future earthquake will or will not occur. In contrast to the 0- or 100-percent likelihood of a deterministic prediction, a “forecast” describes the probability of an earthquake occurring, which may range from just above 0 to just below 100 percent. The report then considers processes or conditions that might prompt a forewarning, which may include increased rates of M>4 earthquakes on the plate interface north of the Mendocino region.

  3. Earthquake Facts

    Science.gov (United States)

    ... North Dakota, and Wisconsin. The core of the earth was the first internal structural element to be identified. In 1906 R.D. Oldham discovered it from his studies of earthquake records. The inner core is solid, and the outer core is liquid and so does not transmit ...

  4. Understanding Earthquakes

    Science.gov (United States)

    Davis, Amanda; Gray, Ron

    2018-01-01

    December 26, 2004 was one of the deadliest days in modern history, when a 9.3 magnitude earthquake--the third largest ever recorded--struck off the coast of Sumatra in Indonesia (National Centers for Environmental Information 2014). The massive quake lasted at least 10 minutes and devastated the Indian Ocean. The quake displaced an estimated…

  5. Summary of the GK15 ground‐motion prediction equation for horizontal PGA and 5% damped PSA from shallow crustal continental earthquakes

    Science.gov (United States)

    Graizer, Vladimir; Kalkan, Erol

    2016-01-01

    We present a revised ground‐motion prediction equation (GMPE) for computing medians and standard deviations of peak ground acceleration (PGA) and 5% damped pseudospectral acceleration (PSA) response ordinates of the horizontal component of randomly oriented ground motions to be used for seismic‐hazard analyses and engineering applications. This GMPE is derived from the expanded Next Generation Attenuation (NGA)‐West 1 database (see Data and Resources; Chiou et al., 2008). The revised model includes an anelastic attenuation term as a function of quality factor (Q0) to capture regional differences in far‐source (beyond 150 km) attenuation, and a new frequency‐dependent sedimentary‐basin scaling term as a function of depth to the 1.5  km/s shear‐wave velocity isosurface to improve ground‐motion predictions at sites located on deep sedimentary basins. The new Graizer–Kalkan 2015 (GK15) model, developed to be simple, is applicable for the western United States and other similar shallow crustal continental regions in active tectonic environments for earthquakes with moment magnitudes (M) 5.0–8.0, distances 0–250 km, average shear‐wave velocities in the upper 30 m (VS30) 200–1300  m/s, and spectral periods (T) 0.01–5 s. Our aleatory variability model captures interevent (between‐event) variability, which decreases with magnitude and increases with distance. The mixed‐effect residuals analysis reveals that the GK15 has no trend with respect to the independent predictor parameters. Compared to our 2007–2009 GMPE, the PGA values are very similar, whereas spectral ordinates predicted are larger at T<0.2  s and they are smaller at longer periods.

  6. Understanding dynamic friction through spontaneously evolving laboratory earthquakes.

    Science.gov (United States)

    Rubino, V; Rosakis, A J; Lapusta, N

    2017-06-29

    Friction plays a key role in how ruptures unzip faults in the Earth's crust and release waves that cause destructive shaking. Yet dynamic friction evolution is one of the biggest uncertainties in earthquake science. Here we report on novel measurements of evolving local friction during spontaneously developing mini-earthquakes in the laboratory, enabled by our ultrahigh speed full-field imaging technique. The technique captures the evolution of displacements, velocities and stresses of dynamic ruptures, whose rupture speeds range from sub-Rayleigh to supershear. The observed friction has complex evolution, featuring initial velocity strengthening followed by substantial velocity weakening. Our measurements are consistent with rate-and-state friction formulations supplemented with flash heating but not with widely used slip-weakening friction laws. This study develops a new approach for measuring local evolution of dynamic friction and has important implications for understanding earthquake hazard since laws governing frictional resistance of faults are vital ingredients in physically-based predictive models of the earthquake source.
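
    For context, a common rate-and-state formulation of the kind the measurements are said to be consistent with (the abstract does not specify which law or parameter values were fit) is the Dieterich aging-law form:

```latex
% Generic rate-and-state friction (aging law); shown for orientation only.
\mu(V,\theta) = \mu_0 + a\,\ln\!\frac{V}{V_0} + b\,\ln\!\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```

    where V is the slip velocity, θ a state variable, D_c a characteristic slip distance, and a, b laboratory-derived constants; flash-heating corrections further reduce μ at high slip rates.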

  7. Dose estimation and prediction of radiation effects on aquatic biota resulting from radioactive releases from the nuclear fuel cycle

    International Nuclear Information System (INIS)

    Blaylock, B.G.; Witherspoon, J.P.

    1975-01-01

    Aquatic organisms are exposed to radionuclides released to the environment during various steps of the nuclear fuel cycle. Routine releases from these processes are limited in compliance with technical specifications and requirements of federal regulations. These regulations reflect I.C.R.P. recommendations which are designed to provide an environment considered safe for man. It is generally accepted that aquatic organisms will not receive damaging external radiation doses in such environments; however, because of possible bioaccumulation of radionuclides there is concern that aquatic organisms might be adversely affected by internal doses. The objectives of this paper are: to estimate the radiation dose received by aquatic biota from the different processes and determine the major dose-contributing radionuclides, and to assess the impact of estimated doses on aquatic biota. Dose estimates are made by using radionuclide concentration measured in the liquid effluents of representative facilities. This evaluation indicates the potential for the greatest radiation dose to aquatic biota from the nuclear fuel supply facilities (i.e., uranium mining and milling). The effects of chronic low-level radiation on aquatic organisms are discussed from somatic and genetic viewpoints. Based on the body of radiobiological evidence accumulated up to the present time, no significant deleterious effects are predicted for populations of aquatic organisms exposed to the estimated dose rates resulting from routine releases from conversion, enrichment, fabrication, reactors and reprocessing facilities. At the doses estimated for milling and mining operations it would be difficult to detect radiation effects on aquatic populations; however, the significance of such radiation exposures to aquatic populations cannot be fully evaluated without further research on effects of chronic low-level radiation. (U.S.)

  8. Generation of earthquake signals

    International Nuclear Information System (INIS)

    Kjell, G.

    1994-01-01

    Seismic verification can be performed either as a full scale test on a shaker table or as numerical calculations. In both cases it is necessary to have an earthquake acceleration time history. This report describes generation of such time histories by filtering white noise. Analogue and digital filtering methods are compared. Different methods of predicting the response spectrum of a white noise signal filtered by a band-pass filter are discussed. Prediction of both the average response level and the statistical variation around this level are considered. Examples with both the IEEE 301 standard response spectrum and a ground spectrum suggested for Swedish nuclear power stations are included in the report
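
    A minimal sketch of the digital filtering approach described above, assuming a Butterworth band-pass filter and illustrative values for the band, duration and target peak acceleration; matching a specified design response spectrum would require further iterative scaling.

```python
# Minimal sketch of the idea described above: generate a synthetic acceleration
# time history by band-pass filtering Gaussian white noise.  Filter band,
# duration, envelope and target peak are illustrative assumptions, not values
# from the report.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0                      # sampling rate (Hz), assumed
duration = 20.0                 # record length (s), assumed
t = np.arange(0, duration, 1.0 / fs)

rng = np.random.default_rng(1)
white = rng.standard_normal(t.size)

# 4th-order Butterworth band-pass, e.g. 1-10 Hz
b, a = butter(4, [1.0, 10.0], btype="bandpass", fs=fs)
accel = filtfilt(b, a, white)

# Simple build-up/decay envelope so the record resembles real strong motion
envelope = np.minimum(t / 2.0, 1.0) * np.exp(-np.maximum(t - 10.0, 0.0) / 4.0)
accel *= envelope
accel *= 0.2 * 9.81 / np.abs(accel).max()   # scale to an assumed 0.2 g peak
```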

  9. The limits of earthquake early warning: Timeliness of ground motion estimates

    OpenAIRE

    Minson, Sarah E.; Meier, Men-Andrin; Baltay, Annemarie S.; Hanks, Thomas C.; Cochran, Elizabeth S.

    2018-01-01

    The basic physics of earthquakes is such that strong ground motion cannot be expected from an earthquake unless the earthquake itself is very close or has grown to be very large. We use simple seismological relationships to calculate the minimum time that must elapse before such ground motion can be expected at a distance from the earthquake, assuming that the earthquake magnitude is not predictable. Earthquake early warning (EEW) systems are in operation or development for many regions aroun...
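
    A back-of-the-envelope sketch of the timeliness argument (assumed velocities and delays, not the paper's values): strong shaking travels outward at roughly the S-wave speed, while an alert requires P-wave detection at a nearby station plus processing time, so the available warning time at a site is approximately the difference.

```python
# Back-of-the-envelope EEW timeliness sketch; velocities, station distance and
# processing delay are generic assumptions, not values from the paper.
V_P = 6.0   # km/s, assumed crustal P-wave speed
V_S = 3.5   # km/s, assumed crustal S-wave speed

def warning_time(epicentral_km, station_km=10.0, processing_s=3.0):
    """Approximate warning time (s) at a site; negative means no warning."""
    alert_issued = station_km / V_P + processing_s   # detection + processing
    shaking_arrives = epicentral_km / V_S            # S-wave travel time to site
    return shaking_arrives - alert_issued

for d in (10, 30, 60, 100):
    print(f"{d:>3} km: {warning_time(d):5.1f} s")
```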

  10. GIS BASED SYSTEM FOR POST-EARTHQUAKE CRISIS MANAGMENT USING CELLULAR NETWORK

    OpenAIRE

    Raeesi, M.; Sadeghi-Niaraki, A.

    2013-01-01

    Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere. Earthquakes cannot be predicted. Quick response after disasters like earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed some areas, several teams are sent to find the location of the d...

  11. Underground water stress release models

    Science.gov (United States)

    Li, Yong; Dang, Shenjun; Lü, Shaochuan

    2011-08-01

    The accumulation of tectonic stress may cause earthquakes at some epochs. However, in most cases, it leads to crustal deformations. The underground water level is a sensitive indicator of such crustal deformations. We incorporate the information of the underground water level into the stress release model (SRM) and obtain the underground water stress release model (USRM). We apply the USRM to earthquakes that occurred in the Tangshan region. The analysis shows that the underground water stress release model outperforms both the Poisson model and the stress release model. Monte Carlo simulation shows that the seismicity simulated by the USRM is very close to the real seismicity.
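
    For orientation, one common parameterization of the original stress release model that the USRM extends takes the following form; the exact functional form and the way the water-level series enters the USRM are not given in the abstract, so this is only the generic textbook version.

```latex
% Generic stress release model (SRM); the USRM additionally conditions X(t)
% on observed underground water levels.
\lambda(t) = \exp\{\mu + \nu\,X(t)\},
\qquad
X(t) = X(0) + \rho\,t - \sum_{i:\,t_i < t} S_i
```

    where λ(t) is the conditional earthquake intensity, ρ the tectonic loading rate, and S_i the stress released by the i-th event (often taken proportional to 10^{0.75 M_i}).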

  12. Autoregressive models as a tool to discriminate chaos from randomness in geoelectrical time series: an application to earthquake prediction

    Directory of Open Access Journals (Sweden)

    C. Serio

    1997-06-01

    The time dynamics of geoelectrical precursory time series have been investigated and a method to discriminate chaotic behaviour in geoelectrical precursory time series is proposed. It allows us to detect low-dimensional chaos when the only information about the time series comes from the time series themselves. The short-term predictability of these time series is evaluated using two possible forecasting approaches: global autoregressive approximation and local autoregressive approximation. The first views the data as a realization of a linear stochastic process, whereas the second considers the data points as a realization of a deterministic process, supposedly non-linear. The comparison of the predictive skill of the two techniques is a test to discriminate between low-dimensional chaos and random dynamics. The analyzed time series are geoelectrical measurements recorded by an automatic station located in Tito (Southern Italy), in one of the most seismically active areas of the Mediterranean region. Our findings are that the global (linear) approach is superior to the local one and that the physical system governing the phenomena of electrical nature is characterized by a large number of degrees of freedom. Power spectra of the filtered time series follow a power-law scaling P(f) ∝ f^(-a): they exhibit the typical behaviour of a broad class of fractal stochastic processes and are a signature of self-organized systems.
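
    The comparison described above can be sketched as follows (an assumed implementation, not the authors' code): one-step forecasts from a single global AR(p) model are compared with forecasts from a local AR model refit on the nearest neighbours of each delay vector; comparable out-of-sample errors point to linear stochastic dynamics, while a clear advantage of the local model would suggest low-dimensional (possibly chaotic) determinism.

```python
# Sketch of a global-vs-local autoregressive forecasting comparison.
import numpy as np

def delay_matrix(x, p):
    """Rows are delay vectors [x[t-1], ..., x[t-p]]; targets are x[t]."""
    N = len(x)
    X = np.column_stack([x[p - 1 - k: N - 1 - k] for k in range(p)])
    return X, x[p:]

def one_step_errors(x, split, p=5, k=25):
    X, y = delay_matrix(x, p)
    n_train = split - p                      # rows whose target index < split
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

    # (i) global linear AR model fitted once on the training span
    A = np.column_stack([Xtr, np.ones(len(Xtr))])
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    pred_g = np.column_stack([Xte, np.ones(len(Xte))]) @ coef

    # (ii) local AR model refit on the k nearest neighbours of each query
    pred_l = np.empty(len(Xte))
    for i, q in enumerate(Xte):
        idx = np.argsort(np.linalg.norm(Xtr - q, axis=1))[:k]
        Ai = np.column_stack([Xtr[idx], np.ones(k)])
        ci, *_ = np.linalg.lstsq(Ai, ytr[idx], rcond=None)
        pred_l[i] = np.append(q, 1.0) @ ci

    rmse = lambda pr: np.sqrt(np.mean((pr - yte) ** 2))
    return rmse(pred_g), rmse(pred_l)

# Toy usage on a linear stochastic AR(2) series (the global model should win)
rng = np.random.default_rng(2)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
print(one_step_errors(x, split=1500))
```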

  13. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  14. Geological and historical evidence of irregular recurrent earthquakes in Japan.

    Science.gov (United States)

    Satake, Kenji

    2015-10-28

    Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres releasing strains accumulated from decades to centuries of plate motions. Assuming a simple 'characteristic earthquake' model that similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes including geological traces from giant (M∼9) earthquakes indicate a variety of size and recurrence interval of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that average recurrence interval of great earthquakes is approximately 100 years, but the tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence interval of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts such as use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone are being conducted by government committees. © 2015 The Author(s).

  15. ELER software - a new tool for urban earthquake loss assessment

    Science.gov (United States)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.

    2010-12-01

    Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is also equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. SELENA v4.0 (Molina et al., 2008) and

  16. Countermeasures to earthquakes in nuclear plants

    International Nuclear Information System (INIS)

    Sato, Kazuhide

    1979-01-01

    The contribution of atomic energy to mankind is immense, but radioactivity presents a unique danger. Therefore, safety has been regarded as central in the design of nuclear power plants, and in Japan, where earthquakes occur frequently, countermeasures against earthquakes are naturally incorporated in the safety examination. The radioactive substances handled in nuclear power stations and spent fuel reprocessing plants are briefly explained. The occurrence of earthquakes cannot be predicted effectively, and the disasters due to earthquakes are apt to be remarkably large. Nuclear plants are required to prevent damage to their facilities and to maintain their functions at the time of earthquakes. Regarding the location of nuclear plants, the history of earthquakes, the possible magnitude of earthquakes, the properties of the ground and the position of the plants should be examined. After the place of installation has been decided, the earthquake used for design is selected by evaluating active faults and determining the standard earthquakes. As the fundamentals of aseismic design, the classification according to importance, the design earthquakes corresponding to the classes of importance, the combination of loads, and the allowable stress are explained. (Kako, I.)

  17. Modeling of fission product release in integral codes

    International Nuclear Information System (INIS)

    Obaidurrahman, K.; Raman, Rupak K.; Gaikwad, Avinash J.

    2014-01-01

    The great Tohoku earthquake and tsunami that struck the Fukushima Daiichi nuclear power station on March 11, 2011 have intensified the need for detailed nuclear safety research, and with this objective all streams associated with severe accident phenomenology are being revisited thoroughly. The present paper covers an overview of the state-of-the-art FP release models in use, the important phenomena considered in semi-mechanistic models, and knowledge gaps in present FP release modeling. The capability of the FP release module ELSA of the ASTEC integral code to appropriately predict FP release under several diversified core-degradation conditions is also demonstrated. The use of semi-mechanistic fission product release models at AERB for source-term estimation is briefly described. (author)

  18. Defeating Earthquakes

    Science.gov (United States)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the Century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti, quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  19. Thermal infrared anomalies of several strong earthquakes.

    Science.gov (United States)

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that significant thermal infrared anomalies occur before and after strong earthquakes, and these have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes of magnitude Ms 7.0 and above, using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall behaviour of the anomalies includes two main stages: expanding first and narrowing later. Such seismic anomalies are easily extracted and identified with the "time-frequency relative power spectrum" method. (2) Evident and distinct characteristic periods and magnitudes of anomalous thermal radiation exist for each case. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has obvious characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies are a useful precursor that can be applied in earthquake prediction and forecasting.

  20. Earthquake Early Warning Systems

    OpenAIRE

    Pei-Yang Lin

    2011-01-01

    Because of Taiwan's unique geographical environment, earthquake disasters occur there frequently. The Central Weather Bureau collated earthquake data from 1901 to 2006 (Central Weather Bureau, 2007) and found that 97 earthquakes had occurred, of which 52 resulted in casualties. The 921 Chichi Earthquake had the most profound impact. Because earthquakes have instant destructive power and current scientific technologies cannot provide precise early warnings in advance, earthquake ...

  1. Ionospheric Anomaly before the Kyushu, Japan Earthquake

    Directory of Open Access Journals (Sweden)

    YANG Li

    2017-05-01

    GIM data released by IGS are used in this article, and a new method combining the sliding time window method with a correlation analysis of ionospheric TEC at adjacent grid points is proposed to study the relationship between pre-earthquake ionospheric anomalies and the earthquake. By analyzing the abnormal change of TEC at the 5 grid points around the seismic region, an abnormal change of ionospheric TEC is found before the earthquake, and the correlation between the TEC sequences of the grid points is significantly affected by the earthquake. Based on the analysis of the spatial distribution of the TEC anomaly, anomalies of 6 h, 12 h and 6 h were found near the epicenter three days before the earthquake. Finally, ionospheric tomography is used to invert the electron density, and the distribution of the electron density in the ionospheric anomaly is further analyzed.

  2. Global earthquake fatalities and population

    Science.gov (United States)

    Holzer, Thomas L.; Savage, James C.

    2013-01-01

    Modern global earthquake fatalities can be separated into two components: (1) fatalities from an approximately constant annual background rate that is independent of world population growth and (2) fatalities caused by earthquakes with large human death tolls, the frequency of which is dependent on world population. Earthquakes with death tolls greater than 100,000 (and 50,000) have increased with world population and obey a nonstationary Poisson distribution with rate proportional to population. We predict that the number of earthquakes with death tolls greater than 100,000 (50,000) will increase in the 21st century to 8.7±3.3 (20.5±4.3) from 4 (7) observed in the 20th century if world population reaches 10.1 billion in 2100. Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (>100,000 fatalities) indicates global fatalities in the 21st century will be 2.57±0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed.
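
    A back-of-the-envelope illustration of the nonstationary Poisson argument (assumed numbers, not the authors' calculation): if the rate of catastrophic earthquakes is proportional to world population, the expected century count is the time integral of that rate, calibrated to the 20th-century observations.

```python
# Rough numerical illustration of a Poisson rate proportional to population;
# the population figures and calibration below are coarse assumptions.
import numpy as np

years_20th = np.linspace(1900, 2000, 101)
years_21st = np.linspace(2000, 2100, 101)

# Very coarse world-population interpolation in billions (assumed values)
pop = lambda y: np.interp(y, [1900, 1950, 2000, 2050, 2100],
                          [1.65, 2.5, 6.1, 9.7, 10.1])

# Calibrate the proportionality constant so the 20th-century expectation
# matches the 4 observed events with more than 100,000 deaths.
c = 4.0 / np.trapz(pop(years_20th), years_20th)
expected_21st = c * np.trapz(pop(years_21st), years_21st)
print(f"expected catastrophic events, 21st century: {expected_21st:.1f}")
```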

  3. A smartphone application for earthquakes that matter!

    Science.gov (United States)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many as possible of the felt earthquakes, regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the US Geological Survey, where the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking that they have just felt. Altogether, we estimate that the number of detected felt earthquakes is around 1 000 per year, compared with the 35 000 earthquakes annually reported by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service @LastQuake. We will present the identification process of the earthquakes that matter, the smartphone application itself (to be released in May) and its future evolutions.

  4. Small discussion of electromagnetic wave anomalies preceding earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    Six brief pieces on various aspects of electromagnetic wave anomalies are presented. They cover: earthquake electromagnetic emanations; the use of magnetic induction information for earthquake forecasting; electromagnetic pulse emissions as pre-earthquake indicators; the use of magnetic sensors to determine medium-wavelength field strength for earthquake prediction purposes; magnetic deviation indicators inside reinforced-concrete buildings; and a discussion of the general physical principles involved.

  5. Earthquake focal mechanism forecasting in Italy for PSHA purposes

    Science.gov (United States)

    Roselli, Pamela; Marzocchi, Warner; Mariucci, Maria Teresa; Montone, Paola

    2018-01-01

    In this paper, we put forward a procedure that aims to forecast the focal mechanisms of future earthquakes. One of the primary uses of such forecasts is in probabilistic seismic hazard analysis (PSHA); in fact, aiming at reducing the epistemic uncertainty, most of the newer ground motion prediction equations consider, besides the seismicity rates, the forecast focal mechanism of the next large earthquakes as input data. The data set used for this purpose consists of focal mechanisms taken from the latest stress map release for Italy, containing 392 well-constrained solutions of events from 1908 to 2015 with Mw ≥ 4 and depths from 0 down to 40 km. The data set includes polarity focal mechanism solutions up to 1975 (23 events), whereas for 1976-2015 it takes into account only Centroid Moment Tensor (CMT)-like earthquake focal solutions, for data homogeneity. The forecasting model is rooted in the Total Weighted Moment Tensor concept, which weights the information of past focal mechanisms evenly distributed in space according to their distance from the spatial cells and their magnitude. Specifically, for each cell of a regular 0.1° × 0.1° spatial grid, the model estimates the probability of observing a normal, reverse, or strike-slip fault plane solution for the next large earthquakes, the expected moment tensor and the related maximum horizontal stress orientation. These results will be available for the new PSHA model for Italy under development. Finally, to evaluate the reliability of the forecasts, we test them against an independent data set that consists of some of the strongest earthquakes with Mw ≥ 3.9 that occurred during 2016 in different Italian tectonic provinces.
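
    As a generic illustration only: the abstract does not spell out the Total Weighted Moment Tensor weighting, but a distance- and magnitude-weighted average of past (unit-normalized) moment tensors on a grid cell might look like the following sketch, in which the Gaussian kernel, the seismic-moment weight and all parameter values are assumptions.

```python
# Generic sketch of a distance- and magnitude-weighted average of past moment
# tensors for one grid cell; not the authors' actual weighting scheme.
import numpy as np

def weighted_tensor(cell_lonlat, events, sigma_km=50.0):
    """events: iterable of (lon, lat, Mw, m) with m a 3x3 unit-normalized tensor."""
    km_per_deg = 111.0  # crude flat-Earth conversion, ignores cos(latitude)
    total, wsum = np.zeros((3, 3)), 0.0
    for lon, lat, mw, m in events:
        d = km_per_deg * np.hypot(lon - cell_lonlat[0], lat - cell_lonlat[1])
        w = np.exp(-0.5 * (d / sigma_km) ** 2) * 10 ** (1.5 * mw)  # moment weight
        total += w * m
        wsum += w
    return total / wsum if wsum > 0 else total

# Toy usage with two idealized strike-slip mechanisms
ss = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
print(weighted_tensor((13.0, 42.5), [(13.1, 42.6, 5.5, ss), (13.4, 42.2, 4.8, ss)]))
```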

  6. Late Release of Circulating Endothelial Cells and Endothelial Progenitor Cells after Chemotherapy Predicts Response and Survival in Cancer Patients

    Directory of Open Access Journals (Sweden)

    Jeanine M. Roodhart

    2010-01-01

    We and others have previously demonstrated that the acute release of progenitor cells in response to chemotherapy actually reduces the efficacy of the chemotherapy. Here, we take these data further and investigate the clinical relevance of circulating endothelial (progenitor) cells (CE(P)Cs) and modulatory cytokines in patients after chemotherapy in relation to progression-free and overall survival (PFS/OS). Patients treated with various chemotherapeutics were included. Blood sampling was performed at baseline, 4 hours, and 7 and 21 days after chemotherapy. The mononuclear cell fraction was analyzed for CE(P)Cs by FACS analysis. Plasma was analyzed for cytokines by ELISA or Luminex technique. CE(P)Cs were correlated with response and PFS/OS using Cox proportional hazard regression analysis. We measured CE(P)Cs and cytokines in 71 patients. Only patients treated with paclitaxel showed an immediate increase in endothelial progenitor cells 4 hours after the start of treatment. These immediate changes did not correlate with response or survival. After 7 and 21 days of chemotherapy, a large and consistent increase in CE(P)Cs was found (P < .01), independent of the type of chemotherapy. Changes in CE(P)C levels at day 7 correlated with an increase in tumor volume after three cycles of chemotherapy and predicted PFS/OS, regardless of the tumor type or chemotherapy. These findings indicate that the late release of CE(P)Cs is a common phenomenon after chemotherapeutic treatment. The correlation with clinical response and survival provides further support for the biologic relevance of these cells in patients' prognosis and stresses their possible use as a therapeutic target.

  7. Prediction of the UO2 fission gas release data of Bellamy and Rich using a model recently developed by Combustion Engineering

    International Nuclear Information System (INIS)

    Freeburn, H.R.; Pati, S.R.

    1983-01-01

    The trend in the light water reactor industry to higher discharge burnups of UO2 fuel rods has initiated the modification of existing fuel rod models to better account for high burnup effects. The degree to which fission gas release from UO2 fuel is enhanced at higher burnup is being addressed in the process. Fission gas release modeling should include the separation of the individual effects of thermal diffusion and any burnup enhancement on the release. Although some modelers have interpreted the Bellamy and Rich data on fission gas release from UO2 fuel in this fashion, they have assumed that below about 1250°C the gas release is not temperature-dependent, and this has led them to predict a very strong burnup enhancement of gas release above 20 MWd/kgU. More recent data, however, suggest that an appreciable amount of fission gas is released by a thermal diffusion mechanism at even lower temperatures and will add to the fission gas released due to the temperature-independent mechanisms of knockout and recoil

  8. How fault geometry controls earthquake magnitude

    Science.gov (United States)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.

  9. Predicting the effect of gonadotropin-releasing hormone (GnRH) analogue treatment on uterine leiomyomas based on MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Matsuno, Y.; Yamashita, Y.; Takahashi, M. [Dept. of Radiology, Kumamoto Univ. School of Medicine, Kumamoto (Japan); Katabuchi, H.; Okamura, H. [Dept. of Gynecology and Obstetrics, Kumamoto Univ. School of Medicine, Kumamoto (Japan); Kitano, Y.; Shimamura, T. [Dept. of Gynecology and Obstetrics, Amakusa Chuou General Hospital, Hondo (Japan)

    1999-11-01

    Purpose: To test the hypothesis that the simple assessment of signal intensity on T2-weighted MR images is predictive of the effect of hormonal treatment with gonadotropin-releasing hormone (GnRH) analogue. Material and methods: The correlation between T2-weighted MR imaging of uterine leiomyomas and histologic findings was evaluated using 85 leiomyomas from 62 females who underwent myomectomy or hysterectomy. We also correlated the pretreatment MR image features obtained in 110 women with 143 leiomyomas with the effect of GnRH analogue treatment. The size (length x width x depth) of the leiomyoma was evaluated before and at 6 months after treatment by ultrasound. Results: The proportion of leiomyoma cell fascicles and that of extracellular matrix affected signal intensities of uterine leiomyomas on T2-weighted MR images. The amount of extracellular matrix was predominant in hypointense leiomyomas on T2-weighted images, while diffuse intermediate-signal leiomyomas were predominantly composed of leiomyoma cell fascicles. Marked degenerative changes were noted in leiomyomas with heterogeneous hyperintensity. The homogeneously intermediate signal intensity leiomyomas showed significant size reduction after treatment (size ratio, posttreatment volume/pretreatment volume: 0.29±0.11). The size ratio for the hypointense tumors was 0.82±0.14, and 0.82±0.18 for the heterogeneously hyperintense tumors. There was a significant difference in the response to treatment between the homogeneously intermediate signal intensity leiomyomas and the hypointense or heterogeneously hyperintense leiomyomas (both p<0.01). Conclusion: Signal intensity on T2-weighted MR images depends on the amount of leiomyoma cell fascicles and extracellular matrix. Simple assessment of the MR signal intensity is useful in predicting the effect of GnRH analogue on uterine leiomyomas. (orig.)

  10. Predicting the effect of gonadotropin-releasing hormone (GnRH) analogue treatment on uterine leiomyomas based on MR imaging

    International Nuclear Information System (INIS)

    Matsuno, Y.; Yamashita, Y.; Takahashi, M.; Katabuchi, H.; Okamura, H.; Kitano, Y.; Shimamura, T.

    1999-01-01

    Purpose: To test the hypothesis that the simple assessment of signal intensity on T2-weighted MR images is predictive of the effect of hormonal treatment with gonadotropin-releasing hormone (GnRH) analogue. Material and methods: The correlation between T2-weighted MR imaging of uterine leiomyomas and histologic findings was evaluated using 85 leiomyomas from 62 females who underwent myomectomy or hysterectomy. We also correlated the pretreatment MR image features obtained in 110 women with 143 leiomyomas with the effect of GnRH analogue treatment. The size (length x width x depth) of the leiomyoma was evaluated before and at 6 months after treatment by ultrasound. Results: The proportion of leiomyoma cell fascicles and that of extracellular matrix affected signal intensities of uterine leiomyomas on T2-weighted MR images. The amount of extracellular matrix was predominant in hypointense leiomyomas on T2-weighted images, while diffuse intermediate-signal leiomyomas were predominantly composed of leiomyoma cell fascicles. Marked degenerative changes were noted in leiomyomas with heterogeneous hyperintensity. The homogeneously intermediate signal intensity leiomyomas showed significant size reduction after treatment (size ratio; posttreatment volume/pretreatment volume 0.29±0.11). The size ratio for the hypointense tumors was 0.82±0.14, and 0.82±0.18 for the heterogeneously hyperintense tumors. There was a significant difference in the response to treatment between the homogeneously intermediate signal intensity leiomyomas and the hypointense or heterogeneously hyperintense leiomyomas (both p<0.01). Conclusion: Signal intensity on T2-weighted MR images depends on the amount of leiomyoma cell fascicles and extracellular matrix. Simple assessment of the MR signal intensity is useful in predicting the effect of GnRH analogue on uterine leiomyomas. (orig.)

  11. Factors that predict a positive response on gonadotropin-releasing hormone stimulation test for diagnosing central precocious puberty in girls

    Directory of Open Access Journals (Sweden)

    Junghwan Suh

    2013-12-01

    Purpose: The rapid increase in the incidence of precocious puberty in Korea has clinical and social significance. The gonadotropin-releasing hormone (GnRH) stimulation test is required to diagnose central precocious puberty (CPP); however, this test is expensive and time-consuming. This study aimed to identify factors that can predict a positive response to the GnRH stimulation test. Methods: Clinical and laboratory parameters, including basal serum luteinizing hormone (LH), follicle-stimulating hormone (FSH), and estradiol (E2), were measured in 540 girls with clinical signs of CPP. Results: Two hundred twenty-nine of 540 girls with suspected CPP had a peak serum LH level higher than 5 IU/L (the CPP group). The CPP group had advanced bone age (P<0.001), accelerated yearly growth rate (P<0.001), and increased basal levels of LH (P=0.02), FSH (P<0.001), E2 (P=0.001), and insulin-like growth factor-I (P<0.001) compared to the non-CPP group. In contrast, body weight (P<0.001) and body mass index (P<0.001) were lower in the CPP group. Although basal LH was significantly elevated in the CPP group compared to the non-CPP group, there was considerable overlap between the 2 groups. A cutoff value of basal LH (0.22 IU/L) detected CPP with 87.8% sensitivity and 20.9% specificity. Conclusion: No single parameter can predict a positive response on the GnRH stimulation test with both high sensitivity and specificity. Therefore, multiple factors should be considered in the evaluation of sexual precocity when deciding the timing of the GnRH stimulation test.

  12. Kozeny-Carman permeability relationship with disintegration process predicted from early dissolution profiles of immediate release tablets.

    Science.gov (United States)

    Kumari, Parveen; Rathi, Pooja; Kumar, Virender; Lal, Jatin; Kaur, Harmeet; Singh, Jasbir

    2017-07-01

    This study was oriented toward the disintegration profiling of diclofenac sodium (DS) immediate-release (IR) tablets and the development of its relationship with medium permeability k_perm based on the Kozeny-Carman equation. Batches (L1-L9) of DS IR tablets with different porosities and specific surface areas were prepared at different compression forces and evaluated for porosity, in vitro dissolution and particle-size analysis of the disintegrated mass. The k_perm was calculated from porosities and specific surface area, and disintegration profiles were predicted from the dissolution profiles of IR tablets by the stripping/residual method. The disintegration profiles were subjected to exponential regression to find the respective disintegration equations and rate constants k_d. Batches L1 and L2 showed the fastest disintegration rates as evident from their bi-exponential equations, while the rest of the batches, L3-L9, exhibited first-order or mono-exponential disintegration kinetics. The 95% confidence interval (CI95%) revealed significant differences between the k_d values of different batches except L4 and L6. Similar results were also observed for the dissolution profiles of the IR tablets by the similarity (f2) test. The final relationship between k_d and k_perm was found to be hyperbolic, signifying the initial effect of k_perm on the disintegration rate. The results showed that disintegration profiling is possible because a relationship exists between k_d and k_perm. The latter, being relatable to porosity and specific surface area, can be determined by nondestructive tests.
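
    A minimal sketch of the Kozeny-Carman relation used to obtain k_perm from porosity and specific surface area; the Kozeny constant, units and example values below are generic assumptions rather than the study's.

```python
# Minimal Kozeny-Carman sketch; constant and example values are assumptions.
def kozeny_carman_permeability(porosity, specific_surface, kozeny_const=5.0):
    """Permeability with the same length^2 units as 1/specific_surface^2.

    porosity          -- void fraction of the compact (0..1)
    specific_surface  -- surface area per unit solid volume (1/length)
    """
    eps = porosity
    return eps ** 3 / (kozeny_const * specific_surface ** 2 * (1.0 - eps) ** 2)

# e.g. a tablet with 15% porosity and S = 5e5 1/m
print(kozeny_carman_permeability(0.15, 5e5))
```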

  13. Comparison of two insulin assays for first-phase insulin release in type 1 diabetes prediction and prevention studies.

    Science.gov (United States)

    Mahon, Jeffrey L; Beam, Craig A; Marcovina, Santica M; Boulware, David C; Palmer, Jerry P; Winter, William E; Skyler, Jay S; Krischer, Jeffrey P

    2011-11-20

    Detection of below-threshold first-phase insulin release or FPIR (1+3 minute insulin concentrations during an intravenous glucose tolerance test [IVGTT]) is important in type 1 diabetes prediction and prevention studies including the TrialNet Oral Insulin Prevention Trial. We assessed whether an insulin immunoenzymometric assay (IEMA) could replace the less practical but current standard of a radioimmunoassay (RIA) for FPIR. One hundred thirty-three islet autoantibody positive relatives of persons with type 1 diabetes underwent 161 IVGTTs. Insulin concentrations were measured by both assays in 1056 paired samples. A rule classifying FPIR (below-threshold, above-threshold, uncertain) by the IEMA was derived and validated against FPIR by the RIA. The insulin IEMA-based rule accurately classified below- and above-threshold FPIRs by the RIA in 110/161 (68%) IVGTTs, but was uncertain in 51/161 (32%) tests, for which FPIR by RIA is needed. An uncertain FPIR by the IEMA was more likely among below-threshold than above-threshold FPIRs by the RIA (64% [30/47] vs. 18% [21/114], respectively). TrialNet is limiting the insulin RIA for FPIR to the latter cases, given the practical advantages of the more specific IEMA. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. A Decade of Giant Earthquakes - What does it mean?

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, Terry C. Jr. [Los Alamos National Laboratory

    2012-07-16

    On December 26, 2004 the largest earthquake since 1964 occurred near Aceh, Indonesia. The magnitude 9.2 earthquake and subsequent tsunami killed a quarter of a million people; it also marked the beginning of a period of extraordinary seismicity. Since the Aceh earthquake there have been 16 magnitude 8 earthquakes globally, including 2 this last April. For the 100 years previous to 2004 there was an average of 1 magnitude 8 earthquake every 2.2 years; since 2004 there have been 2 per year. Since magnitude 8 earthquakes dominate global seismic energy release, this period of seismicity has seismologists rethinking what they understand about plate tectonics and the connectivity between giant earthquakes. This talk will explore this remarkable period of time and its possible implications.

  15. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
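
    A minimal sketch of the likelihood computation at the heart of such tests (an illustration of the general approach, not the RELM implementation): forecast rates are defined per space-magnitude bin, observed counts are treated as Poisson, and models are ranked by their joint log-likelihood.

```python
# Sketch of a Poisson joint log-likelihood over forecast bins; toy numbers only.
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed bin counts under Poisson forecast rates."""
    lam = np.asarray(forecast_rates, dtype=float)
    n = np.asarray(observed_counts, dtype=float)
    return np.sum(n * np.log(lam) - lam - gammaln(n + 1.0))

# Toy comparison of two forecasts against the same observed catalogue
obs = np.array([0, 2, 1, 0, 3])
model_a = np.array([0.5, 1.8, 1.0, 0.2, 2.5])
model_b = np.array([1.5, 0.5, 0.5, 1.5, 1.0])
print(poisson_log_likelihood(model_a, obs), poisson_log_likelihood(model_b, obs))
```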

  16. Disruption of Pseudomonas putida by high pressure homogenization: a comparison of the predictive capacity of three process models for the efficient release of arginine deiminase.

    Science.gov (United States)

    Patil, Mahesh D; Patel, Gopal; Surywanshi, Balaji; Shaikh, Naeem; Garg, Prabha; Chisti, Yusuf; Banerjee, Uttam Chand

    2016-12-01

    Disruption of Pseudomonas putida KT2440 by high-pressure homogenization in a French press is discussed for the release of arginine deiminase (ADI). The enzyme release response of the disruption process was modelled for the experimental factors of biomass concentration in the broth being disrupted, the homogenization pressure and the number of passes of the cell slurry through the homogenizer. For the same data, the response surface method (RSM), the artificial neural network (ANN) and the support vector machine (SVM) models were compared for their ability to predict the performance parameters of the cell disruption. The ANN model proved to be best for predicting the ADI release. The fractional disruption of the cells was best modelled by the RSM. The fraction of the cells disrupted depended mainly on the operating pressure of the homogenizer. The concentration of the biomass in the slurry was the most influential factor in determining the total protein release. Nearly 27 U/mL of ADI was released within a single pass from slurry with a biomass concentration of 260 g/L at an operating pressure of 510 bar. Using a biomass concentration of 100 g/L, the ADI release by French press was 2.7-fold greater than in a conventional high-speed bead mill. In the French press, the total protein release was 5.8-fold more than in the bead mill. The statistical analysis of the completely unseen data exhibited ANN and SVM modelling as proficient alternatives to RSM for the prediction and generalization of the cell disruption process in French press.

  17. Tiltmeter studies in earthquake prediction

    Science.gov (United States)

    Johnston, M.

    1978-01-01

    Our knowledge is still very limited as to the way in which the Earth's surface deforms around active faults and why it does so. By far the easiest method of providing clues to the mechanisms involved is to record the associated pattern of tilt of the Earth's surface. 

  18. Statistical distributions of earthquakes and related non-linear features in seismic waves

    International Nuclear Information System (INIS)

    Apostol, B.-F.

    2006-01-01

    A few basic facts in the science of the earthquakes are briefly reviewed. An accumulation, or growth, model is put forward for the focal mechanisms and the critical focal zone of the earthquakes, which relates the earthquake average recurrence time to the released seismic energy. The temporal statistical distribution for average recurrence time is introduced for earthquakes, and, on this basis, the Omori-type distribution in energy is derived, as well as the distribution in magnitude, by making use of the semi-empirical Gutenberg-Richter law relating seismic energy to earthquake magnitude. On geometric grounds, the accumulation model suggests the value r = 1/3 for the Omori parameter in the power-law of energy distribution, which leads to β = 1.17 for the coefficient in the Gutenberg-Richter recurrence law, in fair agreement with the statistical analysis of the empirical data. Making use of this value, the empirical Båth's law is discussed for the average magnitude of the aftershocks (which is 1.2 less than the magnitude of the main seismic shock), by assuming that the aftershocks are relaxation events of the seismic zone. The time distribution of the earthquakes with a fixed average recurrence time is also derived, the earthquake occurrence prediction is discussed by means of the average recurrence time and the seismicity rate, and application of this discussion to the seismic region Vrancea, Romania, is outlined. Finally, a special effect of non-linear behaviour of the seismic waves is discussed, by describing an exact solution derived recently for the elastic waves equation with cubic anharmonicities, its relevance, and its connection to the approximate quasi-plane waves picture. The properties of the seismic activity accompanying a main seismic shock, both as foreshocks and aftershocks, are relegated to forthcoming publications. (author)
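
    For reference, the two empirical relations invoked above are usually written as follows; the constants are the commonly quoted ones and are not taken from the paper.

```latex
% Standard forms of the Gutenberg-Richter recurrence law and the
% energy-magnitude relation (commonly quoted constants, E in joules).
\log_{10} N(\ge M) = a - b\,M
\qquad\text{(Gutenberg--Richter recurrence law)},
\qquad
\log_{10} E\,[\mathrm{J}] \approx 1.5\,M + 4.8
\qquad\text{(seismic energy--magnitude relation)}
```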

  19. Seismic dynamics in advance and after the recent strong earthquakes in Italy and New Zealand

    Science.gov (United States)

    Nekrasova, A.; Kossobokov, V. G.

    2017-12-01

    We consider seismic events as a sequence of avalanches in the self-organized system of blocks-and-faults of the Earth's lithosphere and characterize earthquake series with the distribution of the control parameter η = τ × 10^{B(5−M)} × L^C of the Unified Scaling Law for Earthquakes, USLE (where τ is the inter-event time, B is analogous to the Gutenberg-Richter b-value, and C is the fractal dimension of the seismic locus). A systematic analysis of earthquake series in Central Italy and New Zealand, 1993-2017, suggests the existence, in the long term, of different rather steady levels of seismic activity characterized by nearly constant values of η, which, in the mid-term, intermittently switch at times of transitions associated with strong catastrophic events. At such a transition, seismic activity may, in the short term, follow different scenarios with inter-event time scaling of different kinds, including constant, logarithmic, power-law, exponential rise/decay, or a mixture of those. The results do not support the presence of universality in seismic energy release. The observed variability of seismic activity in advance of and after strong (M6.0+) earthquakes in Italy and significant (M7.0+) earthquakes in New Zealand provides important constraints on modelling realistic earthquake sequences by geophysicists and can be used to improve local seismic hazard assessments, including earthquake forecast/prediction methodologies. The transitions of seismic regime in Central Italy and New Zealand that started in 2016 are still in progress and require special attention and geotechnical monitoring. It would be premature to draw definitive conclusions on the level of seismic hazard, which is evidently high at this particular moment in both regions. The study was supported by the Russian Science Foundation, Grant No. 16-17-00093.
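
    A direct transcription of the USLE control parameter defined above into a small helper function; the example values are placeholders that only illustrate the shape of the calculation.

```python
# Control parameter of the Unified Scaling Law for Earthquakes (USLE);
# the numerical values below are placeholders, not data from the study.
def usle_eta(tau_days, B, magnitude, L_km, C):
    """Control parameter eta = tau * 10**(B*(5 - M)) * L**C."""
    return tau_days * 10.0 ** (B * (5.0 - magnitude)) * L_km ** C

print(usle_eta(tau_days=30.0, B=1.0, magnitude=4.5, L_km=50.0, C=1.2))
```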

  20. The use of radon as an earthquake precursor

    International Nuclear Information System (INIS)

    Ramola, R.C.; Singh, M.; Sandhu, A.S.; Singh, S.; Virk, H.S.

    1990-01-01

    Radon monitoring has been part of an integrated approach to earthquake prediction since the discovery of coherent, temporally anomalous radon concentrations prior to, during, and after the 1966 Tashkent earthquake. In this paper some studies of groundwater and soil-gas radon content in relation to earthquake activity are reviewed. Laboratory experiments and the development of groundwater and soil-gas radon monitoring systems are described. In addition, radon monitoring studies conducted at the Guru Nanak Dev University Campus since 1986 are presented in detail. During these studies some anomalous changes in radon concentration were recorded before earthquakes occurred in the region. The anomalous radon increases are independent of meteorological conditions and appear to be caused by strain changes that precede the earthquake. Anomalous changes in radon concentration before an earthquake suggest that radon monitoring can serve as an additional technique in the earthquake prediction programme in India. (author)

  1. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had not been done before. The new code directly couples the earthquake model and the ocean model on parallel computers, improving simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami will occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  2. Antioptimization of earthquake excitation and response

    Directory of Open Access Journals (Sweden)

    G. Zuccaro

    1998-01-01

    The paper presents a novel approach to predicting the response of earthquake-excited structures. The earthquake excitation is expanded in terms of a series of deterministic functions. The coefficients of the series are represented as a point in N-dimensional space. Each available accelerogram at a given site is then represented as a point in this space, modeling the available fragmentary historical data. The minimum-volume ellipsoid containing all points is constructed. Ellipsoidal models of uncertainty, pertinent to earthquake excitation, are thereby developed. The maximum response of a structure subjected to the earthquake excitation, within the ellipsoidal modeling of the latter, is determined. This procedure of determining the least favorable response was termed antioptimization in the literature (Elishakoff, 1991). It appears that under the inherent uncertainty of earthquake excitation, antioptimization analysis is a viable alternative to the stochastic approach.
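
    The worst-case (antioptimization) step has a simple closed form when the response is linear in the expansion coefficients: the maximum of a·x over an ellipsoid {x : (x − c)ᵀW⁻¹(x − c) ≤ 1} is a·c + √(aᵀWa). The sketch below illustrates this under two simplifications that are not the paper's method: the enclosing ellipsoid is built by inflating the sample covariance rather than computing a true minimum-volume ellipsoid, and the response functional a is invented for the example.

```python
import numpy as np

def worst_case_linear_response(a, center, W):
    """Maximum of the linear response a.x over the ellipsoid
    {x : (x - center)^T W^-1 (x - center) <= 1}; closed form a.center + sqrt(a^T W a)."""
    a = np.asarray(a, float)
    return a @ center + np.sqrt(a @ W @ a)

# Hypothetical example: coefficients of recorded accelerograms as points in N-space.
rng = np.random.default_rng(0)
points = rng.normal(size=(12, 4))             # 12 records, 4 expansion coefficients
center = points.mean(axis=0)

# Simple enclosing ellipsoid from the inflated sample covariance (a stand-in for
# the true minimum-volume ellipsoid used in the paper).
cov = np.cov(points.T)
d2 = np.einsum('ij,jk,ik->i', points - center, np.linalg.inv(cov), points - center)
W = cov * d2.max()                            # inflate so every point lies inside

a = np.array([0.5, -1.0, 0.2, 0.8])           # hypothetical linear response functional
print(worst_case_linear_response(a, center, W))
```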

  3. The influence of season of the year on the predicted agricultural consequences of accidental releases of radionuclides to atmosphere

    International Nuclear Information System (INIS)

    Simmonds, J.R.

    1985-02-01

    In Europe, because of the seasonal nature of agricultural practices, the consequences for agriculture of an accidental release of radioactive materials to atmosphere are likely to vary depending upon the time of year when the release occurs. The quantification of this variation is complicated by the need to take into account the introduction of countermeasures to restrict the radiation exposure from ingestion of contaminated foods, and by the presence in accidental releases of radionuclides which persist over several seasons. In this study, the effect on agricultural consequences of accidental releases occurring at different times of the year is examined. The consequences are expressed in terms of the amount of produce affected by restrictions on food supplies and the collective radiation dose from ingestion of food. The investigation has been carried out for three hypothetical releases representing a range of releases postulated for pressurised water reactors (PWRs). The effect of season of the year was determined for accidental releases occurring both in a single, defined set of meteorological conditions and for a range of possible meteorological conditions. For the main part of the study, consideration was limited to agricultural production in the UK only, but the effect of extending the analysis beyond the UK boundary was also considered. The results of the study show that considerable variation can occur in agricultural consequences following an accidental release at different times of the year. For the larger releases considered, this variation is reduced due to the effect of the introduction of countermeasures, particularly when consideration is limited to the UK only. Seasonal variation tends to be greater for the results of a deterministic analysis, which uses a single set of constant meteorological conditions, than for the results of a full probabilistic assessment. From the results presented here it is also seen that for many applications of

  4. New geological perspectives on earthquake recurrence models

    International Nuclear Information System (INIS)

    Schwartz, D.P.

    1997-01-01

    In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured even once during the historical record, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models, and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of the timing and size of past events, provide a window into the driving mechanism of the earthquake engine: the cycle of stress build-up and release

  5. Clustered and transient earthquake sequences in mid-continents

    Science.gov (United States)

    Liu, M.; Stein, S. A.; Wang, H.; Luo, G.

    2012-12-01

    Earthquakes result from the sudden release of strain energy on faults. On plate boundary faults, strain energy is constantly accumulating from steady and relatively rapid relative plate motion, so large earthquakes continue to occur so long as motion continues on the boundary. In contrast, such steady accumulation of strain energy does not occur on faults in mid-continents, because the far-field tectonic loading is not steadily distributed between faults, and because stress perturbations from complex fault interactions and other stress triggers can be significant relative to the slow tectonic stressing. Consequently, mid-continental earthquakes are often temporally clustered and transient, and spatially migrating. This behavior is well illustrated by large earthquakes in North China in the past two millennia, during which no large earthquake repeated on the same fault segment, but moment release between large fault systems was complementary. Slow tectonic loading in mid-continents also causes long aftershock sequences. We show that the recent small earthquakes in the Tangshan region of North China are aftershocks of the 1976 Tangshan earthquake (M 7.5), rather than indicators of a new phase of seismic activity in North China, as many fear. Understanding the transient behavior of mid-continental earthquakes has important implications for assessing earthquake hazards. The sequence of large earthquakes in the New Madrid Seismic Zone (NMSZ) in the central US, which includes a cluster of M~7 events in 1811-1812 and perhaps a few similar ones in the past millennium, is likely a transient process, releasing previously accumulated elastic strain on recently activated faults. If so, this earthquake sequence will eventually end. Using simple analysis and numerical modeling, we show that the large NMSZ earthquakes may be ending now or in the near future.

  6. Toward real-time regional earthquake simulation of Taiwan earthquakes

    Science.gov (United States)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  7. The research committee of Chuetsu-oki earthquake influences to Kashiwazaki-Kariwa Nuclear Power Station. How to press-release and take care of expression in articles at press side

    International Nuclear Information System (INIS)

    Hamamoto, Kazuko; Narabayashi, Tadashi; Kobayashi, Masahide; Akizuki, Teruo; Onishi, Hidetoshi

    2009-01-01

    As for the influences of the Chuetsu-oki earthquake on the Kashiwazaki-Kariwa Nuclear Power Station, we can conclude that the safety functions of the nuclear power station, 'Shutdown', 'Cooling down' and 'Isolating', worked as designed even against an earthquake beyond the design assumption, and that fundamental nuclear safety could be assured. Nevertheless, it is said that the mass media were one of the causes of the spread of harmful rumours. For future press reports on nuclear power stations affected by earthquakes or experiencing trouble, we propose that publications be made genuinely useful for residents and citizens and be framed in a positive and constructive direction, so as not to repeat the same mistakes made this time. JSME should aim to implement the above-mentioned proposal in cooperation with other academic societies and organizations. (author)

  8. Performance of wire-type Rn detectors operated with gas gain in ambient air in view of its possible application to early earthquake predictions

    CERN Document Server

    Charpak, Georges; Breuil, P; Nappi, E; Martinengo, P; Peskov, V

    2010-01-01

    We describe a detector of alpha particles based on wire-type counters (single-wire and multiwire) operating in ambient air at high gas gains (100-1000). The main advantages of these detectors are low cost, robustness, and the ability to operate in humid air. The minimum detectable activity achieved with the multiwire detector for an integration time of 1 min is 140 Bq/m³, which is comparable to that of commercial devices. Owing to these features the detector is suited for large-scale deployment, for example for continuous monitoring of Rn or Po contamination or, as discussed in the paper, for use in a network of Rn counters in areas affected by earthquakes in order to verify, on a solid statistical basis, the envisaged correlation between a sudden Rn appearance and a forthcoming earthquake.

  9. Prediction and evaluation of nonlinear site response with potentially liquefiable layers in the area of Nafplion (Peloponnesus, Greece) for a repeat of historical earthquakes

    Directory of Open Access Journals (Sweden)

    V. K. Karastathis

    2010-11-01

    We examine the possible non-linear behaviour of potentially liquefiable layers at selected sites located within the expansion area of the town of Nafplion, East Peloponnese, Greece. Input motion is computed for three scenario earthquakes, selected on the basis of historical seismicity data, using a stochastic strong ground motion simulation technique that takes into account the finite dimensions of the earthquake sources. Site-specific ground acceleration synthetics and soil profiles are then used to evaluate the liquefaction potential at the sites of interest. The activation scenario of the Iria fault, which is the closest one to Nafplion (M=6.4), is found to be the most hazardous in terms of liquefaction initiation. In this scenario almost all the examined sites exhibit liquefaction features at depths of 6–12 m. For scenario earthquakes at two more distant seismic sources (Epidaurus fault, M6.3; Xylokastro fault, M6.7), strong ground motion amplification by the shallow soft soil layer is expected to be observed.

  10. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    Science.gov (United States)

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 ± 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.
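
    The forecast logic is essentially arithmetic on the repeat-time statistics; a toy version, using the 1906 Valparaíso shock as the previous great event purely for illustration (the abstract itself does not name it), is:

```python
def repeat_time_window(last_event_year, mean_repeat, sigma):
    """Naive recurrence forecast window: last event + (mean +/- sigma) repeat time."""
    return last_event_year + mean_repeat - sigma, last_event_year + mean_repeat + sigma

# Using the ~83 +/- 9 yr repeat time quoted above and an assumed previous great
# shock in 1906 gives a window that contains the 1985 event:
print(repeat_time_window(1906, 83, 9))   # -> (1980, 1998)
```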

  11. Fault failure with moderate earthquakes

    Science.gov (United States)

    Johnston, M. J. S.; Linde, A. T.; Gladwin, M. T.; Borcherdt, R. D.

    1987-12-01

    High-resolution strain and tilt recordings were made in the near field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10⁻⁸), with borehole dilatometers (resolution 10⁻¹⁰) and a 3-component borehole strainmeter (resolution 10⁻⁹). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure.

  12. Safety and survival in an earthquake

    Science.gov (United States)

    ,

    1969-01-01

    Many earth scientists in this country and abroad are focusing their studies on the search for means of predicting impending earthquakes, but, as yet, an accurate prediction of the time and place of such an event cannot be made. From past experience, however, one can assume that earthquakes will continue to harass mankind and that they will occur most frequently in the areas where they have been relatively common in the past. In the United States, earthquakes can be expected to occur most frequently in the western states, particularly in Alaska, California, Washington, Oregon, Nevada, Utah, and Montana. The danger, however, is not confined to any one part of the country; major earthquakes have occurred at widely scattered locations.

  13. The 2016 Kumamoto earthquake sequence.

    Science.gov (United States)

    Kato, Aitaro; Nakamura, Kouji; Hiyama, Yohei

    2016-01-01

    Beginning in April 2016, a series of shallow, moderate to large earthquakes with associated strong aftershocks struck the Kumamoto area of Kyushu, SW Japan. An Mj 7.3 mainshock occurred on 16 April 2016, close to the epicenter of an Mj 6.5 foreshock that occurred about 28 hours earlier. The intense seismicity released the accumulated elastic energy by right-lateral strike slip, mainly along two known, active faults. The mainshock rupture propagated along multiple fault segments with different geometries. The faulting style is reasonably consistent with regional deformation observed on geologic timescales and with the stress field estimated from seismic observations. One striking feature of this sequence is intense seismic activity, including a dynamically triggered earthquake in the Oita region. Following the mainshock rupture, postseismic deformation has been observed, as well as expansion of the seismicity front toward the southwest and northwest.

  14. The 2016 Kumamoto earthquake sequence

    Science.gov (United States)

    KATO, Aitaro; NAKAMURA, Kouji; HIYAMA, Yohei

    2016-01-01

    Beginning in April 2016, a series of shallow, moderate to large earthquakes with associated strong aftershocks struck the Kumamoto area of Kyushu, SW Japan. An Mj 7.3 mainshock occurred on 16 April 2016, close to the epicenter of an Mj 6.5 foreshock that occurred about 28 hours earlier. The intense seismicity released the accumulated elastic energy by right-lateral strike slip, mainly along two known, active faults. The mainshock rupture propagated along multiple fault segments with different geometries. The faulting style is reasonably consistent with regional deformation observed on geologic timescales and with the stress field estimated from seismic observations. One striking feature of this sequence is intense seismic activity, including a dynamically triggered earthquake in the Oita region. Following the mainshock rupture, postseismic deformation has been observed, as well as expansion of the seismicity front toward the southwest and northwest. PMID:27725474

  15. Effectiveness of water-air and octanol-air partition coefficients to predict lipophilic flavor release behavior from O/W emulsions.

    Science.gov (United States)

    Tamaru, Shunji; Igura, Noriyuki; Shimoda, Mitsuya

    2018-01-15

    Flavor release from food matrices depends on the partition of volatile flavor compounds between the food matrix and the vapor phase. Thus, we herein investigated the relationship between released flavor concentrations and three different partition coefficients, namely octanol-water, octanol-air, and water-air, which represented the oil, water, and air phases present in emulsions. Limonene, 2-methylpyrazine, nonanal, benzaldehyde, ethyl benzoate, α-terpineol, benzyl alcohol, and octanoic acid were employed. The released concentrations of these flavor compounds from oil-in-water (O/W) emulsions were measured under equilibrium using static headspace gas chromatography. The results indicated that water-air and octanol-air partition coefficients correlated with the logarithms of the released concentrations in the headspace for highly lipophilic flavor compounds. Moreover, the same tendency was observed over various oil volume ratios in the emulsions. Our findings therefore suggest that octanol-air and water-air partition coefficients can be used to predict the released concentration of lipophilic flavor compounds from O/W emulsions. Copyright © 2017 Elsevier Ltd. All rights reserved.
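
    The reported correlation suggests a simple regression of the log released headspace concentration on the log partition coefficients. The sketch below shows such a fit with placeholder numbers, not the paper's measurements:

```python
import numpy as np

# Hypothetical data: log10 octanol-air (Koa) and water-air (Kwa) partition
# coefficients and measured log10 headspace concentrations for a few flavor
# compounds (all values are placeholders, purely for illustration).
log_Koa = np.array([4.4, 5.1, 6.0, 6.8, 7.5])
log_Kwa = np.array([1.2, 0.8, 0.3, -0.2, -0.6])
log_headspace = np.array([-1.9, -2.4, -3.1, -3.8, -4.3])

# Multiple linear regression: log C_headspace ~ a*log_Koa + b*log_Kwa + c
X = np.column_stack([log_Koa, log_Kwa, np.ones_like(log_Koa)])
coef, *_ = np.linalg.lstsq(X, log_headspace, rcond=None)
print("fitted coefficients (a, b, c):", coef)
```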

  16. Potential consequences in Norway after a hypothetical accident at Leningrad nuclear power plant. Potential release, fallout and predicted impacts on the environment

    International Nuclear Information System (INIS)

    Nalbandyan, A.; Ytre-Eide, M.A.; Thoerring, H.; Liland, A.; Bartnicki, J.; Balonov, M.

    2012-06-01

    The report describes different hypothetical accident scenarios at the Leningrad nuclear power plant for both RBMK and VVER-1200 reactors. The estimated release is combined with different meteorological scenarios to predict possible fallout of radioactive substances in Norway. For a hypothetical catastrophic accident at an RBMK reactor combined with a meteorological worst case scenario, the consequences in Norway could be considerable. Foodstuffs in many regions would be contaminated above the food intervention levels for radioactive cesium in Norway. (Author)

  17. Potential consequences in Norway after a hypothetical accident at Leningrad nuclear power plant. Potential release, fallout and predicted impacts on the environment

    Energy Technology Data Exchange (ETDEWEB)

    Nalbandyan, A.; Ytre-Eide, M.A.; Thoerring, H.; Liland, A.; Bartnicki, J.; Balonov, M.

    2012-06-15

    The report describes different hypothetical accident scenarios at the Leningrad nuclear power plant for both RBMK and VVER-1200 reactors. The estimated release is combined with different meteorological scenarios to predict possible fallout of radioactive substances in Norway. For a hypothetical catastrophic accident at an RBMK reactor combined with a meteorological worst case scenario, the consequences in Norway could be considerable. Foodstuffs in many regions would be contaminated above the food intervention levels for radioactive cesium in Norway. (Author)

  18. Ionospheric earthquake precursors

    International Nuclear Information System (INIS)

    Bulachenko, A.L.; Oraevskij, V.N.; Pokhotelov, O.A.; Sorokin, V.N.; Strakhov, V.N.; Chmyrev, V.M.

    1996-01-01

    Results of experimental studies of ionospheric earthquake precursors, programme development on processes in the earthquake focus, and the physical mechanisms of formation of various types of precursors are considered. The composition of an experimental space-based system for monitoring earthquake precursors is determined. 36 refs., 5 figs

  19. Children's Ideas about Earthquakes

    Science.gov (United States)

    Simsek, Canan Lacin

    2007-01-01

    Earthquakes, a natural disaster, are among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be reduced correspondingly. In particular, good training about earthquakes received in primary schools is considered…

  20. Retrospective Evaluation of the Five-Year and Ten-Year CSEP-Italy Earthquake Forecasts

    OpenAIRE

    Werner, M. J.; Zechar, J. D.; Marzocchi, W.; Wiemer, S.

    2010-01-01

    On 1 August 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP) launched a prospective and comparative earthquake predictability experiment in Italy. The goal of the CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented eighteen five-year and ten...

  1. Crowdsourced earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  2. Case history of an anticipated event: The major (Mw = 7.0) Vrancea, Romania earthquake of 1986 - revisited

    International Nuclear Information System (INIS)

    Marza, V.; Burlacu, B V.; Pantea, A; Malita, Z.

    2002-01-01

    This is a reissue of a paper initially published in the European Seismological Commission Proceedings of the XXI General Assembly held on 23-27 August 1988 in Sofia, Bulgaria, p. 515-523, and released in 1989. We present here an excerpt of the original paper, taking advantage only of modern digital graphics, removing some typing mistakes and adding some explanatory late notes, in order to recall the conspicuous earthquake prediction research results obtained by Romanian seismology after the forecasted 1977 Vrancea major event. For the sake of understanding we distinguish between earthquake forecasting (long-term prediction, that is, a time window of years, but less than 20% of the mean return period for the involved magnitude, and a lead time of years) and earthquake anticipation (medium-term prediction, i.e. a time window of a few months and a lead time of months), stages which proved to be feasible for the Vrancea seismogenic zone. Analysis and discussion of a variety of precursory seismicity patterns (p.s.p.) belonging to all temporal developmental stages of the preparatory (geo)physical process leading to the killer and damaging major subcrustal Vrancea, Romania, earthquake of August 30, 1986 (epicenter 45.5°N/26.4°E; depth 144 km; magnitudes mw=7.0, Mw=7.3, ML=7.0; I0=VIII½ MSK) are performed and documented, clearly proving that the earthquake would not have been unexpected. The salient features of the Vrancea Seismogenic Zone (VSZ) and its tectonic setting have been presented elsewhere. The seismological database used in this study is the earthquake master catalogue of Constantinescu and Marza, updated on the basis of the data supplied by the real-time telemetered seismographic network of Romania, centered on the VSZ. The contents of the paper are as follows: 1. Introduction; 2. The Vrancea 1986 Major (mw=7.0) Subcrustal Earthquake Related Precursors; 2.1. Regularity Patterns; 2.2. Preseismic Quiescence; 2.3. Hypocentral migration

  3. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    Science.gov (United States)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    The usable and realistic ground motion maps for urban areas are generated either from the assumption of a "reference earthquake", or directly, showing values of macroseismic intensity generated by a damaging, real earthquake. In the study, applying a deterministic approach, an earthquake scenario in macroseismic intensity (a "model" earthquake scenario) for the city of Sofia is generated. The deterministic "model" intensity scenario based on the assumption of a "reference earthquake" is compared with a scenario based on observed macroseismic effects caused by the damaging 2012 earthquake (MW5.6). The difference between observed (Io) and predicted (Ip) intensity values is analyzed.

  4. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

    Earthquakes are one of the most destructive natural hazards on our planet. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw 9.0) and the 26 December 2004 Sumatra (Mw 9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and more economical structures in earthquake-prone regions.

  5. Prediction of E. coli release from streambed to water column during base flow periods using SWAT model

    Science.gov (United States)

    Microbial water quality in streams is of importance for recreation, irrigation, and other uses. The streambed sediment has been shown to harbor large fecal indicator bacteria (FIB) population that can be released to water column during high-flow events when sediments are resuspended. There have been...

  6. Scaling and spatial complementarity of tectonic earthquake swarms

    KAUST Repository

    Passarelli, Luigi

    2017-11-10

    Tectonic earthquake swarms (TES) often coincide with aseismic slip and sometimes precede damaging earthquakes. In spite of recent progress in understanding the significance and properties of TES at plate boundaries, their mechanics and scaling are still largely uncertain. Here we evaluate several TES that occurred during the past 20 years on a transform plate boundary in North Iceland. We show that the swarms complement each other spatially with later swarms discouraged from fault segments activated by earlier swarms, which suggests efficient strain release and aseismic slip. The fault area illuminated by earthquakes during swarms may be more representative of the total moment release than the cumulative moment of the swarm earthquakes. We use these findings and other published results from a variety of tectonic settings to discuss general scaling properties for TES. The results indicate that the importance of TES in releasing tectonic strain at plate boundaries may have been underestimated.
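
    The comparison implied above, between the cumulative moment of the swarm earthquakes and the moment that would correspond to slip over the whole illuminated fault area, can be sketched with the standard relations M0 = 10^(1.5·Mw + 9.1) and M0 = μ·A·D; all numerical values below are hypothetical.

```python
import numpy as np

def moment_from_magnitude(mw):
    """Seismic moment (N*m) from moment magnitude: M0 = 10**(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * np.asarray(mw) + 9.1)

def moment_from_area(area_m2, slip_m, mu=3.0e10):
    """Moment implied by a slipping fault patch, M0 = mu * A * D (N*m);
    mu is the shear modulus in Pa (3e10 is a typical crustal value)."""
    return mu * area_m2 * slip_m

# Hypothetical swarm: catalogue magnitudes vs. the fault area it illuminates.
swarm_mags = [3.1, 3.4, 2.8, 4.0, 3.6]
catalogue_moment = moment_from_magnitude(swarm_mags).sum()
illuminated = moment_from_area(area_m2=5e3 * 3e3, slip_m=0.05)  # 5 km x 3 km patch, 5 cm slip

print(f"cumulative catalogue moment:          {catalogue_moment:.2e} N*m")
print(f"moment if illuminated area slipped:   {illuminated:.2e} N*m")
```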

  7. Ionospheric precursors for crustal earthquakes in Italy

    Directory of Open Access Journals (Sweden)

    L. Perrone

    2010-04-01

    Crustal earthquakes with magnitude 6.0>M≥5.5 observed in Italy for the period 1979–2009, including the last one at L'Aquila on 6 April 2009, were considered to check whether the relationships obtained earlier for ionospheric precursors of strong Japanese earthquakes are valid for moderate Italian earthquakes. The ionospheric precursors are based on the observed variations of the sporadic E-layer parameters (h'Es, fbEs) and foF2 at the ionospheric station Rome. Empirical dependencies for the seismo-ionospheric disturbances relating the earthquake magnitude and the epicenter distance are obtained, and they are shown to be similar to those obtained earlier for Japanese earthquakes. The dependencies indicate the process of spreading of the disturbance from the epicenter towards the periphery during the earthquake preparation process. Large lead times for the precursor occurrence (up to 34 days for M=5.8–5.9) indicate a prolonged preparation period. The possibility of using the obtained relationships for earthquake prediction is discussed.

  8. Seismic damage to structures in the Ms6.5 Ludian earthquake

    Science.gov (United States)

    Chen, Hao; Xie, Quancai; Dai, Boyang; Zhang, Haoyu; Chen, Hongfu

    2016-03-01

    On 3 August 2014, the Ludian earthquake struck northwest Yunnan Province with a surface-wave magnitude of 6.5. This moderate earthquake unexpectedly caused high fatalities and great economic loss. Four strong-motion stations were located in the areas with intensity V, VI, VII and IX, near the epicentre. The characteristics of the ground motion are discussed herein, including: 1) ground motion was strong at periods of less than 1.4 s, which covers the natural vibration periods of a large number of structures; and 2) the released energy was geographically concentrated. Based on materials collected during emergency building inspections, the damage patterns of adobe, masonry, timber-frame and reinforced concrete (RC) frame structures in areas with different intensities are summarised. Earthquake damage matrices of local buildings are also given for fragility evaluation and earthquake damage prediction. It is found that the collapse ratios of RC frame and confined masonry structures based on the new design code are significantly lower than those of non-seismic buildings. However, the RC frame structures still failed to achieve the 'strong column, weak beam' design target. Traditional timber-frame structures with light infill walls showed good aseismic performance.
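
    An earthquake damage matrix of the kind mentioned above can be used directly for loss estimation: each row gives the probabilities of the damage states at a given intensity, and a mean damage ratio follows by weighting central damage ratios with those probabilities. The matrix and ratios below are placeholders, not the paper's values.

```python
import numpy as np

# Hypothetical damage matrix: rows are macroseismic intensities, columns are
# probabilities of damage states D0 (none) ... D4 (collapse); placeholder numbers.
damage_matrix = {
    "VI":  [0.55, 0.30, 0.10, 0.04, 0.01],
    "VII": [0.30, 0.35, 0.20, 0.10, 0.05],
    "IX":  [0.05, 0.15, 0.25, 0.30, 0.25],
}
# Assumed central damage ratios (repair cost / replacement cost) for D0..D4:
central_damage_ratio = np.array([0.0, 0.05, 0.20, 0.55, 1.0])

def mean_damage_ratio(intensity):
    """Expected damage ratio for a building class at the given intensity."""
    p = np.array(damage_matrix[intensity])
    return float(p @ central_damage_ratio)

for i in ("VI", "VII", "IX"):
    print(i, round(mean_damage_ratio(i), 3))
```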

  9. Radon anomalies prior to earthquakes (1). Review of previous studies

    International Nuclear Information System (INIS)

    Ishikawa, Tetsuo; Tokonami, Shinji; Yasuoka, Yumi; Shinogi, Masaki; Nagahama, Hiroyuki; Omori, Yasutaka; Kawada, Yusuke

    2008-01-01

    The relationship between radon anomalies and earthquakes has been studied for more than 30 years. However, most of the studies dealt with radon in soil gas or in groundwater. Before the 1995 Hyogoken-Nanbu earthquake, an anomalous increase of atmospheric radon was observed at Kobe Pharmaceutical University. The increase was well fitted by a mathematical model related to earthquake fault dynamics. This paper reports the significance of this observation, reviewing previous studies on radon anomalies before earthquakes. Groundwater and soil radon measurements for earthquake prediction began in the 1970s in Japan as well as in other countries. One of the most famous studies in Japan is the groundwater radon anomaly before the 1978 Izu-Oshima-kinkai earthquake. The significance of radon in earthquake prediction research has long been recognized, but recently its limitations have also been pointed out. Some researchers are looking for better precursor indicators; simultaneous measurements of radon and other gases are new trials in recent studies. In contrast to soil and groundwater radon, little attention has been paid to atmospheric radon before earthquakes. However, it might be possible to detect precursors in atmospheric radon before a large earthquake. In the following issues, we will discuss the details of the anomalous atmospheric radon data observed before the Hyogoken-Nanbu earthquake. (author)

  10. SPEEDI: a computer code system for the real-time prediction of radiation dose to the public due to an accidental release

    International Nuclear Information System (INIS)

    Imai, Kazuhiko; Chino, Masamichi; Ishikawa, Hirohiko

    1985-10-01

    SPEEDI, a computer code system for predicting environmental doses from radioactive materials accidentally released from a nuclear plant, has been developed to assist the organizations responsible for emergency planning. For realistic simulation, a model has been developed which statistically predicts the basic wind data and then calculates the three-dimensional mass-consistent wind field by interpolating these predicted data, together with a model for the diffusion of released materials that combines random-walk and PICK methods. These calculations are carried out in conversational mode with a computer so that the system can be used easily in an emergency. SPEEDI also has versatile files, which make it easy to control the complicated flow of calculations. In order to attain a short computation time, a large-scale computer with a performance of 25 MIPS and a vector processor of up to 250 MFLOPS are used for the model calculations, so that quick responses are obtained. Simplified models are also prepared for calculation on the minicomputers widely used by local governments and research institutes, although the same precision of calculation as with the above models cannot be expected. The present report outlines the structure and functions of SPEEDI, the methods for prediction of the wind field, the models for calculation of the concentration of released materials in air and on the ground, and the doses to the public. Some of the diffusion models have been compared with field experiments carried out as part of the SPEEDI development program. The report also discusses the reliability of the diffusion models on the basis of the comparison results, and shows that they can reasonably simulate diffusion in the internal boundary layer which commonly occurs near coastal regions. (J.P.N.)
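
    The random-walk part of such a dispersion model can be illustrated in a few lines; the sketch below advects a puff of particles in a uniform wind with a Gaussian random step of standard deviation √(2KΔt), which only demonstrates the idea and is not SPEEDI's actual implementation.

```python
import numpy as np

def random_walk_plume(n_particles=10000, n_steps=360, dt=10.0,
                      wind=(3.0, 0.5), K=25.0, seed=1):
    """Minimal 2-D random-walk dispersion of a puff released at the origin.
    Advection by a uniform wind (m/s) plus a Gaussian random displacement with
    standard deviation sqrt(2*K*dt) per step (K: eddy diffusivity, m^2/s)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 2))
    u = np.array(wind)
    for _ in range(n_steps):
        pos += u * dt + rng.normal(scale=np.sqrt(2.0 * K * dt), size=pos.shape)
    return pos

positions = random_walk_plume()
print("plume centre (m):", positions.mean(axis=0))
print("plume spread  (m):", positions.std(axis=0))
```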

  11. Predicted radionuclide release from reactor-related unenclosed solid objects dumped in the Sea of Japan and the Pacific Ocean, east coast of Kamchatka

    International Nuclear Information System (INIS)

    Mount, M.E.; Lynn, N.M.; Warden, J.M.

    1996-06-01

    Between 1978 and 1991 reactor-related solid radioactive waste was dumped by the former Soviet Union as unenclosed objects in the Pacific Ocean, east coast of Kamchatka, and the Sea of Japan. This paper presented estimates for the current (1994) inventory of activation and corrosion products contained in the reactor-related unenclosed solid objects. In addition, simple models derived for prediction of radionuclide release from marine reactors dumped in the Kara Sea are applied to certain of the dumped objects to provide estimates of radionuclide release to the Pacific Ocean, east coast of Kamchatka, and Sea of Japan environments. For the Pacific Ocean, east coast of Kamchatka, total release rates start below 0.01 GBq yr⁻¹ and, over 1,000 years, fall to 100 Bq yr⁻¹. In the Sea of Japan, the total release rate starts just above 1 GBq yr⁻¹, dropping off to a level less than 0.1 GBq yr⁻¹, extending past the year 4000

  12. Development of a general model to predict the rate of radionuclide release (source term) from a low-level waste shallow land burial facility

    International Nuclear Information System (INIS)

    Sullivan, T.M.; Kempf, C.R.; Suen, C.J.; Mughabghab, S.M.

    1988-01-01

    The Code of Federal Regulations, 10 CFR 61, requires that any near-surface disposal site be capable of being characterized, analyzed, and modeled. The objective of this program is to assist the NRC in developing the ability to model a disposal site that conforms to these regulations. In particular, a general computer model capable of predicting the quantity and rate of radionuclide release from a shallow land burial trench, i.e., the source term, is being developed. The framework for this general model has been developed and consists of four basic compartments that represent the major processes that influence release: water flow, container degradation, release from the waste packages, and radionuclide transport. Models for water flow and radionuclide transport rely on the use of the computer codes FEMWATER and FEMWASTE. These codes are generally regarded as state-of-the-art and required little modification for their application to this project. Models for container degradation and release from waste packages have been developed specifically for this project. This paper provides a brief description of the models being used in the source term project and examples of their use over a range of potential conditions. 13 refs

  13. Experimental study of structural response to earthquakes

    International Nuclear Information System (INIS)

    Clough, R.W.; Bertero, V.V.; Bouwkamp, J.G.; Popov, E.P.

    1975-01-01

    The objectives, methods, and some of the principal results obtained from experimental studies of the behavior of structures subjected to earthquakes are described. Although such investigations are being conducted in many laboratories throughout the world, the information presented deals specifically with projects being carried out at the Earthquake Engineering Research Center (EERC) of the University of California, Berkeley. A primary purpose of these investigations is to obtain detailed information on the inelastic response mechanisms in typical structural systems so that the experimentally observed performance can be compared with computer-generated analytical predictions. Only by such comparisons can the mathematical models used in dynamic nonlinear analyses be verified and improved. Two experimental procedures for investigating earthquake structural response are discussed: the earthquake simulator facility, which subjects the base of the test structure to acceleration histories similar to those recorded in actual earthquakes, and systems of hydraulic rams, which impose specified displacement histories on test components, equivalent to motions developed in structures subjected to actual earthquakes. The general concept and performance of the 20-ft-square EERC earthquake simulator are described, and the testing of a two-story concrete frame building is outlined. Correlation of the experimental results with analytical predictions demonstrates that satisfactory agreement can be obtained only if the mathematical model incorporates a stiffness deterioration mechanism which simulates the cracking and other damage suffered by the structure

  14. A three-dimensional semi-analytical solution for predicting drug release through the orifice of a spherical device.

    Science.gov (United States)

    Simon, Laurent; Ospina, Juan

    2016-07-25

    Three-dimensional solute transport was investigated for a spherical device with a release hole. The governing equation was derived using Fick's second law. A mixed Neumann-Dirichlet condition was imposed at the boundary to represent diffusion through a small region on the surface of the device. The cumulative percentage of drug released was calculated in the Laplace domain and represented by the first term of an infinite series of Legendre and modified Bessel functions of the first kind. Application of the Zakian algorithm yielded the time-domain closed-form expression. The first-order solution closely matched a numerical solution generated by Mathematica®. The proposed method allowed computation of the characteristic time. A larger surface pore resulted in a smaller effective time constant. The agreement between the numerical solution and the semi-analytical method improved noticeably as the size of the orifice increased. It took four time constants for the device to release approximately ninety-eight percent of its drug content. Copyright © 2016 Elsevier B.V. All rights reserved.
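
    The quoted release time is consistent with a single-exponential view of the process: if the released fraction follows 1 − exp(−t/τ), four time constants give about 98%. A one-line check (a simplification, not the paper's semi-analytical series solution):

```python
import math

def fraction_released(t, tau):
    """Released fraction under a first-order (single-exponential) release model."""
    return 1.0 - math.exp(-t / tau)

print(fraction_released(4.0, 1.0))   # ~0.982, i.e. ~98% after four time constants
```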

  15. A prediction of the UO2 fission gas release data of Bellamy and Rich using a model recently developed by combustion engineering

    International Nuclear Information System (INIS)

    Freeburn, H.R.; Pati, S.R.

    1983-01-01

    The trend in the Light Water Reactor industry to higher discharge burnups of UO 2 fuel rods has initiated the modification of existing fuel rod models to better account for high burnup effects. A model recently developed by Combustion Engineering, Inc. (C-E) for fission gas release from UO 2 fuel recognizes the separate effects of temperature-dependent and temperature-independent release mechanisms. This model accounts for a moderate burnup enhancement that is based on a concept of a saturation inventory existing for the intra- and inter-grannular storage of fission gas within the fuel pellet. The saturation inventory, as modelled, is strongly dependent on the local temperature and the changing grain size of the fuel with burnup. Although the fitting constants of the model were determined solely from more current gas release data from fuel more typical of the C-E product line, the model, nonetheless, provides an excellent prediction of the Bellamy and Rich data over the entire burnup range represented by the data (+-1.6% gas release at a 1σ level). The ability to obtain a good comparison with this data base provides additional support for the use of the particular separation of the effects of thermal diffusion and burnup enhancement on fission gas release that is embodied in the model. Furthermore, the degree of burnup enhancement in the model is believed to be moderate enough to suggest that this high burnup effect should not impede the extension of discharge burnup limits associated with current design fuel rods for Pressurized Water Reactors

  16. Imprecision of dose predictions for radionuclides released to the environment: an application of a Monte Carlo simulation technique

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, G; Hoffman, F O

    1980-01-01

    An evaluation of the imprecision in dose predictions for radionuclides has been performed using correct dose assessment models and knowledge of model parameter value uncertainties. The propagation of parameter uncertainties is demonstrated using a Monte Carlo technique for elemental iodine 131 transported via the pasture-cow-milk-child pathway. Results indicated that when site-specific information is unavailable, the imprecision inherent in the predictions for this pathway is potentially large. (3 graphs, 25 references, 5 tables)
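
    The Monte Carlo propagation can be sketched by sampling each transfer parameter of a multiplicative pathway and multiplying the samples; the lognormal parameters below are placeholders chosen only to illustrate the technique, not values from the assessment.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical lognormal distributions for a multiplicative pasture-cow-milk-child
# style transfer chain (geometric means/spreads are placeholders).
deposition      = rng.lognormal(mean=np.log(1.0),  sigma=np.log(2.0), size=n)  # Bq/m^2 per unit release
interception    = rng.lognormal(mean=np.log(0.3),  sigma=np.log(1.5), size=n)  # dimensionless
feed_to_milk    = rng.lognormal(mean=np.log(5e-3), sigma=np.log(2.0), size=n)  # d/L
milk_intake     = rng.lognormal(mean=np.log(0.5),  sigma=np.log(1.3), size=n)  # L/d
dose_per_intake = rng.lognormal(mean=np.log(1e-7), sigma=np.log(1.8), size=n)  # Sv/Bq

dose = deposition * interception * feed_to_milk * milk_intake * dose_per_intake

print("median dose:", np.median(dose))
print("95th percentile / median:", np.quantile(dose, 0.95) / np.median(dose))
```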

  17. Fault roughness and strength heterogeneity control earthquake size and stress drop

    KAUST Repository

    Zielke, Olaf

    2017-01-13

    An earthquake's stress drop is related to the frictional breakdown during sliding and constitutes a fundamental quantity of the rupture process. High-speed laboratory friction experiments that emulate the rupture process imply stress drop values that greatly exceed those commonly reported for natural earthquakes. We hypothesize that this stress drop discrepancy is due to fault-surface roughness and strength heterogeneity: an earthquake's moment release and its recurrence probability depend not only on stress drop and rupture dimension but also on the geometric roughness of the ruptured fault and the location of failing strength asperities along it. Using large-scale numerical simulations of earthquake ruptures under varying roughness and strength conditions, we verify our hypothesis, showing that smoother faults may generate larger earthquakes than rougher faults under identical tectonic loading conditions. We further discuss the potential impact of fault roughness on earthquake recurrence probability. This finding also provides important information for seismic hazard analysis.
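
    The link between rupture dimension, moment and stress drop invoked above is commonly summarized by the Eshelby circular-crack relation Δσ = (7/16)·M0/r³; a short sketch with an invented example event is:

```python
def stress_drop_circular(moment_nm, radius_m):
    """Eshelby circular-crack estimate: delta_sigma = (7/16) * M0 / r**3 (Pa).
    Shows how, for a fixed stress drop, the moment a rupture can release scales
    with the cube of its dimension."""
    return 7.0 / 16.0 * moment_nm / radius_m ** 3

# Hypothetical example: an M0 = 1e18 N*m event (Mw ~ 6) with a 5 km rupture radius.
print(stress_drop_circular(1e18, 5e3) / 1e6, "MPa")   # ~3.5 MPa
```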

  18. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: "Kursk" submarine study

    Science.gov (United States)

    Baklanov, A.; Mahura, A.; Sørensen, J. H.

    2003-03-01

    There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second during the operation stage. The suggested methodology is applied on an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found that the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators allowed to identify the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release are performed. The analysis showed that possible deposition fractions of 10⁻¹¹ over the Kola Peninsula, and 10⁻¹² – 10⁻¹³ for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.

  19. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: "Kursk" submarine study

    Science.gov (United States)

    Baklanov, A.; Mahura, A.; Sørensen, J. H.

    2003-06-01

    There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second during the operation stage. The suggested methodology is applied on an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found that the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators allowed to identify the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release (e.g. 1 Bq) are performed. The analysis showed that possible deposition fractions of 10⁻¹¹ Bq/m² over the Kola Peninsula, and 10⁻¹² – 10⁻¹³ Bq/m² for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.
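
    Because atmospheric transport and deposition are linear in the source term, forecasts made for a unit hypothetical release scale directly to any assumed release; a trivial sketch with an invented source term is:

```python
def scaled_deposition(unit_deposition_per_bq, source_term_bq):
    """Scale a unit-release (1 Bq) deposition field to an assumed source term.
    Valid because atmospheric transport and deposition are linear in the release."""
    return unit_deposition_per_bq * source_term_bq

# Hypothetical source term of 1e15 Bq of 137Cs applied to the quoted
# 1e-11 (Bq/m^2 per Bq released) deposition fraction over the Kola Peninsula:
print(scaled_deposition(1e-11, 1e15), "Bq/m^2")   # 1e4 Bq/m^2
```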

  20. Methodology for prediction and estimation of consequences of possible atmospheric releases of hazardous matter: 'Kursk' submarine study

    Directory of Open Access Journals (Sweden)

    A. Baklanov

    2003-01-01

    There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second during the operation stage. The suggested methodology is applied on an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found that the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators allowed to identify the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release (e.g. 1 Bq) are performed. The analysis showed that possible deposition fractions of 10⁻¹¹ Bq/m² over the Kola Peninsula, and 10⁻¹² – 10⁻¹³ Bq/m² for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.

  1. Encyclopedia of earthquake engineering

    CERN Document Server

    Kougioumtzoglou, Ioannis; Patelli, Edoardo; Au, Siu-Kui

    2015-01-01

    The Encyclopedia of Earthquake Engineering is designed to be the authoritative and comprehensive reference covering all major aspects of the science of earthquake engineering, specifically focusing on the interaction between earthquakes and infrastructure. The encyclopedia comprises approximately 265 contributions. Since earthquake engineering deals with the interaction between earthquake disturbances and the built infrastructure, the emphasis is on basic design processes important to both non-specialists and engineers so that readers become suitably well-informed without needing to deal with the details of specialist understanding. The content of this encyclopedia provides technically inclined and informed readers about the ways in which earthquakes can affect our infrastructure and how engineers would go about designing against, mitigating and remediating these effects. The coverage ranges from buildings, foundations, underground construction, lifelines and bridges, roads, embankments and slopes. The encycl...

  2. The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake

    Science.gov (United States)

    Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena

    2017-04-01

    Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low-frequency earthquakes, slow slip events, and other modes of fault slip. These phenomena represent modes of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes, their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. Whereas the regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, slow earthquakes release energy in ways that do little damage. This study focuses on the characteristics of the Mw5.6 earthquake that occurred in the Sofia seismic zone on May 22nd, 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria and faces considerable earthquake risk. The Sofia seismic zone is located in south-western Bulgaria, an area with pronounced tectonic activity and proven crustal movement. In the 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) experienced two strong earthquakes with epicentral intensity of 10 MSK. During the 20th century the strongest event that occurred in the vicinity of the city of Sofia is the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The 2012 quake occurred in an area characterized by a long quiescence (of 95 years) for moderate events. Moreover, a reduced number of small earthquakes have also been registered in the recent past. The Mw5.6 earthquake was widely felt on the territory of Bulgaria and neighbouring countries. No casualties or severe injuries were reported. Mostly moderate damage was observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative for a

  3. Well bore Flow Treatment Used to Predict Radioactive Brine Releases to the Surface from Future Drilling Penetrations into the Waste Isolation Pilot Plant (WIPP), New Mexico, USA

    International Nuclear Information System (INIS)

    Brien, D.G.O.; Stoelzel, D.M.; Hadgu, T.

    1999-01-01

    The Waste Isolation Pilot Plant (WIPP) is the U.S. Department of Energy's (DOE) mined geologic repository in southeastern New Mexico, USA. This site is designed for the permanent burial of transuranic radioactive waste generated by defense-related activities. The waste produces gases when exposed to brine. This gas generation may result in increased pressures over time. Therefore, a future driller that unknowingly penetrates through the site may experience a blowout. This paper describes the methodology used to predict the resultant volumes of contaminated brine released

  4. Stress drops of induced and tectonic earthquakes in the central United States are indistinguishable.

    Science.gov (United States)

    Huang, Yihe; Ellsworth, William L; Beroza, Gregory C

    2017-08-01

    Induced earthquakes currently pose a significant hazard in the central United States, but there is considerable uncertainty about the severity of their ground motions. We measure stress drops of 39 moderate-magnitude induced and tectonic earthquakes in the central United States and eastern North America. Induced earthquakes, more than half of which are shallower than 5 km, show a median stress drop comparable to that of tectonic earthquakes in the central United States, which are dominantly strike-slip, but a lower median stress drop than that of tectonic earthquakes in eastern North America, which are dominantly reverse-faulting. This suggests that ground motion prediction equations developed for tectonic earthquakes can be applied to induced earthquakes if the effects of depth and faulting style are properly considered. Our observation leads to the notion that, similar to tectonic earthquakes, induced earthquakes are driven by tectonic stresses.

  5. Earthquake at 40 feet

    Science.gov (United States)

    Miller, G. J.

    1976-01-01

    The earthquake that struck the island of Guam on November 1, 1975, at 11:17 a.m. had many unique aspects - not the least of which was the experience of an earthquake of 6.25 Richter magnitude while at 40 feet. My wife Bonnie, a fellow diver, Greg Guzman, and I were diving at Gabgab Beach in the outer harbor of Apra Harbor, engaged in underwater photography when the earthquake struck.

  6. Earthquakes and economic growth

    OpenAIRE

    Fisker, Peter Simonsen

    2012-01-01

    This study explores the economic consequences of earthquakes. In particular, it investigates how exposure to earthquakes affects economic growth both across and within countries. The key result of the empirical analysis is that while there are no observable effects at the country level, earthquake exposure significantly decreases 5-year economic growth at the local level. Areas at lower stages of economic development suffer more in terms of economic growth than richer areas. In addition,...

  7. Earthquakes, detecting and understanding them

    International Nuclear Information System (INIS)

    2008-05-01

    The surface of the Earth is continually changing on a geological timescale. The tectonic plates which make up this surface are moving in relation to each other. On a human timescale, these movements are expressed as earthquakes, which suddenly release energy accumulated over a period of time. The vibrations they produce propagate through the interior of the Earth: these are seismic waves. However, other phenomena can generate seismic waves, such as volcanoes, quarry blasts, etc. The surf of the ocean waves on the coasts, the wind in the trees and human activity (industry and road traffic) all contribute to the 'seismic background noise'. Sensors are able to detect signals from events, which are then discriminated, analyzed and located. Earthquakes and active volcanoes are not distributed randomly over the surface of the globe: they mainly coincide with mountain chains and ocean trenches and ridges. 'An earthquake results from the abrupt release of the energy accumulated by movements and rubbing of different plates'. The study of the propagation of seismic waves has made it possible to determine the outline of the plates inside the Earth and has highlighted their movements. There are seven major plates which are colliding, diverging or sliding past each other. Each year the continents move several centimeters with respect to one another. This process, known as 'continental drift', was finally explained by plate tectonics. The initial hypothesis for this science dates from the beginning of the 20th century, but it was not confirmed until the 1960s. It explains that convection inside the Earth is the source of the forces required for these movements. As well as explaining these great movements, this science has provided a coherent, unifying and quantitative framework, which unites the explanations for all the geophysical phenomena under one mechanism. (authors)

  8. Wrightwood and the earthquake cycle: What a long recurrence record tells us about how faults work

    Science.gov (United States)

    Weldon, R.; Scharer, K.; Fumal, T.; Biasi, G.

    2004-01-01

    The concept of the earthquake cycle is so well established that one often hears statements in the popular media like, "the Big One is overdue" and "the longer it waits, the bigger it will be." Surprisingly, data to critically test the variability in recurrence intervals, rupture displacements, and relationships between the two are almost nonexistent. To generate a long series of earthquake intervals and offsets, we have conducted paleoseismic investigations across the San Andreas fault near the town of Wrightwood, California, excavating 45 trenches over 18 years, and can now provide some answers to basic questions about recurrence behavior of large earthquakes. To date, we have characterized at least 30 prehistoric earthquakes in a 6000-yr-long record, complete for the past 1500 yr and for the interval 3000-1500 B.C. For the past 1500 yr, the mean recurrence interval is 105 yr (31-165 yr for individual intervals) and the mean slip is 3.2 m (0.7-7 m per event). The series is slightly more ordered than random and has a notable cluster of events, during which strain was released at 3 times the long-term average rate. Slip associated with an earthquake is not well predicted by the interval preceding it, and only the largest two earthquakes appear to affect the time interval to the next earthquake. Generally, short intervals tend to coincide with large displacements and long intervals with small displacements. The most significant correlation we find is that earthquakes are more frequent following periods of net strain accumulation spanning multiple seismic cycles. The extent of paleoearthquake ruptures may be inferred by correlating event ages between different sites along the San Andreas fault. Wrightwood and other nearby sites experience rupture that could be attributed to overlap of relatively independent segments that each behave in a more regular manner. However, the data are equally consistent with a model in which the irregular behavior seen at Wrightwood

  9. The 2007 Mentawai earthquake sequence on the Sumatra megathrust

    Science.gov (United States)

    Konca, A.; Avouac, J.; Sladen, A.; Meltzner, A. J.; Kositsky, A. P.; Sieh, K.; Fang, P.; Li, Z.; Galetzka, J.; Genrich, J.; Chlieh, M.; Natawidjaja, D. H.; Bock, Y.; Fielding, E. J.; Helmberger, D. V.

    2008-12-01

    The Sumatra megathrust has recently produced a flurry of large interplate earthquakes starting with the giant Mw 9.15 Aceh earthquake of 2004. All of these earthquakes occurred within the area monitored by the Sumatra Geodetic Array (SuGAr), which provided exceptional records of near-field co-seismic and postseismic ground displacements. The most recent of these major earthquakes, an Mw 8.4 earthquake and an Mw 7.9 earthquake twelve hours later, occurred in the Mentawai islands area, where devastating historical earthquakes had happened in 1797 and 1833. The 2007 earthquake sequence provides an exceptional opportunity to understand the variability of earthquakes along megathrusts and their relation to interseismic coupling. The InSAR, GPS and teleseismic modeling shows that the 2007 earthquakes ruptured a fraction of the strongly coupled Mentawai patch of the megathrust, which is also only a fraction of the 1833 rupture area. The sequence also released a much smaller moment than the one released in 1833, or than the deficit of moment that has accumulated since. Both earthquakes of 2007 consist of two sub-events located 50 to 100 km apart from each other. On the other hand, the northernmost slip patch of the Mw 8.4 earthquake and the southern slip patch of the Mw 7.9 earthquake abut each other, yet they ruptured 12 hours apart. Sunda megathrust earthquakes of recent years include a rupture of a strongly coupled patch that closely mimics a prior rupture of that patch and which is well correlated with the interseismic coupling pattern (Nias-Simeulue section), as well as a rupture sequence of a strongly coupled patch that differs substantially in its details from its most recent predecessors (Mentawai section). We conclude that (1) seismic asperities are probably persistent features which arise from heterogeneous strain build-up in the interseismic period; and (2) the same portion of a megathrust can rupture in different ways depending on whether asperities break as isolated events or cooperate to produce

  10. Imprecision of dose predictions for radionuclides released to the environment: an application of a Monte Carlo simulation technique

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, G; Hoffman, F O

    1980-01-01

    An evaluation of the imprecision in dose predictions has been performed using current dose assessment models and present knowledge of the variability or uncertainty in model parameter values. The propagation of parameter uncertainties is demonstrated using a Monte Carlo technique for elemental 131I transported via the pasture-cow-milk-child pathway. The results indicate that when site-specific information is not available the imprecision inherent in the predictions for this pathway is potentially large. Generally, the 99th percentile in thyroid dose for children was predicted to be approximately an order of magnitude greater than the median value. The potential consequences of the imprecision in dose for radiation protection purposes are discussed.
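
    A toy Monte Carlo sketch of this kind of uncertainty propagation is given below; the pathway is reduced to a simple product of lognormal transfer parameters, and all medians and geometric standard deviations are illustrative assumptions rather than the values used in the study.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        def lognorm(median, gsd, size):
            # lognormal sample defined by median and geometric standard deviation
            return rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=size)

        # Hypothetical multiplicative pasture-cow-milk-child chain (units simplified)
        interception = lognorm(0.5, 1.8, n)    # fraction intercepted by pasture
        feed_to_milk = lognorm(1e-2, 2.5, n)   # d/L, transfer from feed to milk
        milk_intake  = lognorm(0.5, 1.5, n)    # L/d, child's milk consumption
        dose_factor  = lognorm(1e-6, 1.6, n)   # Sv/Bq ingested (thyroid)

        deposit = 1e3                          # Bq/m^2, assumed ground deposition
        dose = deposit * interception * feed_to_milk * milk_intake * dose_factor

        print("median dose:", np.median(dose))
        print("99th percentile:", np.percentile(dose, 99))
        print("99th / median ratio:", np.percentile(dose, 99) / np.median(dose))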

  11. Our response to the earthquake at Onagawa Nuclear Power Station

    International Nuclear Information System (INIS)

    Hirakawa, Tomoshi

    2008-01-01

    When the Miyagi Offshore earthquake occurred on August 16, 2005, all three units at the Onagawa NPS were shut down automatically by the 'Strong Seismic Acceleration' signal. Our inspection after the earthquake confirmed there was no damage to the equipment of the nuclear power plants, but the analysis of the response spectrum observed at the bedrock showed that the earthquake had exceeded the 'design-basis earthquake' at certain periods, so we implemented a review of the seismic safety of the plant facilities. In the review, the ground motion of the Miyagi Offshore earthquake predicted to occur in the near future was re-examined based on the observation data, and 'The Ground Motion for Safety Check', surpassing the supposed ground motion of the largest earthquake, was established. The seismic safety of plant facilities important for safety was assured. At present, units No.1 to No.3 at Onagawa NPS have returned to normal operation. (author)

  12. Seismic-electromagnetic precursors of Romania's Vrancea earthquakes

    International Nuclear Information System (INIS)

    Enescu, B.D.; Enescu, C.; Constantin, A. P.

    1999-01-01

    Diagrams were plotted from electromagnetic data that were recorded at Muntele Rosu Observatory during December 1996 to January 1997, and December 1997 to September 1998. The times when Vrancea earthquakes of magnitudes M ≥ 3.9 occurred within these periods are marked on the diagrams. The parameters of the earthquakes are given in a table which also includes information on the magnetic and electric anomalies (perturbations) preceding these earthquakes. The magnetic data prove that Vrancea earthquakes are preceded by magnetic perturbations that may be regarded as their short-term precursors. Perturbations which could likewise be seen as short-term precursors of Vrancea earthquakes are also noticed in the electric records. Still, a number of the electric data cast doubt on their precursory nature. Some suggestions are made at the end of the paper on how electromagnetic research should proceed to be of use for Vrancea earthquake prediction. (authors)

  13. Cooperative earthquake research between the United States and the People's Republic of China

    Energy Technology Data Exchange (ETDEWEB)

    Russ, D.P.; Johnson, L.E.

    1986-01-01

    This paper describes cooperative research by scientists of the US and the People's Republic of China (PRC) which has resulted in important new findings concerning the fundamental characteristics of earthquakes and new insight into mitigating earthquake hazards. There have been over 35 projects cooperatively sponsored by the Earthquake Studies Protocol in the past 5 years. The projects are organized into seven annexes, including investigations in earthquake prediction, intraplate faults and earthquakes, earthquake engineering and hazards investigation, deep crustal structure, rock mechanics, seismology, and data exchange. Operational earthquake prediction experiments are currently being developed at two primary sites: western Yunnan Province near the town of Xiaguan, where there are several active faults, and the northeast China plain, where the devastating 1976 Tangshan earthquake occurred.

  14. Critical behavior in earthquake energy dissipation

    Science.gov (United States)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes bounded by latitudes 29° S and 35.5° S, and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, the time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for the probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place in the largest bursts of activity, such as major earthquakes, as opposed to smaller activations, which contribute less significantly although they occur more frequently. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.

  15. The biowaiver extension for BCS class III drugs: the effect of dissolution rate on the bioequivalence of BCS class III immediate-release drugs predicted by computer simulation.

    Science.gov (United States)

    Tsume, Yasuhiro; Amidon, Gordon L

    2010-08-02

    The Biopharmaceutical Classification System (BCS) guidance issued by the FDA allows waivers for in vivo bioavailability and bioequivalence studies for immediate-release (IR) solid oral dosage forms only for BCS class I drugs. However, a number of drugs within BCS class III have been proposed to be eligible for biowaivers. The World Health Organization (WHO) has shortened the requisite dissolution time of BCS class III drugs on their Essential Medicine List (EML) from 30 to 15 min for extended biowaivers; however, the impact of the shorter dissolution time on AUC(0-inf) and C(max) is unknown. The objectives of this investigation were to assess the ability of gastrointestinal simulation software to predict the oral absorption of the BCS class I drugs propranolol and metoprolol and the BCS class III drugs cimetidine, atenolol, and amoxicillin, and to perform in silico bioequivalence studies to assess the feasibility of extending biowaivers to BCS class III drugs. The drug absorption from the gastrointestinal tract was predicted using physicochemical and pharmacokinetic properties of test drugs provided by GastroPlus (version 6.0). Virtual trials with a 200 mL dose volume at different drug release rates (T(85%) = 15 to 180 min) were performed to predict the oral absorption (C(max) and AUC(0-inf)) of the above drugs. Both BCS class I drugs satisfied bioequivalence with regard to the release rates up to 120 min. The results with BCS class III drugs demonstrated bioequivalence using the prolonged release rate, T(85%) = 45 or 60 min, indicating that the dissolution standard for bioequivalence is dependent on the intestinal membrane permeability and permeability profile throughout the gastrointestinal tract. The results of GastroPlus simulations indicate that the dissolution rate of BCS class III drugs could be prolonged to the point where dissolution, rather than permeability, would control the overall absorption. For BCS class III drugs with intestinal absorption patterns

  16. Physical models and codes for prediction of activity release from defective fuel rods under operation conditions and in leakage tests during refuelling

    International Nuclear Information System (INIS)

    Likhanskii, V.; Evdokimov, I.; Khoruzhii, O.; Sorokin, A.; Novikov, V.

    2003-01-01

    It is appropriate to use dependences based on physical models in design-analytical codes to improve the reliability of defective fuel rod detection and to determine defect characteristics from activity measurements in the primary coolant. This paper presents results on the development of several physical models and integral mechanistic codes intended for predicting defective fuel rod behaviour. The analysis of mass transfer and mass exchange between the fuel rod and the coolant showed that the rates of these processes depend on many factors, such as coolant turbulent flow, pressure, the effective hydraulic diameter of the defect, and fuel rod geometric parameters. Models describing these dependences have been created. The models of thermomechanical fuel behaviour and stable gaseous FP release were modified, and on this basis a new computer code, RTOP-CA, was created to describe defective fuel rod behaviour and activity release into the primary coolant. The model of fuel oxidation under in-pile conditions, which includes radiolysis, and the RTOP-LT code are planned to be used, after validation of the physical models, for prediction of defective fuel rod behaviour

  17. OMG Earthquake! Can Twitter improve earthquake response?

    Science.gov (United States)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 and 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
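
    A minimal sketch of the rate-threshold idea behind such a detector is shown below; the one-minute window and the trigger threshold are illustrative choices, motivated only by the background (<1 tweet/hour) and triggered (~150 tweets/minute) rates quoted in the abstract.

        from collections import deque
        from datetime import datetime, timedelta

        WINDOW = timedelta(minutes=1)
        THRESHOLD = 20            # keyword tweets per window needed to declare a detection
        recent = deque()          # timestamps of keyword-matching tweets

        def on_tweet(timestamp):
            """Register one 'earthquake' tweet; return True when a detection fires."""
            recent.append(timestamp)
            while recent and timestamp - recent[0] > WINDOW:
                recent.popleft()
            return len(recent) >= THRESHOLD

        # Example: 25 tweets arriving two seconds apart trigger a detection.
        t0 = datetime(2009, 3, 30, 10, 40, 0)
        print(any(on_tweet(t0 + timedelta(seconds=2 * i)) for i in range(25)))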

  18. Radon, gas geochemistry, groundwater, and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    King, Chi-Yu [Power Reactor and Nuclear Fuel Development Corp., Tono Geoscience Center, Toki, Gifu (Japan)

    1998-12-31

    Radon monitoring in groundwater, soil air, and the atmosphere has been continued in many seismic areas of the world for earthquake-prediction and active-fault studies. Some recent measurements of radon and other geochemical and hydrological parameters have been made for sufficiently long periods, with reliable instruments, and together with measurements of meteorological variables and solid-earth tides. The resultant data are useful in better distinguishing earthquake-related changes from various background noises. Some measurements have been carried out in areas where other geophysical measurements are also being made. Comparative studies of various kinds of geophysical data are helpful in ascertaining the reality of the earthquake-related and fault-related anomalies and in understanding the underlying mechanisms. Spatial anomalies of radon and other terrestrial gases have been observed for many active faults. Such observations indicate that gas concentrations are very much site dependent, particularly on fault zones where terrestrial fluids may move vertically. Temporal anomalies have been reliably observed before and after some recent earthquakes, including the 1995 Kobe earthquake, and the general pattern of anomaly occurrence remains the same as observed before: they are recorded at only relatively few sensitive sites, which can be at much larger distances than expected from existing earthquake-source models. The sensitivity of a sensitive site is also found to change with time. These results clearly show the inadequacy of the existing dilatancy-fluid diffusion and elastic-dislocation models for earthquake sources to explain earthquake-related geochemical and geophysical changes recorded at large distances. (J.P.N.)

  19. United States earthquake early warning system: how theory and analysis can save America before the big one happens

    OpenAIRE

    Rockabrand, Ryan

    2017-01-01

    Approved for public release; distribution is unlimited The United States is extremely vulnerable to catastrophic earthquakes. More than 143 million Americans may be threatened by damaging earthquakes in the next 50 years. This thesis argues that the United States is unprepared for the most catastrophic earthquakes the country faces today. Earthquake early warning systems are a major solution in practice to reduce economic risk, to protect property and the environment, and to save lives. Ot...

  20. Real-time earthquake data feasible

    Science.gov (United States)

    Bush, Susan

    Scientists agree that early warning devices and monitoring of both Hurricane Hugo and the Mt. Pinatubo volcanic eruption saved thousands of lives. What would it take to develop this sort of early warning and monitoring system for earthquake activity? Not all that much, claims a panel assigned to study the feasibility, costs, and technology needed to establish a real-time earthquake monitoring (RTEM) system. The panel, drafted by the National Academy of Science's Committee on Seismology, has presented its findings in Real-Time Earthquake Monitoring. The recently released report states that “present technology is entirely capable of recording and processing data so as to provide real-time information, enabling people to mitigate somewhat the earthquake disaster.” RTEM systems would consist of two parts: an early warning system that would give a few seconds' warning before severe shaking, and immediate post-quake information within minutes of the quake that would give actual measurements of the magnitude. At this time, however, this type of warning system has not been addressed at the national level for the United States and is not included in the National Earthquake Hazard Reduction Program, according to the report.

  1. A 'new generation' earthquake catalogue

    Directory of Open Access Journals (Sweden)

    E. Boschi

    2000-06-01

    Full Text Available In 1995, we published the first release of the Catalogo dei Forti Terremoti in Italia, 461 a.C. - 1980, in Italian (Boschi et al., 1995). Two years later this was followed by a second release, again in Italian, that included more earthquakes, more accurate research and a longer time span, 461 B.C. to 1990 (Boschi et al., 1997). Aware that the record of Italian historical seismicity is probably the most extensive of the whole world, and hence that our catalogue could be of interest for a wider international readership, Italian was clearly not the appropriate language to share this experience with colleagues from foreign countries. Three years after publication of the second release therefore, and after much additional research and fine tuning of methodologies and algorithms, I am proud to introduce this third release in English. All the tools and accessories have been translated along with the texts describing the development of the underlying research strategies and current contents. The English title is Catalogue of Strong Italian Earthquakes, 461 B.C. to 1997. This Preface briefly describes the scientific context within which the Catalogue of Strong Italian Earthquakes was conceived and progressively developed. The catalogue is perhaps the most important outcome of a well-established joint project between the Istituto Nazionale di Geofisica, the leading Italian institute for basic and applied research in seismology and solid earth geophysics, and SGA (Storia Geofisica Ambiente), a private firm specialising in the historical investigation and systematisation of natural phenomena. In her contribution "Method of investigation, typology and taxonomy of the basic data: navigating between seismic effects and historical contexts", Emanuela Guidoboni outlines the general framework of modern historical seismology, its complex relation with instrumental seismology on the one hand and historical research on the other. This presentation also highlights

  2. Ergodicity and Phase Transitions and Their Implications for Earthquake Forecasting.

    Science.gov (United States)

    Klein, W.

    2017-12-01

    Forecasting earthquakes or even predicting the statistical distribution of events on a given fault is extremely difficult. One reason for this difficulty is the large number of fault characteristics that can affect the distribution and timing of events. The range of stress transfer, the level of noise, and the nature of the friction force all influence the type of events, and the values of these parameters can vary from fault to fault and also vary with time. In addition, the geometrical structure of the faults and the correlation of events on different faults play an important role in determining the event sizes and their distribution. Another reason for the difficulty is that the important fault characteristics are not easily measured. The noise level, fault structure, stress transfer range, and the nature of the friction force are extremely difficult, if not impossible, to ascertain. Given this lack of information, one of the most useful approaches to understanding the effect of fault characteristics and the way they interact is to develop and investigate models of faults and fault systems. In this talk I will present results obtained from a series of models of varying abstraction and compare them with data from actual faults. We are able to provide a physical basis for several observed phenomena, such as the earthquake cycle, the fact that some faults display Gutenberg-Richter scaling and others do not, and the fact that some faults exhibit quasi-periodic characteristic events and others do not. I will also discuss some surprising results, such as the fact that some faults are in thermodynamic equilibrium, depending on the stress transfer range and the noise level. An example of an important conclusion that can be drawn from this work is that the statistical distribution of earthquake events can vary from fault to fault and that an indication of an impending large event, such as accelerating moment release, may be relevant on some faults but not on others.

  3. Two critical tests for the Critical Point earthquake

    Science.gov (United States)

    Tzanis, A.; Vallianatos, F.

    2003-04-01

    It has been credibly argued that the earthquake generation process is a critical phenomenon culminating with a large event that corresponds to some critical point. In this view, a great earthquake represents the end of a cycle on its associated fault network and the beginning of a new one. The dynamic organization of the fault network evolves as the cycle progresses and a great earthquake becomes more probable, thereby rendering possible the prediction of the cycle's end by monitoring the approach of the fault network toward a critical state. This process may be described by a power-law time-to-failure scaling of the cumulative seismic release rate. Observational evidence has confirmed the power-law scaling in many cases and has empirically determined that the critical exponent in the power law is typically of the order n=0.3. There are also two theoretical predictions for the value of the critical exponent. Ben-Zion and Lyakhovsky (Pure appl. geophys., 159, 2385-2412, 2002) give n=1/3. Rundle et al. (Pure appl. geophys., 157, 2165-2182, 2000) show that the power-law activation associated with a spinodal instability is essentially identical to the power-law acceleration of Benioff strain observed prior to earthquakes; in this case n=0.25. More recently, the CP model has gained support from the development of more dependable models of regional seismicity with realistic fault geometry that show accelerating seismicity before large events. Essentially, these models involve stress transfer to the fault network during the cycle such that the region of accelerating seismicity scales with the size of the culminating event, as for instance in Bowman and King (Geophys. Res. Let., 38, 4039-4042, 2001). It is thus possible to understand the observed characteristics of distributed accelerating seismicity in terms of a simple process of increasing tectonic stress in a region already subjected to stress inhomogeneities at all scale lengths. Then, the region of
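
    A minimal sketch of fitting the power-law time-to-failure relation, eps(t) = A + B(tf - t)^n, to a cumulative release curve is given below; the synthetic data and starting values are assumptions for illustration, not a reproduction of any of the cited analyses.

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic cumulative Benioff strain accelerating toward tf = 10 with n = 0.3
        tf_true, n_true = 10.0, 0.3
        t = np.linspace(0.0, 9.5, 200)
        eps = 5.0 - 2.0 * (tf_true - t) ** n_true
        eps += np.random.default_rng(0).normal(0.0, 0.02, t.size)

        def model(t, A, B, tf, n):
            return A + B * np.clip(tf - t, 1e-6, None) ** n

        p0 = [eps[-1], -1.0, t[-1] + 1.0, 0.3]          # rough initial guess
        (A, B, tf, n), _ = curve_fit(model, t, eps, p0=p0, maxfev=20000)
        print(f"estimated failure time tf = {tf:.2f}, critical exponent n = {n:.2f}")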

  4. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
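
    As a rough illustration of the empirical-model idea (a country-calibrated fatality rate as a function of shaking intensity applied to the exposed population), consider the sketch below; the lognormal rate curve, its parameters and the exposure numbers are placeholders, not the calibrated PAGER values.

        from math import erf, log, sqrt

        THETA, BETA = 12.5, 0.2      # hypothetical country-specific parameters

        def fatality_rate(mmi):
            """Fraction of the exposed population killed at a given shaking intensity."""
            return 0.5 * (1.0 + erf(log(mmi / THETA) / (BETA * sqrt(2.0))))

        exposure = {6: 2_000_000, 7: 800_000, 8: 150_000, 9: 20_000}   # people per MMI bin
        fatalities = sum(pop * fatality_rate(mmi) for mmi, pop in exposure.items())
        print(f"estimated fatalities: {fatalities:.0f}")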

  5. Seismic methodology in determining basis earthquake for nuclear installation

    International Nuclear Information System (INIS)

    Ameli Zamani, Sh.

    2008-01-01

    Design basis earthquake ground motions for nuclear installations should be determined so as to assure the design purpose of reactor safety: reactors should be built and operated to pose no undue risk to public health and safety from earthquakes and other hazards. Regarding the seismic hazard influencing a site, large numbers of earthquake ground motions can be predicted considering the possible variability among the source, path, and site parameters. However, seismic safety design using all predicted ground motions is practically impossible. In the determination of design basis earthquake ground motions it is therefore important to represent the influences of the large numbers of earthquake ground motions derived from the seismic ground motion prediction methods for the surrounding seismic sources. Viewing the relations between current design basis earthquake ground motion determination and modern earthquake ground motion estimation, the development of a risk-informed design basis earthquake ground motion methodology is discussed for insight into the ongoing modernization of the Examination Guide for Seismic Design of NPPs

  6. A prospective earthquake forecast experiment in the western Pacific

    Science.gov (United States)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
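
    The likelihood-based evaluation of a gridded rate forecast can be sketched in a few lines; the forecast rates and observed counts below are synthetic placeholders, and only a joint Poisson log-likelihood and a simple N-test-style count check are shown.

        import numpy as np
        from scipy.stats import poisson

        forecast_rates = np.array([0.05, 0.20, 0.10, 0.40, 0.02])   # expected events per cell per year
        observed_counts = np.array([0, 1, 0, 0, 0])                  # target earthquakes per cell

        joint_ll = poisson.logpmf(observed_counts, forecast_rates).sum()

        n_obs, n_fore = observed_counts.sum(), forecast_rates.sum()
        p_at_most = poisson.cdf(n_obs, n_fore)        # probability of observing <= n_obs events
        print(f"joint log-likelihood = {joint_ll:.3f}, P(N <= {n_obs}) = {p_at_most:.3f}")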

  7. Measuring the effectiveness of earthquake forecasting in insurance strategies

    Science.gov (United States)

    Mignan, A.; Muir-Wood, R.

    2009-04-01

    Given the difficulty of judging whether the skill of a particular methodology of earthquake forecasts is offset by the inevitable false alarms and missed predictions, it is important to find a means to weigh the successes and failures according to a common currency. Rather than judge subjectively the relative costs and benefits of predictions, we develop a simple method to determine if the use of earthquake forecasts can increase the profitability of active financial risk management strategies employed in standard insurance procedures. Three types of risk management transactions are employed: (1) insurance underwriting, (2) reinsurance purchasing and (3) investment in CAT bonds. For each case premiums are collected based on modelled technical risk costs and losses are modelled for the portfolio in force at the time of the earthquake. A set of predetermined actions follow from the announcement of any change in earthquake hazard, so that, for each earthquake forecaster, the financial performance of an active risk management strategy can be compared with the equivalent passive strategy in which no notice is taken of earthquake forecasts. Overall performance can be tracked through time to determine which strategy gives the best long term financial performance. This will be determined by whether the skill in forecasting the location and timing of a significant earthquake (where loss is avoided) is outweighed by false predictions (when no premium is collected). This methodology is to be tested in California, where catastrophe modeling is reasonably mature and where a number of researchers issue earthquake forecasts.

  8. Thermal Radiation Anomalies Associated with Major Earthquakes

    Science.gov (United States)

    Ouzounov, Dimitar; Pulinets, Sergey; Kafatos, Menas C.; Taylor, Patrick

    2017-01-01

    Recent developments of remote sensing methods for Earth satellite data analysis contribute to our understanding of earthquake-related thermal anomalies. It was realized that the thermal heat fluxes over areas of earthquake preparation are a result of air ionization by radon (and other gases) and consequent water vapor condensation on the newly formed ions. Latent heat (LH) is released as a result of this process and leads to the formation of local thermal radiation anomalies (TRA), known as OLR anomalies (Outgoing Longwave Radiation; Ouzounov et al., 2007). We compare the LH energy, obtained by integrating the surface latent heat flux (SLHF) over area and time, with the released energies associated with these events. Extended studies of the TRA using the data from the most recent major earthquakes made it possible to establish their main morphological features. It was also established that the TRA are part of a more complex chain of short-term pre-earthquake processes, which is explained within the framework of lithosphere-atmosphere coupling processes.

  9. Earthquake magnitude estimation using the τ c and P d method for earthquake early warning systems

    Science.gov (United States)

    Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang

    2013-10-01

    Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important and also most difficult parts of the entire EEW system. In this paper, based on 142 earthquake events and 253 seismic records that were recorded by the KiK-net in Japan, and aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τ c and P d methods. The standard deviations of the magnitude estimates from these two formulas are ±0.65 and ±0.56, respectively. The P d value can also be used to estimate the peak ground velocity, so that warning information can be released to the public rapidly, according to the estimation results. In order to ensure the stability and reliability of the magnitude estimation results, we propose a compatibility test based on the natures of these two parameters. The reliability of the early warning information is significantly improved through this test.
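
    A minimal sketch of how tau_c and Pd can be obtained from the first few seconds after the P-wave arrival is given below; the synthetic waveform, the 3 s window and the linear magnitude relation are assumptions for illustration, not the regressions derived in the paper.

        import numpy as np

        def tau_c(u, v):
            """Characteristic period (s) from displacement u and velocity v over the window."""
            r = np.sum(v ** 2) / np.sum(u ** 2)
            return 2.0 * np.pi / np.sqrt(r)

        def p_d(u):
            """Peak displacement amplitude within the window."""
            return np.max(np.abs(u))

        def magnitude_from_tau_c(tc, a=3.4, b=5.4):
            return a * np.log10(tc) + b          # placeholder linear relation

        dt = 0.01
        t = np.arange(0.0, 3.0, dt)              # first 3 s after the P arrival
        u = 1e-4 * np.sin(2.0 * np.pi * t / 1.5) # synthetic displacement, 1.5 s period
        v = np.gradient(u, dt)                   # velocity by numerical differentiation
        tc = tau_c(u, v)
        print(f"tau_c = {tc:.2f} s, Pd = {p_d(u):.2e} m, M ~ {magnitude_from_tau_c(tc):.1f}")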

  10. Earthquakes and Schools

    Science.gov (United States)

    National Clearinghouse for Educational Facilities, 2008

    2008-01-01

    Earthquakes are low-probability, high-consequence events. Though they may occur only once in the life of a school, they can have devastating, irreversible consequences. Moderate earthquakes can cause serious damage to building contents and non-structural building systems, serious injury to students and staff, and disruption of building operations.…

  11. Bam Earthquake in Iran

    CERN Multimedia

    2004-01-01

    Following their request for help from members of international organisations, the permanent Mission of the Islamic Republic of Iran has given the following bank account number, where you can donate money to help the victims of the Bam earthquake. Re: Bam earthquake 235 - UBS 311264.35L Bubenberg Platz 3001 BERN

  12. Tradable Earthquake Certificates

    NARCIS (Netherlands)

    Woerdman, Edwin; Dulleman, Minne

    2018-01-01

    This article presents a market-based idea to compensate for earthquake damage caused by the extraction of natural gas and applies it to the case of Groningen in the Netherlands. Earthquake certificates give homeowners a right to yearly compensation for both property damage and degradation of living

  13. 50 Hz hippocampal stimulation in refractory epilepsy: Higher level of basal glutamate predicts greater release of glutamate.

    Science.gov (United States)

    Cavus, Idil; Widi, Gabriel A; Duckrow, Robert B; Zaveri, Hitten; Kennard, Jeremy T; Krystal, John; Spencer, Dennis D

    2016-02-01

    The effect of electrical stimulation on brain glutamate release in humans is unknown. Glutamate is elevated at baseline in the epileptogenic hippocampus of patients with refractory epilepsy, and increases during spontaneous seizures. We examined the effect of 50 Hz stimulation on glutamate release and its relationship to interictal levels in the hippocampus of patients with epilepsy. In addition, we measured basal and stimulated glutamate levels in a subset of these patients where stimulation elicited a seizure. Subjects (n = 10) were patients with medically refractory epilepsy who were undergoing intracranial electroencephalography (EEG) evaluation in an epilepsy monitoring unit. Electrical stimulation (50 Hz) was delivered through implanted hippocampal electrodes (n = 11), and microdialysate samples were collected every 2 min. Basal glutamate, changes in glutamate efflux with stimulation, and the relationships between peak stimulation-associated glutamate concentrations, basal zero-flow levels, and stimulated seizures were examined. Stimulation of epileptic hippocampi in patients with refractory epilepsy caused increases in glutamate efflux (p = 0.005, n = 10), and 4 of ten patients experienced brief stimulated seizures. Stimulation-induced increases in glutamate were not observed during the evoked seizures, but rather were related to the elevation in interictal basal glutamate (R(2) = 0.81, p = 0.001). The evoked-seizure group had lower basal glutamate levels than the no-seizure group (p = 0.04), with no stimulation-induced change in glutamate efflux (p = 0.47, n = 4). Conversely, increased glutamate was observed following stimulation in the no-seizure group (p = 0.005, n = 7). Subjects with an atrophic hippocampus had higher basal glutamate levels (p = 0.03, n = 7) and higher stimulation-induced glutamate efflux. Electrical stimulation of the epileptic hippocampus either increased extracellular glutamate efflux or induced seizures. The magnitude of stimulated

  14. Historic Eastern Canadian earthquakes

    International Nuclear Information System (INIS)

    Asmis, G.J.K.; Atchinson, R.J.

    1981-01-01

    Nuclear power plants licensed in Canada have been designed to resist earthquakes; not all plants, however, have been explicitly designed to the same level of earthquake-induced forces. Understanding the nature of strong ground motion near the source of an earthquake is still very tentative. This paper reviews historical and scientific accounts of the three strongest earthquakes - St. Lawrence (1925), Temiskaming (1935), Cornwall (1944) - that have occurred in Canada in 'modern' times, field studies of near-field strong ground motion records and their resultant damage or non-damage to industrial facilities, and numerical modelling of earthquake sources and resultant wave propagation to produce accelerograms consistent with the above historical record and field studies. It is concluded that for future construction of NPPs, near-field strong motion must be explicitly considered in design

  15. The Earth's revolution, Moon phase, Syzygy astronomy events, their effect in disturbances of the Earth's geomagnetic field, and the ``Magnetic Storm Double Time Method'' for predicting the occurrence time, magnitude and epicenter location of earthquakes

    Science.gov (United States)

    Chen, I. W.

    2003-04-01

    An increasing number of geomagnetic observation stations were established and operated in China from 1966 to the 1980s (and until the present), effectively covering a large area of the nation. A close correlation between magnetic storms and earthquakes, as well as a close correlation between the regional differences in magnetic disturbance recorded by these stations and the epicenter locations of earthquakes, was discovered and observed by Tie-zheng Zhang during 1966-1969. On this basis, during 1969/1970, Zhang developed the "Magnetic Storm Double Time Method" for predicting the occurrence time, magnitude and epicenter location of EQs. By this method, Zhang successfully predicted the Yunnan Tonghai Ms7.7 EQ of Jan. 5, 1970 (occurrence date only), the Bohai ML5.2 EQ of Feb. 12, 1970 and other EQs, including the Haicheng Ms7.3 EQ of Feb. 4, 1975, and the Tangshan Ms7.8 EQ of July 28, 1976. On the basis of this method, Z.P. Shen developed the "Geomagnetic Deflection Angle Double Time Method" in 1970, and later developed the "Magnetic Storm - Moon Phase Double Time Method" in the 1990s. With this method, Shen has been able to predict the occurrence dates of most of the strongest EQs (Ms≥7.5) on the Earth since 1991. Zhang also discovered that strong EQs often correspond with a number of sets of magnetic storms. Z.Q. Ren discovered that a close correlation exists between Syzygy astronomy events and such sets of magnetic storms as well as the occurrence dates of strong EQs. Computerized calculation of historical magnetic storm and EQ data proves the effectiveness of this method. Over 3,000 days of geomagnetic isoline images were computer processed by the Author from over 400,000 geomagnetic field data obtained by Zhang from over 100 geomagnetic observation stations during 1966-1984. A clear correlation is shown between the Earth's revolution, Moon phases, Syzygy astronomy events related to the Earth, their disturbance effect on the Earth's geomagnetic field, and the occurrence of EQs.

  16. Theoretical prediction of energy release rate for interface crack initiation by thermal stress in environmental barrier coatings for ceramics

    International Nuclear Information System (INIS)

    Kawai, E; Umeno, Y

    2017-01-01

    As weight reduction of turbines for aircraft engines is demanded to improve fuel consumption and curb emission of carbon dioxide, silicon carbide (SiC) fiber reinforced SiC matrix composites (SiC/SiC) are drawing enormous attention as high-pressure turbine materials. For preventing degradation of SiC/SiC, environmental barrier coatings (EBC) for ceramics are deposited on the composites. The purpose of this study is to establish theoretical guidelines for structural design that ensure the mechanical reliability of EBC. We conducted finite element method (FEM) analysis to calculate energy release rates (ERRs) for interface crack initiation due to thermal stress in EBC consisting of Si-based bond coat, Mullite and Ytterbium (Yb)-silicate layers on a SiC/SiC substrate. In the FEM analysis, the thickness of one EBC layer was changed from 25 μm to 200 μm while the thicknesses of the other layers were fixed at 25 μm, 50 μm and 100 μm. We compared ERRs obtained by the FEM analysis and a simple theory for an interface crack in a single-layered structure, where the ERR is estimated as the nominal strain energy in the coating layers multiplied by a constant factor (independent of layer thicknesses). We found that, unlike the case of single-layered structures, the multiplication factor is no longer a constant but is determined by the combination of the constituent coating layer thicknesses. (paper)
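
    The "simple theory" referred to above estimates the energy release rate as a constant factor times the nominal strain energy stored per unit area of the coating stack. A back-of-the-envelope sketch is given below; the thermal stresses, layer thicknesses, moduli and the factor itself are illustrative assumptions, not values from the paper.

        # Energy release rate ~ factor * sum(sigma_i^2 * h_i / (2 * E_i)) over coating layers
        layers = [
            # (thermal stress [Pa], thickness [m], Young's modulus [Pa])
            (300e6, 25e-6, 100e9),    # Si-based bond coat
            (200e6, 50e-6, 150e9),    # Mullite
            (150e6, 100e-6, 170e9),   # Yb-silicate top coat
        ]
        FACTOR = 1.0    # treated as a constant in the single-layer theory; the FEM study
                        # finds it depends on the combination of layer thicknesses

        strain_energy = sum(s ** 2 * h / (2.0 * E) for s, h, E in layers)   # J/m^2
        print(f"nominal strain energy = {strain_energy:.1f} J/m^2, "
              f"estimated ERR ~ {FACTOR * strain_energy:.1f} J/m^2")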

  17. Theoretical prediction of energy release rate for interface crack initiation by thermal stress in environmental barrier coatings for ceramics

    Science.gov (United States)

    Kawai, E.; Umeno, Y.

    2017-05-01

    As weight reduction of turbines for aircraft engines is demanded to improve fuel consumption and curb emission of carbon dioxide, silicon carbide (SiC) fiber reinforced SiC matrix composites (SiC/SiC) are drawing enormous attention as high-pressure turbine materials. For preventing degradation of SiC/SiC, environmental barrier coatings (EBC) for ceramics are deposited on the composites. The purpose of this study is to establish theoretical guidelines for structural design that ensure the mechanical reliability of EBC. We conducted finite element method (FEM) analysis to calculate energy release rates (ERRs) for interface crack initiation due to thermal stress in EBC consisting of Si-based bond coat, Mullite and Ytterbium (Yb)-silicate layers on a SiC/SiC substrate. In the FEM analysis, the thickness of one EBC layer was changed from 25 μm to 200 μm while the thicknesses of the other layers were fixed at 25 μm, 50 μm and 100 μm. We compared ERRs obtained by the FEM analysis and a simple theory for an interface crack in a single-layered structure, where the ERR is estimated as the nominal strain energy in the coating layers multiplied by a constant factor (independent of layer thicknesses). We found that, unlike the case of single-layered structures, the multiplication factor is no longer a constant but is determined by the combination of the constituent coating layer thicknesses.

  18. Earthquake chemical precursors in groundwater: a review

    Science.gov (United States)

    Paudel, Shukra Raj; Banjara, Sushant Prasad; Wagle, Amrita; Freund, Friedemann T.

    2018-03-01

    We review changes in groundwater chemistry as precursory signs of earthquakes. In particular, we discuss pH, total dissolved solids (TDS), electrical conductivity, and dissolved gases in relation to their significance for earthquake prediction or forecasting. These parameters are widely believed to vary in response to seismic and pre-seismic activity. However, the same parameters also vary in response to non-seismic processes. The inability to reliably distinguish changes caused by seismic or pre-seismic activity from changes caused by non-seismic activity has impeded progress in earthquake science. Short-term earthquake prediction is unlikely to be achieved, however, by pH, TDS, electrical conductivity, and dissolved gas measurements alone. On the other hand, the production of free hydroxyl radicals (•OH) and subsequent reactions, such as the formation of H2O2 and the oxidation of As(III) to As(V) in groundwater, have distinctive precursory characteristics. This study deviates from the prevailing mechanical mantra. It addresses earthquake-related non-seismic mechanisms, focusing on the stress-induced electrification of rocks, the generation of positive hole charge carriers and their long-distance propagation through the rock column, as well as electrochemical processes at the rock-water interface.

  19. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    Science.gov (United States)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake, with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with about one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the incurred building losses in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification of the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model the losses will not be indemnified, but will instead be calculated directly on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  20. Initial Earthquake Centrifuge Model Experiments for the Study of Liquefaction

    National Research Council Canada - National Science Library

    Steedman, R

    1998-01-01

    .... These are intended to gather data suitable for the development of improved design approaches for the prediction of liquefaction under earthquake loading using the new centrifuge facility at the WES...

  1. Estimation of Natural Frequencies During Earthquakes

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Rytter, A

    1997-01-01

    This paper presents two different recursive prediction error method (RPEM) implementations of multivariate Auto-Regressive Moving-Average (ARMAV) models for identification of a time-variant civil engineering structure subject to an earthquake. The two techniques are tested on measurements made...

  2. Predictors of psychological resilience amongst medical students following major earthquakes.

    Science.gov (United States)

    Carter, Frances; Bell, Caroline; Ali, Anthony; McKenzie, Janice; Boden, Joseph M; Wilkinson, Timothy

    2016-05-06

    To identify predictors of self-reported psychological resilience amongst medical students following major earthquakes in Canterbury in 2010 and 2011. Two hundred and fifty-three medical students from the Christchurch campus, University of Otago, were invited to participate in an electronic survey seven months following the most severe earthquake. Students completed the Connor-Davidson Resilience Scale, the Depression, Anxiety and Stress Scale, the Post-traumatic Disorder Checklist, the Work and Adjustment Scale, and the Eysenck Personality Questionnaire. Likert scales and other questions were also used to assess a range of variables including demographic and historical variables (eg, self-rated resilience prior to the earthquakes), plus the impacts of the earthquakes. The response rate was 78%. Univariate analyses identified multiple variables that were significantly associated with higher resilience. Multiple linear regression analyses produced a fitted model that was able to explain 35% of the variance in resilience scores. The best predictors of higher resilience were: retrospectively-rated personality prior to the earthquakes (higher extroversion and lower neuroticism); higher self-rated resilience prior to the earthquakes; not being exposed to the most severe earthquake; and less psychological distress following the earthquakes. Psychological resilience amongst medical students following major earthquakes was able to be predicted to a moderate extent.

  3. Influence of predictive contamination to agricultural products due to dry and wet processes during an accidental release of radionuclides

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Kim, Eun Han; Suh, Kyung Suk; Jeong, Hyo Joon; Han, Moon Hee; Lee, Chang Woo

    2003-01-01

    The influence of dry and wet deposition processes on the predicted contamination of agricultural products from radioactive air concentrations during a nuclear emergency is comprehensively analyzed. The previous dynamic food chain model DYNACON, which considers Korean agricultural and environmental conditions and whose initial input parameter was the radionuclide concentration on the ground, is improved so that radioactive contamination of agricultural products can be evaluated from either radioactive air concentrations or radionuclide concentrations on the ground. The results show that wet deposition is a more dominant mechanism than dry deposition for contamination on the ground, while the contamination levels of agricultural products depend strongly on the radionuclide and on precipitation at the time the deposition occurs. This means that the contamination levels of agricultural products are determined by whichever process is more dominant: deposition on the ground or interception by agricultural plants

  4. Adaptively smoothed seismicity earthquake forecasts for Italy

    Directory of Open Access Journals (Sweden)

    Yan Y. Kagan

    2010-11-01

    Full Text Available We present a model for estimation of the probabilities of future earthquakes of magnitudes m ≥ 4.95 in Italy. This model is a modified version of that proposed for California, USA, by Helmstetter et al. [2007] and Werner et al. [2010a], and it approximates seismicity using a spatially heterogeneous, temporally homogeneous Poisson point process. The temporal, spatial and magnitude dimensions are entirely decoupled. Magnitudes are independently and identically distributed according to a tapered Gutenberg-Richter magnitude distribution. We have estimated the spatial distribution of future seismicity by smoothing the locations of past earthquakes listed in two Italian catalogs: a short instrumental catalog, and a longer instrumental and historic catalog. The bandwidth of the adaptive spatial kernel is estimated by optimizing the predictive power of the kernel estimate of the spatial earthquake density in retrospective forecasts. When available and reliable, we used small earthquakes of m ≥ 2.95 to reveal active fault structures and 29 probable future epicenters. By calibrating the model with these two catalogs of different durations to create two forecasts, we intend to quantify the loss (or gain) of predictability incurred when only a short, but recent, data record is available. Both forecasts were scaled to five and ten years, and have been submitted to the Italian prospective forecasting experiment of the global Collaboratory for the Study of Earthquake Predictability (CSEP). An earlier forecast from the model was submitted by Helmstetter et al. [2007] to the Regional Earthquake Likelihood Model (RELM) experiment in California, and with more than half of the five-year experimental period over, the forecast has performed better than the others.
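
    A minimal sketch of the adaptive kernel idea described above, assuming a planar coordinate approximation and a Gaussian kernel whose bandwidth is the distance to each event's k-th nearest neighbour; the function name, the bandwidth floor and the choice k = 3 are illustrative assumptions, not the authors' calibrated settings.

```python
import numpy as np

def adaptive_smoothed_density(lons, lats, grid_lon, grid_lat, k=3):
    """Sum Gaussian kernels centred on past epicentres, with an adaptive
    bandwidth equal to the distance to each event's k-th nearest neighbour."""
    lons, lats = np.asarray(lons, float), np.asarray(lats, float)
    # Crude planar approximation near the catalogue's mean latitude (km per degree).
    kx = 111.0 * np.cos(np.deg2rad(lats.mean()))
    ky = 111.0
    x, y = lons * kx, lats * ky

    # Adaptive bandwidth: distance to the k-th nearest neighbouring epicentre.
    d2 = (x[:, None] - x[None, :]) ** 2 + (y[:, None] - y[None, :]) ** 2
    np.fill_diagonal(d2, np.inf)
    bw = np.sqrt(np.sort(d2, axis=1)[:, k - 1])
    bw = np.maximum(bw, 0.5)  # bandwidth floor in km (an assumption)

    gx, gy = np.meshgrid(np.asarray(grid_lon) * kx, np.asarray(grid_lat) * ky)
    density = np.zeros_like(gx, dtype=float)
    for xi, yi, h in zip(x, y, bw):
        r2 = (gx - xi) ** 2 + (gy - yi) ** 2
        density += np.exp(-0.5 * r2 / h ** 2) / (2.0 * np.pi * h ** 2)
    return density / len(x)  # per-event normalised spatial density

# Toy usage: a handful of epicentres smoothed onto a small grid.
dens = adaptive_smoothed_density([13.0, 13.1, 13.4, 13.5, 13.5],
                                 [42.5, 42.6, 42.8, 42.9, 42.7],
                                 np.linspace(12.5, 14.0, 31),
                                 np.linspace(42.0, 43.5, 31))
print(dens.shape, dens.max())
```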

  5. Earthquakes, November-December 1977

    Science.gov (United States)

    Person, W.J.

    1978-01-01

    Two major earthquakes occurred in the last 2 months of the year. A magnitude 7.0 earthquake struck San Juan Province, Argentina, on November 23, causing fatalities and damage. The second major earthquake was a magnitude 7.0 in the Bonin Islands region, an unpopulated area. On December 19, Iran experienced a destructive earthquake, which killed over 500.

  6. Earthquakes, September-October 1986

    Science.gov (United States)

    Person, W.J.

    1987-01-01

    There was one great earthquake (8.0 and above) during this reporting period, in the South Pacific in the Kermadec Islands. There were no major earthquakes (7.0-7.9), but earthquake-related deaths were reported in Greece and in El Salvador. There were no destructive earthquakes in the United States.

  7. Bend Faulting at the Edge of a Flat Slab: The 2017 Mw7.1 Puebla-Morelos, Mexico Earthquake

    Science.gov (United States)

    Melgar, Diego; Pérez-Campos, Xyoli; Ramirez-Guzman, Leonardo; Spica, Zack; Espíndola, Victor Hugo; Hammond, William C.; Cabral-Cano, Enrique

    2018-03-01

    We present results of a slip model from joint inversion of strong motion and static Global Positioning System data for the Mw7.1 Puebla-Morelos earthquake. We find that the earthquake nucleates at the bottom of the oceanic crust or within the oceanic mantle, with most of the moment release occurring within the oceanic mantle. Given its location at the edge of the flat slab, the earthquake is likely the result of bending stresses occurring at the transition from flat slab subduction to steeply dipping subduction. The event strikes obliquely to the slab; we find good agreement between the seafloor fabric offshore of the source region and the strike of the earthquake. We argue that the event likely reactivated a fault first created during seafloor formation. We hypothesize that large bending-related events at the edge of the flat slab are more likely in areas of low misalignment between the seafloor fabric and the slab strike, where reactivation of preexisting structures is favored. This hypothesis predicts a decreased likelihood of bending-related events northwest of the 2017 source region but also suggests that they should be more likely southeast of the 2017 source region.

  8. Earthquake hazard assessment and small earthquakes

    International Nuclear Information System (INIS)

    Reiter, L.

    1987-01-01

    The significance of small earthquakes and their treatment in nuclear power plant seismic hazard assessment is an issue which has received increased attention over the past few years. In probabilistic studies, sensitivity analyses showed that the choice of the lower bound magnitude used in hazard calculations can have a larger than expected effect on the calculated hazard. Of particular interest is the fact that some of the difference in seismic hazard calculations between the Lawrence Livermore National Laboratory (LLNL) and Electric Power Research Institute (EPRI) studies can be attributed to this choice. The LLNL study assumed a lower bound magnitude of 3.75, while the EPRI study assumed a lower bound magnitude of 5.0. The magnitudes used were assumed to be body wave magnitudes or their equivalents. In deterministic studies, recent ground motion recordings of small to moderate earthquakes at or near nuclear power plants have shown that the high frequencies of design response spectra may be exceeded. These exceedances became important issues in the licensing of the Summer and Perry nuclear power plants. At various times in the past, particular concerns have been raised with respect to the hazard and damage potential of small to moderate earthquakes occurring at very shallow depths. In this paper a closer look is taken at these issues. Emphasis is given to the impact of the lower bound magnitude on probabilistic hazard calculations and the historical record of damage from small to moderate earthquakes. Limited recommendations are made as to how these issues should be viewed
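
    A toy illustration of why the lower bound magnitude matters, assuming a simple Gutenberg-Richter recurrence relation; the a and b values are hypothetical, and the snippet only counts how many events enter a hazard integral, not the hazard itself.

```python
def annual_rate_above(m_min, a=4.0, b=1.0):
    """Annual rate of earthquakes with magnitude >= m_min from a
    Gutenberg-Richter relation log10 N = a - b*m (toy a, b values)."""
    return 10.0 ** (a - b * m_min)

# Hypothetical comparison of the two lower-bound choices discussed above.
for m_min in (3.75, 5.0):
    print(f"m_min = {m_min}: {annual_rate_above(m_min):8.1f} events/yr enter the hazard integral")
```

    Whether this roughly 18-fold difference in event count matters for the computed hazard depends on how often such small events can exceed design ground motions, which is exactly the issue examined in the record above.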

  9. Prevention of strong earthquakes: Goal or utopia?

    Science.gov (United States)

    Mukhamediev, Sh. A.

    2010-11-01

    In the present paper, we consider ideas suggesting various kinds of industrial impact on a close-to-failure block of the Earth's crust in order to break a pending strong earthquake (PSE) into a number of smaller quakes or aseismic slips. Among the published proposals on the prevention of a forthcoming strong earthquake, methods based on water injection and vibro influence merit greater attention, as they are based on field observations and the results of laboratory tests. In spite of this, the cited proofs are, for various reasons, insufficient to acknowledge the proposed techniques as highly substantiated; in addition, the physical essence of these methods has still not been fully understood. First, the key concept of the methods, namely, the release of the accumulated stresses (or excessive elastic energy) in the source region of a forthcoming strong earthquake, is open to objection. If we treat an earthquake as a phenomenon of loss of stability, then the heterogeneities of the physicomechanical properties and stresses along the existing fault or its future trajectory, rather than the absolute values of stresses, play the most important role. In the present paper, this statement is illustrated by the classical examples of stable and unstable fractures and by the examples of the calculated stress fields which were realized in the source regions of the tsunamigenic earthquakes of December 26, 2004 near Sumatra Island and of September 29, 2009 near Samoa Island. Here, just before the earthquakes, there were no excessive stresses in the source regions. Quite the opposite, the maximum shear stresses τmax were close to their minimum value compared to τmax in the adjacent territory. In the present paper, we provide quantitative examples that falsify the theory of the prevention of PSE in its current form. It is shown that the measures for the prevention of PSE, even when successful for an already existing fault, can trigger or accelerate a catastrophic
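
    The scale of the problem can be made concrete with the standard moment-magnitude relation (log10 M0 = 1.5 Mw + 9.1, M0 in N·m): releasing the moment budget of one large event through much smaller ones requires an enormous number of them. A short sketch with purely illustrative magnitudes:

```python
def seismic_moment(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori relation)."""
    return 10 ** (1.5 * mw + 9.1)

def n_small_events(m_large, m_small):
    """Number of magnitude m_small earthquakes whose summed moment equals
    one magnitude m_large earthquake."""
    return seismic_moment(m_large) / seismic_moment(m_small)

# Releasing an Mw 7.0 budget entirely through Mw 5.0 slips would take ~1000 events.
print(round(n_small_events(7.0, 5.0)))
```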

  10. The Challenge of Centennial Earthquakes to Improve Modern Earthquake Engineering

    International Nuclear Information System (INIS)

    Saragoni, G. Rodolfo

    2008-01-01

    The recent commemoration of the centennial of the San Francisco and Valparaiso 1906 earthquakes has given the opportunity to reanalyze their damage from a modern earthquake engineering perspective. These two earthquakes, plus the Messina-Reggio Calabria earthquake of 1908, had a strong impact on the birth and development of earthquake engineering. The study of the seismic performance of some still-existing buildings that survived centennial earthquakes represents a challenge to better understand the limitations of the earthquake design methods in use. Of the three centennial earthquakes considered, only the Valparaiso 1906 earthquake has been repeated, as the Central Chile 1985, Ms = 7.8 earthquake. In this paper a comparative study of the damage produced by the 1906 and 1985 Valparaiso earthquakes is carried out in the neighborhood of Valparaiso harbor. In this study the only three centennial buildings of 3 stories that survived both earthquakes almost undamaged were identified. Since accelerograms of the 1985 earthquake were recorded both at El Almendral soil conditions and on rock, the vulnerability analysis of these buildings is done considering instrumental measurements of the demand. The study concludes that the good performance of these buildings in the epicentral zone of large earthquakes cannot be well explained by modern earthquake engineering methods. Therefore, it is recommended that more suitable instrumental parameters, such as the destructiveness potential factor, be used in the future to describe earthquake demand

  11. Sun, Moon and Earthquakes

    Science.gov (United States)

    Kolvankar, V. G.

    2013-12-01

    During a study conducted to find the effect of Earth tides on the occurrence of earthquakes, for small areas [typically 1000 km x 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of the Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences of magnitude 2.0 and above. This study conclusively proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]: the left-hand figure provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30, Long. +95.980, Mb 9.0, EQ count 376); the right-hand figure provides an earthquake plot of (EMD+SEM) vs GMT timings for the same data. All 376 events including the main event faithfully follow the straight-line curve.

  12. Tectonic feedback and the earthquake cycle

    Science.gov (United States)

    Lomnitz, Cinna

    1985-09-01

    The occurrence of cyclical instabilities along plate boundaries at regular intervals suggests that the process of earthquake causation differs in some respects from the model of elastic rebound in its simplest forms. The model of tectonic feedback modifies the concept of this original model in that it provides a physical interaction between the loading rate and the state of strain on the fault. Two examples are developed: (a) Central Chile, and (b) Mexico. The predictions of earthquake hazards for both types of models are compared.

  13. Predicted radionuclide release from marine reactors dumped in the Kara Sea. Report of the source term working group of the international arctic seas assessment project (IASAP)

    International Nuclear Information System (INIS)

    1997-04-01

    The present report summarizes the work carried out by the Source Term Working Group of IASAP during 1994-1996. The report is based on the studies concerning the initial and current radionuclide inventories, operational history and construction of the reactors carried out by Y. Sivintsev of the Russian Research Center "Kurchatov Institute", Moscow, and E. Yefimov of the Institute of Physics and Power Engineering, Obninsk, Russian Federation. The working group convened five times, evaluated the results of the studies and developed models for prediction of potential releases to the environment. The calculations were carried out at the Royal Naval College, Greenwich, UK, by N. Lynn, J. Warden and S. Timms, and at the Lawrence Livermore National Laboratory, California, USA, by M. Mount. 31 refs, 36 figs, 18 tabs

  14. A Modified Split Hopkinson Pressure Bar Approach for Mimicking Dynamic Oscillatory Stress Fluctuations During Earthquake Rupture

    Science.gov (United States)

    Braunagel, M. J.; Griffith, W. A.

    2017-12-01

    Past experimental work has demonstrated that rock failure at high strain rates occurs by fragmentation rather than discrete fracture and is accompanied by a dramatic increase in rock strength. However, these observations are difficult to reconcile with the assertion that pulverized rocks in fault zones are the product of impulsive stresses during the passage of earthquake ruptures, as some pulverized rock lies too far from the principal slip zones for the fragmentation transition to have been exceeded. One potential explanation that has been suggested for this paradox is that repeated loading over the course of multiple earthquake ruptures may gradually reduce the pulverization threshold, in terms of both strain rate and strength. We propose that oscillatory loading during a single earthquake rupture may further lower these pulverization thresholds, and that traditional dynamic experimental approaches, such as the Split Hopkinson Pressure Bar (SHPB), wherein load is applied as a single, smooth, sinusoidal compressive wave, may not reflect natural loading conditions. To investigate the effects of the oscillatory compressive loading expected during earthquake rupture propagation, we develop a controlled cyclic loading model on an SHPB apparatus utilizing two striker bars connected by an elastic spring. Unlike traditional SHPB experiments that utilize a gas gun to fire a projectile bar and generate a single compressive wave on impact with the incident bar, our modified striker bar assembly oscillates while moving down the gun barrel and generates two separate compressive pulses separated by a lag time. By modeling the modified assembly as a mass-spring-mass assembly accelerating due to the force of the released gas, we can predict the compression time of the spring upon impact and therefore the time delay between the generation of the first and second compressive waves. This allows us to predictably control load cycles with durations of only a few hundred microseconds. Initial
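
    A rough back-of-the-envelope sketch of the lag between the two pulses, treating the striker assembly as two point masses joined by a linear spring and taking the lag as half the natural period of that oscillator; the masses and stiffness below are hypothetical, and the real assembly described above involves additional physics (gas force, bar impedances).

```python
import math

def pulse_lag(m1, m2, k):
    """Half-period estimate of the delay between the two compressive pulses:
    t = pi * sqrt(mu / k), with mu the reduced mass of striker masses m1, m2 (kg)
    joined by a spring of stiffness k (N/m). Illustrative model only."""
    mu = m1 * m2 / (m1 + m2)
    return math.pi * math.sqrt(mu / k)

# Hypothetical 2 kg strikers and a stiff 1e8 N/m spring give a lag of ~314 microseconds,
# i.e. the "few hundred microseconds" scale quoted in the record above.
print(f"lag = {pulse_lag(2.0, 2.0, 1e8) * 1e6:.0f} microseconds")
```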

  15. Relaxation creep model of impending earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Morgounov, V. A. [Russian Academy of Sciences, Institute of Physics of the Earth, Moscow (Russian Federation)

    2001-04-01

    An alternative view of the current status and perspective of seismic prediction studies is discussed. Within the problem of ascertaining the uncertainty relation between the cognoscibility and unpredictability of earthquakes, priority is assigned to work on short-term earthquake prediction, which has the advantage that the final stage of earthquake nucleation is characterized by a substantial activation of the process: its strain rate increases by orders of magnitude and the signal-to-noise ratio is considerably increased. Based on the creep phenomenon under stress relaxation conditions, a model is proposed to explain the different images of precursors of impending tectonic earthquakes. The onset of tertiary creep appears to correspond to the onset of instability, and the system inevitably fails unless it is unloaded. At this stage, the process acquires a self-regulating character and, to the greatest extent, the property of irreversibility, one of the important components of prediction reliability. In this context, in situ data are discussed which suggest that it is possible in principle to diagnose the preparation process by ground measurements of acoustic and electromagnetic emission in rocks held under constant strain in the condition of self-relaxed stress until the moment of fracture. It was found that electromagnetic emission precedes, but does not accompany, the phase of macrocrack development.

  16. VLF/LF Radio Sounding of Ionospheric Perturbations Associated with Earthquakes

    Directory of Open Access Journals (Sweden)

    Masashi Hayakawa

    2007-07-01

    Full Text Available It is recently recognized that the ionosphere is very sensitive to seismic effects, and the detection of ionospheric perturbations associated with earthquakes seems to be very promising for short-term earthquake prediction. We have proposed a possible use of VLF/LF (very low frequency, 3-30 kHz / low frequency, 30-300 kHz) radio sounding of the seismo-ionospheric perturbations. A brief history of the use of subionospheric VLF/LF propagation for short-term earthquake prediction is given, followed by a significant finding of ionospheric perturbation for the Kobe earthquake in 1995. After showing previous VLF/LF results, we present the latest VLF/LF findings; one is the statistical correlation of the ionospheric perturbation with earthquakes, and the second is a case study for the Sumatra earthquake in December 2004, indicating the spatial scale and dynamics of the ionospheric perturbation for this earthquake.

  17. Earthquake Ground Motion Selection

    Science.gov (United States)

    2012-05-01

    Nonlinear analyses of soils, structures, and soil-structure systems offer the potential for more accurate characterization of geotechnical and structural response under strong earthquake shaking. The increasing use of advanced performance-based desig...

  18. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  19. Earthquake risk assessment of building structures

    International Nuclear Information System (INIS)

    Ellingwood, Bruce R.

    2001-01-01

    During the past two decades, probabilistic risk analysis tools have been applied to assess the performance of new and existing building structural systems. Structural design and evaluation of buildings and other facilities with regard to their ability to withstand the effects of earthquakes requires special considerations that are not normally a part of such evaluations for other occupancy, service and environmental loads. This paper reviews some of these special considerations, specifically as they pertain to probability-based codified design and reliability-based condition assessment of existing buildings. Difficulties experienced in implementing probability-based limit states design criteria for earthquakes are summarized. Comparisons of predicted and observed building damage highlight the limitations of using current deterministic approaches for post-earthquake building condition assessment. The importance of inherent randomness and modeling uncertainty in forecasting building performance is examined through a building fragility assessment of a steel frame with welded connections that was damaged during the Northridge Earthquake of 1994. The prospects for future improvements in earthquake-resistant design procedures based on a more rational probability-based treatment of uncertainty are examined
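
    Fragility assessments of the kind mentioned above are commonly summarized by a lognormal fragility curve; a minimal sketch, assuming hypothetical median capacity and dispersion values rather than the paper's calibrated ones:

```python
import math

def fragility(im, median, beta):
    """Lognormal fragility: probability of reaching a damage state given an
    intensity measure `im` (e.g. spectral acceleration in g), a median capacity
    `median`, and a logarithmic standard deviation `beta`."""
    z = (math.log(im) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Toy numbers: a frame with median capacity 0.8 g and beta = 0.4, shaken at 0.5 g,
# has roughly a 12% chance of exceeding the damage state.
print(f"P(damage) = {fragility(0.5, 0.8, 0.4):.2f}")
```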

  20. Subsurface structure and physical properties; interim report for fiscal 2001 on frontiers in monitoring science and technology for earthquake environments

    International Nuclear Information System (INIS)

    2000-08-01

    This report includes the final comments made and released by the advisory committee set up for fiscal 2001 by the Japan Nuclear Cycle Development Institute (JNC) to review research progress on subsurface structure and physical properties at the Tono Geoscience Center. The appendices contain the membership of the committee, the committee's investigation procedure, the research subjects and objectives, the experimental results and data included in the interim report (including earthquake prediction studies and their application to rock mechanics, geochemical and hydrological measurements, and the monitoring of groundwater behavior), and other materials submitted to the committee for the investigation. (S. Ohno)

  1. Testing the accelerating moment release (AMR) hypothesis in areas of high stress

    Science.gov (United States)

    Guilhem, Aurélie; Bürgmann, Roland; Freed, Andrew M.; Ali, Syed Tabrez

    2013-11-01

    Several retrospective analyses have proposed that significant increases in moment release occurred prior to many large earthquakes of recent times. However, the finding of accelerating moment release (AMR) strongly depends on the choice of three parameters: (1) the magnitude range, (2) the area being considered surrounding the events and (3) the time period prior to the large earthquakes. Consequently, the AMR analysis has been criticized as being an a posteriori data-fitting exercise with no new predictive power. As AMR has been hypothesized to relate to changes in the state of stress around the eventual epicentre, we compare here AMR results to models of stress accumulation in California. Instead of assuming a complete stress drop on all surrounding fault segments, as implied by a back-slip stress lobe method, we consider that stress evolves dynamically, punctuated by the occurrence of earthquakes, and governed by the elastic and viscous properties of the lithosphere. We study the seismicity of southern California and extract events for AMR calculations following the systematic approach employed in previous studies. We present several sensitivity tests of the method, as well as grid-search analyses over the region between 1955 and 2005 using a fixed magnitude range, search-area radius and time period. The results are compared to the occurrence of large events and to maps of Coulomb stress changes. The Coulomb stress maps are compiled using the coseismic stress from all M > 7.0 earthquakes since 1812, their subsequent post-seismic relaxation, and the interseismic strain accumulation. We find no convincing correlation of seismicity rate changes in recent decades with areas of high stress that would support the AMR hypothesis. Furthermore, this indicates limited utility for practical earthquake hazard analysis in southern California, and possibly other regions.
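
    For reference, the AMR calculations referred to above fit a power-law time-to-failure curve to the cumulative Benioff strain. A minimal sketch using synthetic data (the mainshock time t_f is held fixed, as in retrospective studies; all numbers are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def amr_model(t, A, B, m, t_f):
    """Cumulative Benioff strain under accelerating moment release:
    eps(t) = A + B * (t_f - t)**m, with B < 0 and 0 < m < 1 giving acceleration."""
    return A + B * (t_f - t) ** m

# Synthetic cumulative Benioff strain for a mainshock at t_f = 10 yr (toy units).
t_f = 10.0
t = np.linspace(0.0, 9.9, 60)
rng = np.random.default_rng(0)
obs = amr_model(t, 5.0e6, -1.5e6, 0.3, t_f) + rng.normal(0.0, 5.0e4, t.size)

# Fit A, B and m with t_f held fixed; a recovered exponent well below 1 signals acceleration.
popt, _ = curve_fit(lambda tt, A, B, m: amr_model(tt, A, B, m, t_f),
                    t, obs, p0=[obs[-1], obs[0] - obs[-1], 0.5])
print("recovered exponent m =", round(popt[2], 2))  # should be close to 0.3
```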

  2. Electromagnetic Manifestation of Earthquakes

    OpenAIRE

    Uvarov Vladimir

    2017-01-01

    In a joint analysis of the results of recording the electrical component of the natural electromagnetic field of the Earth and the catalog of earthquakes in Kamchatka in 2013, unipolar pulses of constant amplitude associated with earthquakes were identified, whose activity is closely correlated with the energy of the electromagnetic field. For the explanation, a hypothesis about the cooperative character of these impulses is proposed.

  3. Electromagnetic Manifestation of Earthquakes

    Directory of Open Access Journals (Sweden)

    Uvarov Vladimir

    2017-01-01

    Full Text Available In a joint analysis of the results of recording the electrical component of the natural electromagnetic field of the Earth and the catalog of earthquakes in Kamchatka in 2013, unipolar pulses of constant amplitude associated with earthquakes were identified, whose activity is closely correlated with the energy of the electromagnetic field. For the explanation, a hypothesis about the cooperative character of these impulses is proposed.

  4. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    Science.gov (United States)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on the rapid estimate of P-wave magnitude, which generally contains large uncertainties and suffers from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. The following updates of the magnitude even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we present an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would theoretically have been possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique is helpful to reduce false and missing warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  5. The 2016 Central Italy Earthquake: an Overview

    Science.gov (United States)

    Amato, A.

    2016-12-01

    The M6 central Italy earthquake occurred on the seismic backbone of Italy, right in the middle of the highest-hazard belt. The shock hit suddenly during the night of August 24, when people were asleep; no foreshocks occurred before the main event. The earthquake ruptured from 10 km depth to the surface and produced more than 17,000 aftershocks (as of Oct. 19) spread over a 40x20 km2 area elongated NW-SE. It is geologically very similar to previous recent events in the Apennines. Both the 2009 L'Aquila earthquake to the south and the 1997 Colfiorito earthquake to the north were characterized by the activation of adjacent fault segments. Despite its magnitude and the well-known seismic hazard of the region, the earthquake produced extensive damage and 297 fatalities. The town of Amatrice, which paid the highest toll, had been classified in zone 1 (the highest) since 1915, but the buildings in this and other villages proved highly vulnerable. In contrast, in the town of Norcia, which also experienced strong ground shaking, no collapses occurred, most likely due to the retrofitting carried out after an earthquake in 1979. Soon after the quake, the INGV Crisis Unit convened at night in the Rome headquarters in order to coordinate the activities. The first field teams reached the epicentral area at 7 am with portable seismic stations to be installed to monitor the aftershocks; other teams followed to map surface faults and damage, to measure GPS sites, to install instruments for site response studies, and so on. The INGV Crisis Unit includes the Press office and the INGVterremoti team, in order to manage and coordinate communication towards the Civil Protection Dept. (DPC), the media and the web. Several tens of reports and updates were delivered to the DPC in the first month of the sequence. Also because of the controversial situation that arose from the L'Aquila earthquake and trials, particular attention was given to communication: continuous and timely information has been released to

  6. Charles Darwin's earthquake reports

    Science.gov (United States)

    Galiev, Shamil

    2010-05-01

    As it is the 200th anniversary of Darwin's birth, 2009 has also been marked as 170 years since the publication of his book Journal of Researches. During the voyage Darwin landed at Valdivia and Concepcion, Chile, just before, during, and after a great earthquake, which demolished hundreds of buildings, killing and injuring many people. Land was waved, lifted, and cracked, volcanoes awoke and giant ocean waves attacked the coast. Darwin was the first geologist to observe and describe the effects of a great earthquake during and immediately after it. These effects are sometimes repeated during severe earthquakes; but great earthquakes, like Chile 1835, and giant earthquakes, like Chile 1960, are rare and remain completely unpredictable. This is one of the few areas of science where experts remain largely in the dark. Darwin suggested that the effects were a result of '…the rending of strata, at a point not very deep below the surface of the earth…' and '…when the crust yields to the tension, caused by its gradual elevation, there is a jar at the moment of rupture, and a greater movement...'. Darwin formulated big ideas about the evolution of the earth and its dynamics. These ideas set the tone for the tectonic plate theory to come. However, plate tectonics does not completely explain why earthquakes occur within plates. Darwin emphasised that there are different kinds of earthquakes: '...I confine the foregoing observations to the earthquakes on the coast of South America, or to similar ones, which seem generally to have been accompanied by elevation of the land. But, as we know that subsidence has gone on in other quarters of the world, fissures must there have been formed, and therefore earthquakes...' (we cite Darwin's sentences following researchspace.auckland.ac.nz/handle/2292/4474). These thoughts agree with the results of recent publications (see Nature 461, 870-872; 636-639 and 462, 42-43; 87-89). About 200 years ago Darwin gave oneself airs by the

  7. Investigation of the relationship between ionospheric foF2 and earthquakes

    Science.gov (United States)

    Karaboga, Tuba; Canyilmaz, Murat; Ozcan, Osman

    2018-04-01

    Variations of the ionospheric F2 region critical frequency (foF2) before earthquakes have been investigated statistically for the 1980-2008 period in the Japan area. Ionosonde data were taken from the Kokubunji station, which lies within the earthquake preparation zone of all the earthquakes considered. Standard deviation and inter-quartile range methods are applied to the foF2 data. Anomalous variations in foF2 are observed before earthquakes. These variations can be regarded as ionospheric precursors and may be used for earthquake prediction.
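
    A minimal sketch of an inter-quartile-range anomaly test of the kind mentioned above, assuming a sliding window over daily foF2 values; the window length, the 1.5*IQR factor and the synthetic series are illustrative choices, not the authors' settings.

```python
import numpy as np

def iqr_anomalies(fof2, window=15, k=1.5):
    """Flag foF2 values outside median +/- k*IQR of the preceding `window` days."""
    fof2 = np.asarray(fof2, dtype=float)
    flags = np.zeros_like(fof2, dtype=bool)
    for i in range(window, len(fof2)):
        past = fof2[i - window:i]
        q1, med, q3 = np.percentile(past, [25, 50, 75])
        iqr = q3 - q1
        flags[i] = fof2[i] > med + k * iqr or fof2[i] < med - k * iqr
    return flags

# Synthetic daily foF2 series (MHz) with one injected anomaly on day 45.
rng = np.random.default_rng(0)
series = 6.0 + 0.3 * rng.standard_normal(60)
series[45] += 2.0
print(np.where(iqr_anomalies(series))[0])  # flagged days; should include 45
```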

  8. New characteristics of intensity assessment of Sichuan Lushan "4.20" Ms7.0 earthquake

    Science.gov (United States)

    Sun, Baitao; Yan, Peilei; Chen, Xiangzhao

    2014-08-01

    The rapid and accurate post-earthquake assessment of the macroscopic influence of seismic ground motion is of significance for earthquake emergency relief, post-earthquake reconstruction and scientific research. The seismic intensity distribution map released by the Lushan earthquake field team of the China Earthquake Administration (CEA) five days after the strong earthquake (M7.0) that occurred in Lushan County, Ya'an City, Sichuan, at 8:02 on April 20, 2013 provides a scientific basis for emergency relief, economic loss assessment and post-earthquake reconstruction. In this paper, the means for blind estimation of macroscopic intensity, field estimation of macroscopic intensity, and intensity review, as well as the corresponding problems, are discussed in detail, and the intensity distribution characteristics of the Lushan "4.20" M7.0 earthquake and its influential factors are analyzed, providing a reference for future seismic intensity assessments.

  9. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro; Mai, Paul Martin; Yasuda, Tomohiro; Mori, Nobuhito

    2014-01-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.

  10. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro

    2014-09-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.
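
    For orientation, a heavily simplified sketch of generating one stochastic slip realization as a filtered random field; the von Karman-like spectrum, correlation lengths and rescaling are illustrative assumptions and omit the additional constraints (edge tapers, spectral calibration against inversions) used in the modified Mai-Beroza procedure described above.

```python
import numpy as np

def random_slip_field(nx=64, nz=32, dx=5.0, corr_x=40.0, corr_z=20.0,
                      mean_slip=10.0, hurst=0.75, seed=0):
    """Filter white noise in the wavenumber domain with a von Karman-like
    amplitude spectrum, then rescale to a positive field with the requested
    mean slip (m). Grid spacing dx and correlation lengths are in km."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
    KX, KZ = np.meshgrid(kx, kz)
    amp = (1.0 + (KX * corr_x) ** 2 + (KZ * corr_z) ** 2) ** (-(hurst + 1) / 2)
    noise = rng.standard_normal((nz, nx))
    field = np.real(np.fft.ifft2(np.fft.fft2(noise) * amp))
    field -= field.min()                       # enforce non-negative slip
    return field * (mean_slip / field.mean())  # rescale to the target mean

slip = random_slip_field()
print(slip.shape, round(slip.mean(), 2), "m mean slip")
```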

  11. Statistical validation of earthquake related observations

    Science.gov (United States)

    Kossobokov, V. G.

    2011-12-01

    The confirmed fractal nature of earthquakes and their distribution in space and time implies that many traditional estimations of seismic hazard (from term-less to short-term ones) are usually based on erroneous assumptions of easily tractable or, conversely, delicately-designed models. The widespread practice of deceptive modeling, considered a "reasonable proxy" of the natural seismic process, leads to seismic hazard assessment of unknown quality, whose errors propagate non-linearly into inflicted estimates of risk and, eventually, into unexpected societal losses of unacceptable level. Studies aimed at the forecast/prediction of earthquakes must include validation in retrospective (at least) and, eventually, prospective tests. In the absence of such control a suggested "precursor/signal" remains a "candidate", whose link to the target seismic event is a model assumption. Predicting in advance is the only decisive test of forecasts/predictions and, therefore, the score-card of any "established precursor/signal", represented by the empirical probabilities of alarms and failures-to-predict achieved in prospective testing, must prove statistical significance, rejecting the null-hypothesis of random coincidental occurrence in advance of target earthquakes. We reiterate the suggestion of the so-called "Seismic Roulette" null-hypothesis as the most adequate undisturbed random alternative accounting for the empirical spatial distribution of earthquakes: (i) consider a roulette wheel with as many sectors as the number of earthquake locations from a sample catalog representing the seismic locus, one sector per location; (ii) make your bet according to the prediction (i.e., determine which locations are inside the area of alarm, and put one chip in each of the corresponding sectors); (iii) Nature turns the wheel; (iv) accumulate statistics of wins and losses along with the number of chips spent. If a precursor in charge of prediction exposes an imperfection of Seismic Roulette then, having in mind
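
    A minimal Monte Carlo version of the Seismic Roulette test sketched in steps (i)-(iv) above; the catalogue size, alarm coverage and hit counts are hypothetical.

```python
import numpy as np

def seismic_roulette_pvalue(alarm_mask, target_idx, n_trials=100000, seed=0):
    """The wheel's sectors are the locations of a reference catalogue;
    `alarm_mask` marks which sectors the prediction covers; `target_idx` are
    sectors where the target earthquakes actually occurred. Returns the chance
    that random draws from the catalogue hit the alarm sectors at least as
    often as observed."""
    alarm_mask = np.asarray(alarm_mask, dtype=bool)
    hits = int(alarm_mask[np.asarray(target_idx)].sum())
    rng = np.random.default_rng(seed)
    draws = rng.integers(0, alarm_mask.size, size=(n_trials, len(target_idx)))
    random_hits = alarm_mask[draws].sum(axis=1)
    return (random_hits >= hits).mean()

# Toy example: 1000 catalogue locations, alarms cover 20% of them,
# and 7 of 10 target events fall inside the alarm area.
mask = np.zeros(1000, dtype=bool); mask[:200] = True
targets = [10, 20, 30, 40, 50, 60, 70, 500, 600, 700]
print(seismic_roulette_pvalue(mask, targets))  # small p-value rejects random chance
```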

  12. Nowcasting Earthquakes and Tsunamis

    Science.gov (United States)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk

  13. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    Science.gov (United States)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    getting a hit is N%" or "the probability of an earthquake is N%" involves specifying the assumptions made. Different plausible assumptions yield a wide range of estimates. In both seismology and sports, how to better predict future performance remains an important question.

  14. Experimental evidence that thrust earthquake ruptures might open faults.

    Science.gov (United States)

    Gabuchian, Vahe; Rosakis, Ares J; Bhat, Harsha S; Madariaga, Raúl; Kanamori, Hiroo

    2017-05-18

    Many of Earth's great earthquakes occur on thrust faults. These earthquakes predominantly occur within subduction zones, such as the 2011 moment magnitude 9.0 earthquake in Tohoku-Oki, Japan, or along large collision zones, such as the 1999 moment magnitude 7.7 earthquake in Chi-Chi, Taiwan. Notably, these two earthquakes had a maximum slip that was very close to the surface. This contributed to the destructive tsunami that occurred during the Tohoku-Oki event and to the large amount of structural damage caused by the Chi-Chi event. The mechanism that results in such large slip near the surface is poorly understood as shallow parts of thrust faults are considered to be frictionally stable. Here we use earthquake rupture experiments to reveal the existence of a torquing mechanism of thrust fault ruptures near the free surface that causes them to unclamp and slip large distances. Complementary numerical modelling of the experiments confirms that the hanging-wall wedge undergoes pronounced rotation in one direction as the earthquake rupture approaches the free surface, and this torque is released as soon as the rupture breaks the free surface, resulting in the unclamping and violent 'flapping' of the hanging-wall wedge. Our results imply that the shallow extent of the seismogenic zone of a subducting interface is not fixed and can extend up to the trench during great earthquakes through a torquing mechanism.

  15. Earthquakes of Garhwal Himalaya region of NW Himalaya, India: A study of relocated earthquakes and their seismogenic source and stress

    Science.gov (United States)

    R, A. P.; Paul, A.; Singh, S.

    2017-12-01

    Since the continent-continent collision 55 Ma, the Himalaya has accommodated 2000 km of convergence along its arc. Strain energy is being accumulated at a rate of 37-44 mm/yr and is released from time to time as earthquakes. The Garhwal Himalaya is located on the western side of a seismic gap where a great earthquake has been overdue for at least 200 years. This seismic gap (Central Seismic Gap: CSG), with a 52% probability of a future great earthquake, is located between the rupture zones of two significant/great earthquakes, viz. the 1905 Kangra earthquake of M 7.8 and the 1934 Bihar-Nepal earthquake of M 8.0; the most recent one, the 2015 Gorkha earthquake of M 7.8, lies on the eastern side of this seismic gap (CSG). The Garhwal Himalaya is one of the ideal locations in the Himalaya where all the major Himalayan structures and the Himalayan Seismicity Belt (HSB) can be described and studied. In the present study, we present a spatio-temporal analysis of relocated local micro-to-moderate earthquakes recorded by a seismic monitoring network that has been operational since 2007. The earthquake locations are relocated using the HypoDD (double-difference hypocenter method for earthquake relocations) program. The dataset from July 2007 to September 2015 has been used in this study to estimate the spatio-temporal relationships, moment tensor (MT) solutions for earthquakes of M>3.0, stress tensors and their interactions. We have also used composite focal mechanism solutions for small earthquakes. The majority of the MT solutions show a thrust-type mechanism and are located near the mid-crustal ramp (MCR) structure of the detachment surface at 8-15 km depth beneath the outer Lesser Himalaya and Higher Himalaya regions. The prevailing stress has been identified as compressional towards NNE-SSW, which is the direction of relative plate motion between the India and Eurasia continental plates. The low friction coefficient estimated along with the stress inversions

  16. Quantifying slip balance in the earthquake cycle: Coseismic slip model constrained by interseismic coupling

    KAUST Repository

    Wang, Lifeng

    2015-11-11

    The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter time scales (e.g., between the large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit, and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements, Lin et al., 2013), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment has been almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to the postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes in Parkfield. For the Tohoku-Oki earthquake, we estimate the recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be generally applied to any earthquake for which dense geodetic measurements are available.

  17. Quantifying slip balance in the earthquake cycle: Coseismic slip model constrained by interseismic coupling

    KAUST Repository

    Wang, Lifeng; Hainzl, Sebastian; Mai, Paul Martin

    2015-01-01

    The long-term slip on faults has to follow, on average, the plate motion, while slip deficit is accumulated over shorter time scales (e.g., between the large earthquakes). Accumulated slip deficits eventually have to be released by earthquakes and aseismic processes. In this study, we propose a new inversion approach for coseismic slip, taking interseismic slip deficit as prior information. We assume a linear correlation between coseismic slip and interseismic slip deficit, and invert for the coefficients that link the coseismic displacements to the required strain accumulation time and seismic release level of the earthquake. We apply our approach to the 2011 M9 Tohoku-Oki earthquake and the 2004 M6 Parkfield earthquake. Under the assumption that the largest slip almost fully releases the local strain (as indicated by borehole measurements, Lin et al., 2013), our results suggest that the strain accumulated along the Tohoku-Oki earthquake segment has been almost fully released during the 2011 M9 rupture. The remaining slip deficit can be attributed to the postseismic processes. Similar conclusions can be drawn for the 2004 M6 Parkfield earthquake. We also estimate the required time of strain accumulation for the 2004 M6 Parkfield earthquake to be ~25 years (confidence interval of [17, 43] years), consistent with the observed average recurrence time of ~22 years for M6 earthquakes in Parkfield. For the Tohoku-Oki earthquake, we estimate the recurrence time of ~500-700 years. This new inversion approach for evaluating slip balance can be generally applied to any earthquake for which dense geodetic measurements are available.
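
    A minimal sketch of the slip-balance estimate under the simplest reading of the approach above: coseismic slip on each patch is taken as proportional to the local interseismic slip-deficit rate, and the proportionality constant (divided by an assumed release level) gives the accumulation time. The patch values below are invented for illustration; the paper's actual analysis is a full geodetic slip inversion with this relation as prior information.

```python
import numpy as np

def accumulation_time(coseismic_slip, deficit_rate, release_level=1.0):
    """Assume slip ~ release_level * T * rate on each patch and estimate the
    strain accumulation time T (years) by least squares."""
    s = np.asarray(coseismic_slip, dtype=float)  # m
    d = np.asarray(deficit_rate, dtype=float)    # m/yr
    alpha = np.dot(d, s) / np.dot(d, d)          # least-squares slope of s vs d
    return alpha / release_level

# Toy Parkfield-like numbers: ~3 cm/yr of slip deficit and ~0.7 m of coseismic slip
# per patch give an accumulation time of roughly 25 years.
rates = [0.030, 0.028, 0.032, 0.025, 0.031]
slips = [0.75, 0.70, 0.80, 0.60, 0.78]
print(f"T = {accumulation_time(slips, rates):.0f} years")
```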

  18. Prospective testing of Coulomb short-term earthquake forecasts

    Science.gov (United States)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each "perpetrator" earthquake but before the triggered earthquakes, or "victims". The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily and allows daily updates of the models. However, lots can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be computed automatically, without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for "censoring" of early aftershock data, and a quantitative model for detection threshold as a function of
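
    For reference, the basic quantity behind such models is the Coulomb failure stress change on a receiver fault; a one-line sketch with an assumed effective friction coefficient (sign conventions differ between studies):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault (MPa):
    dCFS = d_tau + mu_eff * d_sigma_n, with d_sigma_n positive for unclamping.
    mu_eff is an assumed effective friction coefficient."""
    return d_shear + mu_eff * d_normal

# A patch receiving +0.05 MPa of shear load and 0.02 MPa of unclamping
# is pushed ~0.058 MPa closer to failure in this convention.
print(coulomb_stress_change(0.05, 0.02))
```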

  19. Indoor radon and earthquake

    International Nuclear Information System (INIS)

    Saghatelyan, E.; Petrosyan, L.; Aghbalyan, Yu.; Baburyan, M.; Araratyan, L.

    2004-01-01

    For the first time, on the basis of the experience of the Spitak earthquake (Armenia, December 1988), it is found that an earthquake causes intensive and prolonged radon splashes which, rapidly dispersing in the open near-earth atmosphere, are contrastingly displayed in covered premises (dwellings, schools, kindergartens) even if these are at a considerable distance from the earthquake epicenter, and this multiplies the radiation influence on the population. The interval of splashes extends from the first foreshock to the last aftershock, i.e. several months. The area affected by radiation is larger than Armenia's territory. The scale of this impact on the population is 12 times higher than the number of people injured in Spitak, Leninakan and other settlements (toll of injured: 25,000 people; radiation-induced diseases: over 300,000 people). The influence of radiation correlates directly with the earthquake force. Such a conclusion is underpinned by indoor radon monitoring data for Yerevan (120 km from the epicenter) collected since 1987 (5,450 measurements) and by multivariate analysis identifying cause-and-effect linkages between the geodynamics of indoor radon under stable and unstable conditions of the Earth's crust, the behavior of radon in different geological media during earthquakes, levels of room radon concentrations and effective equivalent dose, the impact of radiation dose on health, and statistical data on public health provided by the Ministry of Health. The following hitherto unexplained facts can be considered consequences of prolonged radiation influence on the human organism: the long-lasting state of apathy and indifference typical of the population of Armenia during the period of more than a year after the earthquake, the prevalence of malignant cancer forms in disaster zones, with lung cancer dominating, and so on. All urban territories of seismically active regions are exposed to the threat of natural earthquake-provoked radiation influence

  20. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    Science.gov (United States)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses centroid moment tensor solutions of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism, within 2 min after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model was developed for the whole of Taiwan in this study. We have improved the SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  1. Surface rupturing earthquakes repeated in the 300 years along the ISTL active fault system, central Japan

    Science.gov (United States)

    Katsube, Aya; Kondo, Hisao; Kurosawa, Hideki

    2017-06-01

    Surface rupturing earthquakes produced by intraplate active faults generally have long recurrence intervals of a few thousand to tens of thousands of years. We here report the first evidence for an extremely short recurrence interval of 300 years for surface rupturing earthquakes on an intraplate system in Japan. The Kamishiro fault of the Itoigawa-Shizuoka Tectonic Line (ISTL) active fault system generated a Mw 6.2 earthquake in 2014. A paleoseismic trench excavation across the 2014 surface rupture showed evidence for the 2014 event and two prior paleoearthquakes. The slip of the penultimate earthquake was similar to that of the 2014 earthquake, and its timing was constrained to be after A.D. 1645. Judging from the timing, the damaged area, and the amount of slip, the penultimate earthquake most probably corresponds to a historical earthquake in A.D. 1714. The recurrence interval of the two most recent earthquakes is thus extremely short compared with intervals on other active faults known globally. Furthermore, the slip repetition during the last three earthquakes is in accordance with the time-predictable recurrence model rather than the characteristic earthquake model. In addition, the spatial extent of the 2014 surface rupture accords with the distribution of a serpentinite block, suggesting that a relatively low coefficient of friction may account for the unusually frequent earthquakes. These findings would affect long-term forecasts of earthquake probability and seismic hazard assessment on active faults.
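    For context, the time-predictable recurrence model mentioned above can be stated in its textbook form (a general formulation added here for orientation, not taken from the paper): the time to the next earthquake is set by the slip of the previous event divided by the long-term fault loading rate,

        T_{\mathrm{next}} \;\approx\; \frac{u_{\mathrm{prev}}}{\dot{u}_{\mathrm{load}}},

    so the timing, but not the size, of the next event is predictable; the characteristic earthquake model instead assumes repeated events of similar slip.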

  2. Global observation of Omori-law decay in the rate of triggered earthquakes

    Science.gov (United States)

    Parsons, T.

    2001-12-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts between ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted.
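    As a schematic illustration of the rate model described above (not the author's actual analysis), the following Python sketch fits a modified Omori law r(t) = K/(t + c)^p to synthetic post-main-shock rates and reports how long the fitted rate stays above an assumed background level; all names and numbers are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def omori_rate(t, K, c, p):
            """Modified Omori law: rate of triggered events t days after the main shock."""
            return K / (t + c) ** p

        # Hypothetical post-main-shock rates sampled every 30 days for ~10 years
        t_days = np.arange(1.0, 3650.0, 30.0)
        rng = np.random.default_rng(0)
        rate_true = omori_rate(t_days, K=120.0, c=10.0, p=1.0)
        rate_obs = rng.poisson(rate_true * 30.0) / 30.0      # Poisson scatter on 30-day counts

        (K_fit, c_fit, p_fit), _ = curve_fit(omori_rate, t_days, rate_obs, p0=(100.0, 5.0, 1.0))

        background = 0.05                                    # assumed background rate (events/day)
        above = t_days[omori_rate(t_days, K_fit, c_fit, p_fit) > background]
        print(f"fitted p = {p_fit:.2f}; rate exceeds background for ~{above[-1] / 365.25:.1f} years")

    In the study itself the decay is estimated from stacked catalog rates rather than a synthetic series, but the fitting step is of this general form.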

  3. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    NARCIS (Netherlands)

    Cheong, S.A.; Tan, T.L.; Chen, C.-C.; Chang, W.-L.; Liu, Z.; Chew, L.Y.; Sloot, P.M.A.; Johnson, N.F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting

  4. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    Science.gov (United States)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions: an area of Japan including the sea area, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented. These results provide new knowledge concerning statistical forecasting models. We have started a study for constructing a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity in the area extends from shallow depths down to 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop forecasting models based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HISTETAS models (Ogata, 2011) to see whether these models perform as well as they did in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use CSEP

  5. Assigning probability gain for precursors of four large Chinese earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Cao, T.; Aki, K.

    1983-03-10

    We extend the concept of probability gain associated with a precursor (Aki, 1981) to a set of precursors which may be mutually dependent. Making use of a new formula, we derive a criterion for selecting precursors from a given data set in order to calculate the probability gain. The probabilities per unit time immediately before four large Chinese earthquakes are calculated. They are approximately 0.09, 0.09, 0.07 and 0.08 per day for 1975 Haicheng (M = 7.3), 1976 Tangshan (M = 7.8), 1976 Longling (M = 7.6), and Songpan (M = 7.2) earthquakes, respectively. These results are encouraging because they suggest that the investigated precursory phenomena may have included the complete information for earthquake prediction, at least for the above earthquakes. With this method, the step-by-step approach to prediction used in China may be quantified in terms of the probability of earthquake occurrence. The ln P versus t curve (where P is the probability of earthquake occurrence at time t) shows that ln P does not increase with t linearly but more rapidly as the time of earthquake approaches.
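    As a hedged illustration of the probability-gain idea (the simple independent-precursor case only; the paper derives a more general formula for mutually dependent precursors, and the gain values below are invented):

        # Combine precursor probability gains onto a background rate, assuming
        # the precursors are statistically independent (the simplest case).
        background_rate = 1.0e-4            # assumed background probability of a large quake per day

        gains = {                           # hypothetical gains for individual precursors
            "foreshock activity": 30.0,
            "radon anomaly": 5.0,
            "water-level change": 6.0,
        }

        p = background_rate
        for name, gain in gains.items():
            p *= gain                       # under independence, gains multiply

        # Valid as a probability only while the product remains well below 1
        print(f"estimated probability of occurrence ~ {p:.2g} per day")

    With these illustrative numbers the combined value is about 0.09 per day, the same order as the values reported in the abstract immediately before the four Chinese earthquakes.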

  6. Rupture, waves and earthquakes.

    Science.gov (United States)

    Uenishi, Koji

    2017-01-01

    Normally, an earthquake is considered a phenomenon of wave energy radiation by rupture (fracture) of the solid Earth. However, the physics of the dynamic processes around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not yet been fully understood. Instead, much former investigation in seismology has evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components above 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics, namely the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) the wave interaction that connects rupture (1) and failures (2), would therefore be indispensable.

  7. CREATING THE KULTUK POLYGON FOR EARTHQUAKE PREDICTION: VARIATIONS OF (234U/238U) AND 87SR/86SR IN GROUNDWATER FROM ACTIVE FAULTS AT THE WESTERN SHORE OF LAKE BAIKAL

    Directory of Open Access Journals (Sweden)

    S. V. Rasskazov

    2015-01-01

    Introduction. Determinations of (234U/238U) in groundwater samples are used for monitoring current deformations in active faults (parentheses denote activity ratio units). The cyclic equilibrium of the activity ratio, (234U/238U) = γ ≈ 1, corresponds to an atomic ratio of ≈5.47×10–5. This parameter may vary due to higher contents of the 234U nuclide in groundwater as a result of rock deformation. This effect, discovered by P.I. Chalov and V.V. Cherdyntsev, was described in [Cherdyntsev, 1969, 1973; Chalov, 1975; Chalov et al., 1990; Faure, 1989]. In the 1970s and 1980s, only quite laborious methods were available for measuring uranium isotopic ratios. Today it is possible to determine concentrations and isotopic ratios of uranium by express analytical techniques using inductively coupled plasma mass spectrometry (ICP-MS) [Halicz et al., 2000; Shen et al., 2002; Cizdziel et al., 2005; Chebykin et al., 2007]. Sets of samples can be efficiently analysed by ICP-MS, and regularly collected uranium isotope values can be systematized at a new quality level for the purposes of earthquake prediction. In this study of (234U/238U) in groundwater at the Kultuk polygon, we selected stations of the highest sensitivity, which can ensure proper monitoring of the tectonic activity of the Obruchev and Main Sayan faults. These two faults, which limit the Sharyzhalgai block of the crystalline basement of the Siberian craton in the south, are conjugated in the territory of the Kultuk polygon (Fig. 1). Forty sets of samples taken from 27 June 2012 to 28 January 2014 were analysed, and data on 170 samples are discussed in this paper. Methods. Isotope compositions of uranium and strontium were determined by methods described in [Chebykin et al., 2007; Pin et al., 1992] with modifications. Analyses of uranium by the ICP-MS technique were performed using an Agilent 7500ce quadrupole mass spectrometer of the Ultramicroanalysis Collective Use Centre; analyses of
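    For reference, the correspondence between the activity ratio and the atomic ratio quoted above follows from the decay constants; using commonly tabulated half-lives (which may differ slightly from the values adopted by the authors),

        \left(\frac{^{234}\mathrm{U}}{^{238}\mathrm{U}}\right)_{\mathrm{activity}} = \frac{\lambda_{234}\,N_{234}}{\lambda_{238}\,N_{238}} = \gamma,
        \qquad
        \frac{N_{234}}{N_{238}} = \gamma\,\frac{T_{1/2}(^{234}\mathrm{U})}{T_{1/2}(^{238}\mathrm{U})}
        \approx \gamma \times \frac{2.45\times10^{5}\ \mathrm{yr}}{4.47\times10^{9}\ \mathrm{yr}}
        \approx 5.5\times10^{-5}\,\gamma,

    so an activity ratio of γ = 1 corresponds to an atomic ratio of about 5.5×10–5, consistent with the ≈5.47×10–5 quoted above for the authors' adopted half-lives.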

  8. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data

  9. Earthquakes; May-June 1982

    Science.gov (United States)

    Person, W.J.

    1982-01-01

    There were four major earthquakes (7.0-7.9) during this reporting period: two struck in Mexico, one in El Salvador, and one in the Kuril Islands. Mexico, El Salvador, and China experienced fatalities from earthquakes.

  10. Extending the ISC-GEM Global Earthquake Instrumental Catalogue

    Science.gov (United States)

    Di Giacomo, Domenico; Engdhal, Bob; Storchak, Dmitry; Villaseñor, Antonio; Harris, James

    2015-04-01

    After a 27-month project funded by the GEM Foundation (www.globalquakemodel.org), in January 2013 we released the ISC-GEM Global Instrumental Earthquake Catalogue (1900-2009) (www.isc.ac.uk/iscgem/index.php) as a special product for use in seismic hazard studies. The new catalogue was necessary because improved seismic hazard studies require that earthquake catalogues be homogeneous (to the largest extent possible) over time in their fundamental parameters, such as location and magnitude. Due to time and resource limitations, the ISC-GEM catalogue (1900-2009) included earthquakes selected according to the following time-variable cut-off magnitudes: Ms=7.5 for earthquakes occurring before 1918; Ms=6.25 between 1918 and 1963; and Ms=5.5 from 1964 onwards. Because of the importance of having a reliable seismic input for seismic hazard studies, funding from GEM and two commercial companies in the US and UK allowed us to start working on the extension of the ISC-GEM catalogue both for earthquakes that occurred after 2009 and for earthquakes listed in the International Seismological Summary (ISS) which fell below the cut-off magnitude of 6.25. This extension is part of a four-year program that aims at including in the ISC-GEM catalogue large global earthquakes that occurred before the beginning of the ISC Bulletin in 1964. In this contribution we present the updated ISC-GEM catalogue, which will include over 1000 more earthquakes that occurred in 2010-2011 and several hundred more between 1950 and 1959. The catalogue extension between 1935 and 1949 is currently underway. The extension of the ISC-GEM catalogue will also be helpful for regional cross-border seismic hazard studies, as the ISC-GEM catalogue should be used as a basis for cross-checking the consistency in location and magnitude of those earthquakes listed both in the ISC-GEM global catalogue and in regional catalogues.

  11. The Manchester earthquake swarm of October 2002

    Science.gov (United States)

    Baptie, B.; Ottemoeller, L.

    2003-04-01

    An earthquake sequence started in the Greater Manchester area of the United Kingdom on October 19, 2002. This has continued to the time of writing and has consisted of more than 100 discrete earthquakes. Due to the urban location, the earthquakes were felt by a large number of people. Three temporary seismograph stations were installed to supplement existing permanent stations and to better understand the relationship between the seismicity and local geology. The largest event, on October 21, had a magnitude ML 3.9. The activity appears to be an earthquake swarm, since there is no clear distinction between a main shock and aftershocks. However, most of the energy during the sequence was actually released in two earthquakes separated by a few seconds in time, on October 21 at 11:42. Other examples of swarm activity in the UK include Comrie (1788-1801, 1839-46), Glenalmond (1970-72), Doune (1997) and Blackford (1997-98, 2000-01) in central Scotland, Constantine (1981, 1986, 1992-94) in Cornwall, and Johnstonbridge (mid-1980s) and Dumfries (1991, 1999). The clustering of these events in time and space does suggest that there is a causal relationship between the events of the sequence. Joint hypocenter determination was used to simultaneously locate the swarm earthquakes, determine station corrections and improve the relative locations. It seems likely that all events in the sequence originate from a relatively small source volume. This is supported by the similarities in source mechanism and waveform signals between the various events. Focal depths were found to be very shallow, of the order of about 2-3 km. Source mechanisms determined for the largest of the events show strike-slip solutions along either northeast-southwest or northwest-southeast striking fault planes. The surface expression of faults in the epicentral area is generally northwest-southeast, suggesting that this is the more likely fault plane.

  12. Magnitude Estimation for Large Earthquakes from Borehole Recordings

    Science.gov (United States)

    Eshaghi, A.; Tiampo, K. F.; Ghofrani, H.; Atkinson, G.

    2012-12-01

    We present a simple and fast magnitude determination technique for earthquake and tsunami early warning systems, based on strong ground motion prediction equations (GMPEs) in Japan. This method incorporates borehole strong motion records provided by the Kiban Kyoshin network (KiK-net) stations. We analyzed strong ground motion data from large magnitude earthquakes (5.0 ≤ M ≤ 8.1) with focal depths < 50 km and epicentral distances of up to 400 km from 1996 to 2010. Using both peak ground acceleration (PGA) and peak ground velocity (PGV), we derived GMPEs for Japan. These GMPEs are used as the basis for regional magnitude determination. Magnitudes predicted from PGA values (Mpga) and from PGV values (Mpgv) were defined. Mpga and Mpgv correlate strongly with the moment magnitude of the event, provided sufficient records for each event are available. The results show that Mpgv has a smaller standard deviation than Mpga when compared with the estimated magnitudes and provides a more accurate early assessment of earthquake magnitude. We test this new method by estimating the magnitude of the 2011 Tohoku earthquake and present the results of this estimation. PGA and PGV from borehole recordings allow us to estimate the magnitude of this event 156 s and 105 s after the earthquake onset, respectively. We demonstrate that the incorporation of borehole strong ground-motion records immediately available after the occurrence of large earthquakes significantly increases the accuracy of earthquake magnitude estimation and thereby improves earthquake and tsunami early warning system performance. (Figure: moment magnitude versus predicted magnitude, Mpga and Mpgv.)
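    To illustrate the inversion step in a hedged way (the GMPE form and coefficients below are placeholders, not those derived in the study), a single-station magnitude estimate can be obtained by solving a generic GMPE of the form log10(PGV) = a + b·M + c·log10(R) for M and averaging over stations:

        import numpy as np

        # Placeholder GMPE coefficients (illustrative only, not the study's values)
        a, b, c = -1.3, 0.58, -1.0

        def magnitude_from_pgv(pgv_cm_s, hypo_dist_km):
            """Invert log10(PGV) = a + b*M + c*log10(R) for magnitude M."""
            return (np.log10(pgv_cm_s) - a - c * np.log10(hypo_dist_km)) / b

        # Hypothetical borehole observations: PGV in cm/s at hypocentral distances in km
        pgv = np.array([3.2, 1.1, 0.6, 0.25])
        dist = np.array([80.0, 150.0, 220.0, 350.0])

        m_per_station = magnitude_from_pgv(pgv, dist)
        print("per-station Mpgv:", np.round(m_per_station, 2))
        print("network Mpgv (mean):", round(float(m_per_station.mean()), 2))

    In practice, the estimate would be updated as additional stations report, and an analogous expression gives Mpga from PGA.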

  13. Fast rise times and the physical mechanism of deep earthquakes

    Science.gov (United States)

    Houston, H.; Williams, Q.

    1991-01-01

    A systematic global survey of the rise times and stress drops of deep and intermediate earthquakes is reported. When the rise times are scaled to the seismic moment release of the events, their average is nearly twice as fast for events deeper than about 450 km as for shallower events.

  14. Sensing the earthquake

    Science.gov (United States)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of scientific content by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what principles underlie that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" earthquakes. We try to implement these results in a choreographic model with the aim of converting earthquake sound into a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience in a multisensory exploration of the earthquake phenomenon, through the stimulation of hearing, eyesight and the perception of movement (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  15. Turkish Children's Ideas about Earthquakes

    Science.gov (United States)

    Simsek, Canan Lacin

    2007-01-01

    Earthquakes, as natural disasters, are among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be reduced to that extent. In particular, good training regarding earthquakes received in primary schools is considered…

  16. Earthquakes, May-June 1991

    Science.gov (United States)

    Person, W.J.

    1992-01-01

    One major earthquake occurred during this reporting period. This was a magnitude 7.1 earthquake in Indonesia (Minahassa Peninsula) on June 20. Earthquake-related deaths were reported in the Western Caucasus (Georgia, USSR) on May 3 and June 15. One earthquake-related death was also reported in El Salvador on June 21.

  17. Organizational changes at Earthquakes & Volcanoes

    Science.gov (United States)

    Gordon, David W.

    1992-01-01

    Primary responsibility for the preparation of Earthquakes & Volcanoes within the Geological Survey has shifted from the Office of Scientific Publications to the Office of Earthquakes, Volcanoes, and Engineering (OEVE). As a consequence of this reorganization, Henry Spall has stepped down as Science Editor for Earthquakes & Volcanoes (E&V).

  18. The 1976 Tangshan earthquake

    Science.gov (United States)

    Fang, Wang

    1979-01-01

    The Tangshan earthquake of 1976 was one of the largest earthquakes in recent years. It occurred on July 28 at 3:42 a.m., Beijing (Peking) local time, and had magnitude 7.8, a focal depth of 15 kilometers, and an epicentral intensity of XI on the New Chinese Seismic Intensity Scale; it caused serious damage and loss of life in this densely populated industrial city. Now, with the help of people from all over China, the city of Tangshan is being rebuilt.

  19. [Earthquakes in El Salvador].

    Science.gov (United States)

    de Ville de Goyet, C

    2001-02-01

    The Pan American Health Organization (PAHO) has 25 years of experience dealing with major natural disasters. This piece provides a preliminary review of the events taking place in the weeks following the major earthquakes in El Salvador on 13 January and 13 February 2001. It also describes the lessons that have been learned over the last 25 years and the impact that the El Salvador earthquakes and other disasters have had on the health of the affected populations. Topics covered include mass-casualty management, communicable diseases, water supply, managing donations and international assistance, damage to health-facility infrastructure, mental health, and PAHO's role in disasters.

  20. A Geodynamic Study of Active Crustal Deformation and Earthquakes in North China

    Science.gov (United States)

    Yang, Y.; Liu, M.

    2005-12-01

    North China is part of the Archaean Sino-Korean craton, yet today it is a region of intense crustal deformation and earthquakes, including 21 M >= 7.0 events since 512 AD. More than half of the large events occurred within the Fen-Wei rift system surrounding the stable Ordos plateau; the largest events (M >= 7.3) show a sequential southward migration along the rift. However, since 1695 the Fen-Wei rift has become seismically dormant, while seismicity seems to have shifted eastward to the North China plain, marked by the 1976 Tangshan earthquake (M=7.8). We have developed a 3D viscoelastic geodynamic model to study the cause of seismicity and its spatial-temporal pattern in North China. Constrained by crustal kinematics from GPS and neotectonic data, the model shows high deviatoric stress in the North China crust, resulting mainly from compression by the expanding Tibetan Plateau and resistance from the stable Siberian block. Within North China, seismicity is largely controlled by lateral heterogeneity of lithospheric structures, which explains the concentration of seismicity in the Fen-Wei rift. Our results show that stress triggering may have contributed to the sequential migration of large events along the rift, and the release and migration of stress and strain energy from these large events may partially explain the intense seismicity in the North China plain over the past 300 years. Comparing the predicted long-term spatial pattern of strain energy with seismic energy release provides some insights into potential earthquake risks in North China.

  1. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    Science.gov (United States)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is given of what the rational constitutive law for earthquake ruptures ought to be, from the standpoint of the physics of rock friction and fracture and on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle for a typical large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within a framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in the physical modeling of intermediate-term forecasting, and phase III in the physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step toward establishing a methodology for forecasting large earthquakes.

  2. Statistical aspects and risks of human-caused earthquakes

    Science.gov (United States)

    Klose, C. D.

    2013-12-01

    The seismological community invests ample human capital and financial resources to research and predict risks associated with earthquakes. Industries such as the insurance and re-insurance sector are equally interested in using probabilistic risk models developed by the scientific community to transfer risks. These models are used to predict expected losses due to naturally occurring earthquakes. But what about the risks associated with human-caused earthquakes? Such risk models are largely absent from both industry and academic discourse. In countries around the world, informed citizens are becoming increasingly aware and concerned that this economic bias is not sustainable for long-term economic growth, environmental and human security. Ultimately, citizens look to their government officials to hold industry accountable. In the Netherlands, for example, the hydrocarbon industry is held accountable for causing earthquakes near Groningen. In Switzerland, geothermal power plants were shut down or suspended because they caused earthquakes in the cantons of Basel and St. Gallen. The public and the private non-extractive industry need access to information about earthquake risks in connection with sub/urban geoengineering activities, including natural gas production through fracking, geothermal energy production, carbon sequestration, mining and water irrigation. This presentation illuminates statistical aspects of human-caused earthquakes with respect to different geologic environments. Statistical findings are based on the first catalog of human-caused earthquakes (Klose, 2013). The findings discussed include the odds of dying during a medium-size earthquake that is set off by geomechanical pollution. Any kind of geoengineering activity causes this type of pollution and increases the likelihood of triggering nearby faults to rupture.

  3. Dynamic strains for earthquake source characterization

    Science.gov (United States)

    Barbour, Andrew J.; Crowell, Brendan W

    2017-01-01

    Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.
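    A minimal sketch of the kind of regression described above, fitting log peak strain against magnitude and log distance by ordinary least squares; the data are synthetic and the coefficients are illustrative, not those derived from the PBO borehole strainmeter records.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        mag = rng.uniform(4.5, 7.2, n)                    # event magnitudes
        dist = rng.uniform(20.0, 600.0, n)                # hypocentral distances (km)
        # Assumed underlying model (illustrative): log10(strain) = a + b*M + c*log10(R) + noise
        log_strain = -10.0 + 0.9 * mag - 1.5 * np.log10(dist) + rng.normal(0.0, 0.3, n)

        # Ordinary least squares for the regression coefficients
        G = np.column_stack([np.ones(n), mag, np.log10(dist)])
        (a, b, c), *_ = np.linalg.lstsq(G, log_strain, rcond=None)
        print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")

        # Predicted peak dynamic strain for a hypothetical M 6.5 event at 100 km
        print("predicted peak strain:", 10 ** (a + b * 6.5 + c * np.log10(100.0)))

    The study additionally corrects for site-station and source-path biases (for example, crustal type from CRUST1.0), which would enter as extra terms in the design matrix.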

  4. Thermal regime of the lithosphere and prediction of seismic hazard in the Caspian region

    International Nuclear Information System (INIS)

    Levin, L.E.; Solodilov, L.N.; Kondorskaya, N.V.; Gasanov, A.G; Panahi, B.M.

    2002-01-01

    Full text: Prediction of seismicity is one of the elements of ecological hazard warning. In this collective research, it is elaborated in three directions: quantitative estimation of regional faults by level of seismic activity; ascertainment of the spatial position of earthquake risk zones; and determination of sites of high seismic potential for the period of the next 3-5 years. In elaborating the prediction, it is taken into account that a peculiar feature throughout the region is the association of about 90 percent of earthquake hypocenters, and of the released seismic wave energy, with the elastic-brittle layer of the lithosphere. Concentration of earthquake epicenters is established predominantly in zones of complex structure of the elastic-brittle layer, where the gradient of its thickness is 20-30 km. Directions of hypocenter migration in the plastic-viscous layer reveal the spatial position of seismically dangerous zones. All this necessitates generalization of data on the location of earthquake epicenters, determination of their magnitudes, and the spatial position of regional faults and heat flow, with calculation of the thermal regime made to clarify variations in the thickness of the lithosphere and of the elastic-brittle layer separately. The general analysis includes a calculation of released seismic wave energy, determination of the peculiar features of its distribution over the entire region, and studies of hypocenter migration in the plastic-viscous layer of the lithosphere through time.

  5. a Collaborative Cyberinfrastructure for Earthquake Seismology

    Science.gov (United States)

    Bossu, R.; Roussel, F.; Mazet-Roux, G.; Lefebvre, S.; Steed, R.

    2013-12-01

    One of the challenges in real-time seismology is the prediction of an earthquake's impact. This is particularly true for moderate earthquakes (around magnitude 6) located close to urbanised areas, where the slightest uncertainty in event location, depth, or magnitude estimates, and/or misevaluation of propagation characteristics, site effects and building vulnerability, can dramatically change the impact scenario. The Euro-Med Seismological Centre (EMSC) has developed a cyberinfrastructure to collect observations from eyewitnesses in order to provide in-situ constraints on actual damage. This cyberinfrastructure takes advantage of the natural convergence of earthquake eyewitnesses on the EMSC website (www.emsc-csem.org), the second global earthquake information website within tens of seconds of the occurrence of a felt event. It includes classical crowdsourcing tools such as online questionnaires available in 39 languages, and tools to collect geolocated pictures. It also comprises information derived from the real-time analysis of the traffic on the EMSC website, a method named flashsourcing. In the case of a felt earthquake, eyewitnesses reach the EMSC website within tens of seconds to find out the cause of the shaking they have just experienced. By analysing their geographical origin through their IP addresses, we automatically detect felt earthquakes and in some cases map the damaged areas through the loss of Internet visitors. We recently implemented a Quake Catcher Network (QCN) server in collaboration with Stanford University and the USGS to collect ground motion records made by volunteers, and we are also involved in a project to detect earthquakes from ground-motion sensors in smartphones. Strategies have been developed for several social media (Facebook, Twitter...), not only to distribute earthquake information but also to engage with citizens and optimise data collection. A smartphone application is currently under development. We will present an overview of this

  6. Earthquake Safety Tips in the Classroom

    Science.gov (United States)

    Melo, M. O.; Maciel, B. A. P. C.; Neto, R. P.; Hartmann, R. P.; Marques, G.; Gonçalves, M.; Rocha, F. L.; Silveira, G. M.

    2014-12-01

    The catastrophes induced by earthquakes are among the most devastating, causing large numbers of human losses and extensive economic damage. But we have to keep in mind that earthquakes don't kill people, buildings do. Earthquakes can't be predicted, and the only way of dealing with their effects is to teach society how to be prepared for them and how to deal with their consequences. In spite of being exposed to moderate and large earthquakes, most of the Portuguese are little aware of seismic risk, mainly due to the long recurrence intervals between strong events. The acquisition of safe and correct attitudes before, during and after an earthquake is relevant for human security. Children play a determinant role in the establishment of a real and long-lasting "culture of prevention", both through action and new attitudes. On the other hand, when children assume correct behaviors, their relatives often change their incorrect behaviors to mimic the correct behaviors of their kids. In the framework of a Parents-in-Science initiative, we started with bi-monthly sessions for children aged 5-6 years old and 9-10 years old. These sessions, in which parents, teachers and high-school students participate, became part of the school's permanent activities. We start with a short introduction to the Earth and to earthquakes by storytelling and by using simple science activities to trigger children's curiosity. With safety purposes in mind, we focus on how crucial it is to know basic information about themselves and to define, with their families, an emergency communications plan in case family members are separated. Using a shaking table we teach them how to protect themselves during an earthquake. We then finish with the preparation of an individual emergency kit. This presentation will highlight the importance of encouraging preventive actions in order to reduce the impact of earthquakes on society. This project is developed by science high-school students and teachers, in

  7. Earthquake Culture: A Significant Element in Earthquake Disaster Risk Assessment and Earthquake Disaster Risk Management

    OpenAIRE

    Ibrion, Mihaela

    2018-01-01

    This book chapter brings to attention the dramatic impact of large earthquake disasters on local communities and society and highlights the necessity of building and enhancing an earthquake culture. Iran was considered as a research case study, and fifteen large earthquake disasters in Iran were investigated and analyzed over a period of more than a century. It was found that the earthquake culture in Iran was and is still conditioned by many factors or parameters which are not integrated and...

  8. Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes

    International Nuclear Information System (INIS)

    Bergman, W.; Elliott, J.; Wilson, K.

    1995-01-01

    The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data are available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine if the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed that 8.0% and 4.1% of all filters, respectively, developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3% +/- 1.7%. The increase in filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system.

  9. Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Elliott, J.; Wilson, K. [Lawrence Livermore National Laboratory, CA (United States)

    1995-02-01

    The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data are available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine if the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed that 8.0% and 4.1% of all filters, respectively, developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3% +/- 1.7%. The increase in filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system.

  10. Combining multiple earthquake models in real time for earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
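    As a schematic illustration of the combination step (not the authors' actual algorithm), the sketch below merges several independent Gaussian predictions of log ground motion at a single site into one posterior estimate by precision weighting; the model names, prior, and numbers are hypothetical.

        import numpy as np

        # Each EEW algorithm supplies a prediction of log10(PGA) at the user's site,
        # together with a 1-sigma uncertainty. Values are illustrative placeholders.
        predictions = {
            "point-source":         (-0.80, 0.35),   # (mean log10 PGA in g, sigma)
            "finite-fault":         (-0.55, 0.25),
            "direct-ground-motion": (-0.65, 0.40),
        }
        prior_mean, prior_sigma = -1.5, 1.0           # weakly informative prior on log10(PGA)

        # With a Gaussian prior and independent Gaussian predictions, the posterior is
        # Gaussian with precision-weighted mean and summed precisions.
        means = np.array([prior_mean] + [m for m, s in predictions.values()])
        sigmas = np.array([prior_sigma] + [s for m, s in predictions.values()])
        precisions = sigmas ** -2.0

        post_var = 1.0 / precisions.sum()
        post_mean = post_var * (precisions * means).sum()
        print(f"combined log10(PGA) = {post_mean:.2f} +/- {post_var ** 0.5:.2f}")

    The combined estimate could then be compared against a user-specific alert threshold chosen to balance missed alerts against the user's false-alarm tolerance, in the spirit of the decision framework described above.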

  11. Identification of radon anomalies related to earthquakes

    International Nuclear Information System (INIS)

    Ozdas, M.; Inceoglu, F.; Rahman, C.; Yaprak, G.

    2009-01-01

    Out of many proposed earthquake precursors, temporal radon variation in soil is classified as one of the few promising geochemical signals that may be used for earthquake prediction. However, to use radon variation in soil gas as a reliable earthquake precursor, it must be realized that radon changes are controlled not only by deeper phenomena such as earthquakes, but also by meteorological parameters such as precipitation, barometric pressure and air temperature. Further studies are required to differentiate changes in the measured radon concentration caused by tectonic disturbances from those caused by meteorological parameters. In the current study, temporal radon variations in soil gas along active faults in Alasehir, in the Gediz Graben System, have been continuously monitored by LR-115 nuclear track detectors for two years. Additionally, the meteorological parameters barometric pressure, rainfall and air temperature at the monitoring site have been observed during the same period. Accordingly, regression analysis has been applied to the collected data to separate radon anomalies due to seismic activity from those due to meteorological conditions.
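    A hedged sketch of the kind of regression analysis described: fit the radon series against barometric pressure, rainfall and air temperature, then flag large residuals as candidate (possibly seismically related) anomalies. The data, model and threshold below are synthetic and purely illustrative.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 365
        pressure = rng.normal(1010.0, 8.0, n)                                 # hPa
        rainfall = rng.exponential(2.0, n)                                    # mm/day
        temperature = 15.0 + 10.0 * np.sin(2 * np.pi * np.arange(n) / 365.0)  # deg C

        # Synthetic radon series driven by meteorology plus noise, with an injected anomaly
        radon = (500.0 - 2.0 * (pressure - 1010.0) + 5.0 * rainfall
                 - 3.0 * (temperature - 15.0) + rng.normal(0.0, 20.0, n))
        radon[200:205] += 150.0                                               # "anomaly" for demonstration

        # Multiple linear regression on the meteorological parameters
        G = np.column_stack([np.ones(n), pressure, rainfall, temperature])
        coef, *_ = np.linalg.lstsq(G, radon, rcond=None)
        residual = radon - G @ coef

        # Flag residuals exceeding 2 standard deviations as candidate anomalies
        anomaly_days = np.flatnonzero(np.abs(residual) > 2.0 * residual.std())
        print("candidate anomaly days:", anomaly_days)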

  12. Changes in groundwater chemistry before two consecutive earthquakes in Iceland

    KAUST Repository

    Skelton, Alasdair

    2014-09-21

    Groundwater chemistry has been observed to change before earthquakes and is proposed as a precursor signal. Such changes include variations in radon count rates [1, 2], concentrations of dissolved elements [3-5] and stable isotope ratios [4, 5]. Changes in seismic wave velocities [6], water levels in boreholes [7], micro-seismicity [8] and shear wave splitting [9] are also thought to precede earthquakes. Precursor activity has been attributed to expansion of rock volume [7, 10, 11]. However, most studies of precursory phenomena lack sufficient data to rule out other explanations unrelated to earthquakes [12]. For example, reproducibility of a precursor signal has seldom been shown and few precursors have been evaluated statistically. Here we analyse the stable isotope ratios and dissolved element concentrations of groundwater taken from a borehole in northern Iceland between 2008 and 2013. We find that the chemistry of the groundwater changed four to six months before two earthquakes of magnitude greater than 5 that occurred in October 2012 and April 2013. Statistical analyses indicate that the changes in groundwater chemistry were associated with the earthquakes. We suggest that the changes were caused by crustal dilation associated with stress build-up before each earthquake, which caused different groundwater components to mix. Although the changes we detect are specific to the site in Iceland, we infer that similar processes may be active elsewhere, and that groundwater chemistry is a promising target for future studies on the predictability of earthquakes.

  13. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    Science.gov (United States)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
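    For orientation, the three regimes referred to above are commonly summarized (schematically, up to regime-dependent constants; this summary is added for readability and is not quoted from the paper) as

        M_0 \propto \Delta\sigma\,A^{3/2}\quad(\text{small, self-similar}),\qquad
        M_0 \propto A^{2}\quad(\text{transition regime studied here}),\qquad
        M_0 \propto A\quad(\text{very large, W-model}).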

  14. Changes in groundwater chemistry before two consecutive earthquakes in Iceland

    KAUST Repository

    Skelton, Alasdair; André n, Margareta; Kristmannsdó ttir, Hrefna; Stockmann, Gabrielle; Mö rth, Carl-Magnus; Sveinbjö rnsdó ttir, Á rny; Jonsson, Sigurjon; Sturkell, Erik; Guð rú nardó ttir, Helga Rakel; Hjartarson, Hreinn; Siegmund, Heike; Kockum, Ingrid

    2014-01-01

    Groundwater chemistry has been observed to change before earthquakes and is proposed as a precursor signal. Such changes include variations in radon count rates [1, 2], concentrations of dissolved elements [3-5] and stable isotope ratios [4, 5]. Changes in seismic wave velocities [6], water levels in boreholes [7], micro-seismicity [8] and shear wave splitting [9] are also thought to precede earthquakes. Precursor activity has been attributed to expansion of rock volume [7, 10, 11]. However, most studies of precursory phenomena lack sufficient data to rule out other explanations unrelated to earthquakes [12]. For example, reproducibility of a precursor signal has seldom been shown and few precursors have been evaluated statistically. Here we analyse the stable isotope ratios and dissolved element concentrations of groundwater taken from a borehole in northern Iceland between 2008 and 2013. We find that the chemistry of the groundwater changed four to six months before two earthquakes of magnitude greater than 5 that occurred in October 2012 and April 2013. Statistical analyses indicate that the changes in groundwater chemistry were associated with the earthquakes. We suggest that the changes were caused by crustal dilation associated with stress build-up before each earthquake, which caused different groundwater components to mix. Although the changes we detect are specific to the site in Iceland, we infer that similar processes may be active elsewhere, and that groundwater chemistry is a promising target for future studies on the predictability of earthquakes.

  15. The EM Earthquake Precursor

    Science.gov (United States)

    Jones, K. B., II; Saxton, P. T.

    2013-12-01

    Many attempts have been made to determine a sound forecasting method for earthquakes so as to warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After the 1989 Loma Prieta earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate with an identifiable precursor. Secondly, there is neither a networked array for finding epicentral locations, nor have there been any attempts to establish one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances can achieve. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and the laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February 2013 detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two

  16. Simulated earthquake ground motions

    International Nuclear Information System (INIS)

    Vanmarcke, E.H.; Gasparini, D.A.

    1977-01-01

    The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures to generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work, are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms, together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response, have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids, as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to vary in time or to be constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with, a set of specified smooth response spectra.
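    A minimal sketch of the superposition model just described (an illustration of the general approach, not the SIMQKE code itself): sinusoids with amplitudes taken from an assumed target spectrum and random phase angles are summed and shaped by a simple intensity envelope. The spectrum, envelope and scaling below are placeholders.

        import numpy as np

        rng = np.random.default_rng(7)
        dt, duration = 0.01, 20.0                       # time step (s), motion duration (s)
        t = np.arange(0.0, duration, dt)
        freqs = np.arange(0.2, 25.0, 0.2)               # contributing frequencies (Hz)

        # Assumed target amplitude spectrum: flat band with a high-frequency decay
        amps = np.where(freqs < 8.0, 1.0, 8.0 / freqs)
        phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)

        # Superpose sinusoidal components with random phase angles
        accel = np.zeros_like(t)
        for amp, f, ph in zip(amps, freqs, phases):
            accel += amp * np.sin(2.0 * np.pi * f * t + ph)

        # Shape the motion with a trapezoidal intensity envelope (build-up, strong phase, decay)
        envelope = np.interp(t, [0.0, 2.0, 12.0, duration], [0.0, 1.0, 1.0, 0.0])
        accel *= envelope
        accel *= 0.2 * 9.81 / np.abs(accel).max()       # scale to a 0.2 g peak (illustrative)
        print("peak acceleration (m/s^2):", round(float(np.abs(accel).max()), 3))

    In spectrum-compatible generation, this step would be followed by iterative adjustment of the amplitudes until the response spectrum of the simulated motion matches the prescribed smooth design spectrum.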

  17. The HayWired Earthquake Scenario—Earthquake Hazards

    Science.gov (United States)

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  18. How "lucky" we are that the Fukushima disaster occurred in early spring: predictions on the contamination levels from various fission products released from the accident and updates on the risk assessment for solid and thyroid cancers.

    Science.gov (United States)

    Evangeliou, Nikolaos; Balkanski, Yves; Cozic, Anne; Møller, Anders Pape

    2014-12-01

    The present paper studies how a random event (an earthquake) and the subsequent disaster in Japan affect the transport and deposition of fallout and the resulting health consequences. In addition to the original accident in March 2011, three further scenarios are assessed, assuming that the same releases took place in winter 2010, summer 2011 and autumn 2011, in order to cover a full range of annual seasonality. This is also the first study in which a large number of fission products released from the accident are used to assess health risks with the maximum possible efficiency. Xenon-133 and (137)Cs are estimated directly within the model, whereas 15 other radionuclides are calculated indirectly using reported isotopic ratios. As much as 85% of the released (137)Cs would be deposited in continental regions worldwide if the accident had occurred in winter 2010, 22% in spring 2011 (when it actually happened), 55% in summer 2011 and 48% if it had occurred during autumn 2011. Solid cancer incidence and mortality from Fukushima are estimated at 160 to 880 and 110 to 640, respectively, close to previous estimations. Adding thyroid cancers raises the totals to 230-850 incident cases and 120-650 deaths. Fatalities due to worker exposure and mandatory evacuation have been reported to be around 610, increasing total estimated mortalities to 730-1260. These estimates are 2.8 times higher than previously reported ones based on radiocaesium and (131)I and 16% higher than those based on radiocaesium only. Total expected fatalities from Fukushima are 32% lower than in the winter scenario, 5% lower than in the summer scenario and 30% lower than in the autumn scenario. Nevertheless, cancer fatalities are expected to be less than 5% of those from the tsunami (~20,000).

  19. Earthquake precursory events around epicenters and local active faults

    Science.gov (United States)

    Valizadeh Alvan, H.; Mansor, S. B.; Haydari Azad, F.

    2013-05-01

    The chain of underground events which are triggered by seismic activities and physical/chemical interactions prior to a shake in the earth's crust may produce surface and above-surface phenomena. During the past decades many researchers have sought the possibility of short-term earthquake prediction using remote sensing data. Currently, there are several theories about the preparation stages of earthquakes, most of which stress rises in heat and seismic waves as the main signs of an impending earthquake. Their differences only lie in the secondary phenomena which are triggered by these events. In any case, with the recent advances in remote sensing sensors and techniques, we are now able to provide wider, more accurate monitoring of land, ocean and atmosphere. Among all theoretical factors, changes in Surface Latent Heat Flux (SLHF), Sea & Land Surface Temperature (SST & LST) and surface chlorophyll-a are easier to record from earth-observing satellites. SLHF is the amount of energy exchanged in the form of water vapor between the earth's surface and the atmosphere. Abnormal variations in this factor have frequently been reported as an earthquake precursor during the past years. The accumulated stress in the earth's crust during the preparation phase of earthquakes is said to be the main cause of temperature anomalies weeks to days before the main event and subsequent shakes. Chemical and physical interactions in the presence of underground water lead to higher water evaporation prior to inland earthquakes. In the case of oceanic earthquakes, higher temperatures at the ocean bed may lead to higher amounts of Chl-a at the sea surface. On the other hand, it has also been said that the leakage of radon gas, which occurs as rocks break during earthquake preparation, causes the formation of airborne ions and higher Air Temperature (AT). We have chosen to perform a statistical, long-term, and short-term approach by considering the recurrence intervals of past

  20. Assessment of precast beam-column using capacity demand response spectrum subject to design basis earthquake and maximum considered earthquake

    Science.gov (United States)

    Ghani, Kay Dora Abd.; Tukiar, Mohd Azuan; Hamid, Nor Hayati Abdul

    2017-08-01

    Malaysia is surrounded by the tectonic features of the Sumatera area, which consists of two seismically active inter-plate boundaries, namely the Indo-Australian and the Eurasian Plates on the west and the Philippine Plates on the east. Hence, Malaysia experiences tremors from distant earthquakes occurring in Banda Aceh, Nias Island, Padang and other parts of Sumatera, Indonesia. In order to assess the safety of precast buildings in Malaysia under near-field ground motion, response spectrum analysis can be used to deal with future earthquakes whose specific nature is unknown. This paper aimed to develop capacity-demand response spectra subject to the Design Basis Earthquake (DBE) and the Maximum Considered Earthquake (MCE) in order to assess the performance of precast beam-column joints. From the capacity-demand response spectrum analysis, it can be concluded that the precast beam-column joints would not survive earthquake excitation with a surface-wave magnitude, Mw, of more than 5.5 on the Richter scale (Type 1 spectra). This means that a beam-column joint designed using the current code of practice (BS8110) would be severely damaged when subjected to high earthquake excitation. The capacity-demand response spectrum analysis also shows that the precast beam-column joints in the prototype studied would be severely damaged when subjected to the Maximum Considered Earthquake (MCE) with PGA = 0.22g and a surface-wave magnitude of more than 5.5 on the Richter scale (Type 1 spectra).

  1. The earthquakes of the Baltic shield

    International Nuclear Information System (INIS)

    Slunga, R.

    1990-06-01

    More than 200 earthquakes in the Baltic Shield area in the size range ML 0.6-4.5 have been studied by dense regional seismic networks. The analysis includes focal depths, dynamic source parameters, and fault plane solutions. In southern Sweden a long part of the Protogene zone marks a change in the seismic activity. The focal depths indicate three crustal layers: upper crust (0-18 km in southern Sweden, 0-13 km in northern Sweden), middle crust down to 35 km, and the quiet lower crust. The fault plane solutions show that strike-slip is dominant. Along the Tornquist line significant normal faulting occurs. The stresses released by the earthquakes show a remarkable consistency with a regional principal compression oriented N60W. This indicates that plate-tectonic processes are more important than the land uplift. The spatial distribution is consistent with a model where the earthquakes are breakdowns of asperities on normally stably sliding faults. The aseismic sliding is estimated to be 2000 times more extensive than the seismic sliding. Southern Sweden is estimated to deform horizontally at a rate of 1 mm/year or more. (orig.)

  2. Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide

    Science.gov (United States)

    Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.

    2017-12-01

    GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, these algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare predicted peak ground accelerations (PGA) from first-alert solutions with those recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.
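
    The following minimal Python sketch illustrates the kind of unsaturated magnitude arithmetic that a geodetic finite-fault algorithm such as G-larmS ultimately relies on: an inverted fault length, width and average slip converted to seismic moment and moment magnitude. The rigidity and the example rupture dimensions are assumptions for illustration, not G-larmS output.

        import math

        # Seismic moment M0 = mu * A * D and moment magnitude Mw = (2/3)(log10 M0 - 9.1).
        def moment_magnitude(length_km, width_km, avg_slip_m, rigidity_pa=3.0e10):
            area_m2 = (length_km * 1e3) * (width_km * 1e3)
            m0 = rigidity_pa * area_m2 * avg_slip_m      # N m
            return (2.0 / 3.0) * (math.log10(m0) - 9.1)

        # A large subduction rupture (assumed dimensions) gives roughly Mw 8.7.
        print(round(moment_magnitude(400.0, 100.0, 10.0), 2))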

  3. Historical earthquake research in Austria

    Science.gov (United States)

    Hammerl, Christa

    2017-12-01

    Austria has moderate seismicity, and on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. A severe earthquake with light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I0 > 8° EMS) occurs significantly less frequently; the average recurrence period is about 75 years. For this reason historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly along with the most important studies, and, as an example of a recently completed case study, one of the strongest earthquakes in Austria's past, the earthquake of 17 July 1670, is presented. Research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest, but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.

  4. Seismogeodesy for rapid earthquake and tsunami characterization

    Science.gov (United States)

    Bock, Y.

    2016-12-01

    Rapid estimation of earthquake magnitude and fault mechanism is critical for earthquake and tsunami warning systems. Traditionally, the monitoring of earthquakes and tsunamis has been based on seismic networks for estimating earthquake magnitude and slip, and on tide gauges and deep-ocean buoys for direct measurement of tsunami waves. These methods are well developed for ocean basin-wide warnings but are not timely enough to protect vulnerable populations and infrastructure from the effects of local tsunamis, where waves may arrive within 15-30 minutes of earthquake onset time. Direct measurements of displacements by GPS networks at subduction zones allow rapid magnitude and slip estimation in the near-source region that is not affected by the instrumental limitations and magnitude saturation experienced by local seismic networks. However, GPS displacements by themselves are too noisy for strict earthquake early warning (P-wave detection). Optimally combining high-rate GPS and seismic data (in particular, accelerometers that do not clip), referred to as seismogeodesy, provides a broadband instrument that does not clip in the near field, is impervious to magnitude saturation, and provides accurate static and dynamic displacements and velocities in real time. Here we describe a NASA-funded effort to integrate GPS and seismogeodetic observations as part of NOAA's Tsunami Warning Centers in Alaska and Hawaii. It consists of a series of plug-in modules that allow for a hierarchy of rapid seismogeodetic products, including automatic P-wave picking, hypocenter estimation, S-wave prediction, magnitude scaling relationships based on P-wave amplitude (Pd) and peak ground displacement (PGD), finite-source CMT solutions and fault slip models as input for tsunami warnings and models. For the NOAA/NASA project, the modules are being integrated into an existing USGS Earthworm environment, currently limited to traditional seismic data. We are focused on a network of
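
    A minimal sketch of a peak-ground-displacement (PGD) magnitude scaling relation of the general form used in seismogeodesy is shown below, inverted for magnitude from an observed PGD and hypocentral distance; the coefficients are placeholders of that published functional form, not the operational values.

        import math

        # log10(PGD) = A + B*Mw + C*Mw*log10(R), solved for Mw.
        # PGD in cm, hypocentral distance R in km; coefficients are illustrative.
        A, B, C = -4.434, 1.047, -0.138

        def mw_from_pgd(pgd_cm, hypocentral_distance_km):
            return (math.log10(pgd_cm) - A) / (B + C * math.log10(hypocentral_distance_km))

        print(round(mw_from_pgd(pgd_cm=100.0, hypocentral_distance_km=100.0), 2))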

  5. Earthquake hazard evaluation for Switzerland

    International Nuclear Information System (INIS)

    Ruettener, E.

    1995-01-01

    Earthquake hazard analysis is of considerable importance for Switzerland, a country with moderate seismic activity but high economic values at risk. The evaluation of earthquake hazard, i.e. the determination of return periods versus ground motion parameters, requires a description of earthquake occurrences in space and time. In this study the seismic hazard for major cities in Switzerland is determined. The seismic hazard analysis is based on historic earthquake records as well as instrumental data. The historic earthquake data show considerable uncertainties concerning epicenter location and epicentral intensity. A specific concept is required, therefore, which permits the description of the uncertainties of each individual earthquake. This is achieved by probability distributions for earthquake size and location. Historical considerations, which indicate changes in public earthquake awareness at various times (mainly due to large historical earthquakes), as well as statistical tests have been used to identify time periods of complete earthquake reporting as a function of intensity. As a result, the catalog is judged to be complete since 1878 for all earthquakes with epicentral intensities greater than IV, since 1750 for intensities greater than VI, since 1600 for intensities greater than VIII, and since 1300 for intensities greater than IX. Instrumental data provide accurate information about the depth distribution of earthquakes in Switzerland. In the Alps, focal depths are restricted to the uppermost 15 km of the crust, whereas below the northern Alpine foreland earthquakes are distributed throughout the entire crust (30 km). This depth distribution is considered in the final hazard analysis by probability distributions. (author) figs., tabs., refs

  6. Ionospheric precursors to large earthquakes: A case study of the 2011 Japanese Tohoku Earthquake

    Science.gov (United States)

    Carter, B. A.; Kellerman, A. C.; Kane, T. A.; Dyson, P. L.; Norman, R.; Zhang, K.

    2013-09-01

    Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. As the choices for the ionospheric baseline are generally different between previous works, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here; the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI and the TIE-GCM generally underestimating, approximately describing and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of other known sources of ionospheric variability in addition to solar and geomagnetic activity, such as the thermospheric coupling.
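
    As an illustration of the simplest of the three baselines discussed above, the sketch below computes a trailing 30-day median of daily foF2 values and flags days that exceed the baseline by a chosen ratio; the data, threshold and variable names are synthetic placeholders, not the study's dataset.

        import numpy as np

        # Synthetic daily foF2 series (MHz); trailing 30-day median as baseline.
        rng = np.random.default_rng(0)
        fof2 = 8.0 + 0.5 * rng.standard_normal(120)
        window, threshold = 30, 1.10       # flag days exceeding the median by 10%

        anomalous_days = []
        for i in range(window, len(fof2)):
            baseline = np.median(fof2[i - window:i])
            if fof2[i] / baseline > threshold:
                anomalous_days.append(i)
        print("days flagged as anomalous:", anomalous_days)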

  7. Identified EM Earthquake Precursors

    Science.gov (United States)

    Jones, Kenneth, II; Saxton, Patrick

    2014-05-01

    Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides in what exactly is forecastable and in which direction to investigate the EM signals. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate in creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating long-wave generation with sufficient amplitude and the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February 2013. The antennae have mobility and observations were noted for

  8. Information Theoric Framework for the Earthquake Recurrence Models : Methodica Firma Per Terra Non-Firma

    International Nuclear Information System (INIS)

    Esmer, Oezcan

    2006-01-01

    This paper first evaluates the earthquake prediction method (1999) used by the US Geological Survey as the lead example and also reviews the recent models. Secondly, it points out the ongoing debate on the predictability of earthquake recurrences and lists the main claims of both sides. The traditional methods and the 'frequentist' approach used in determining earthquake probabilities cannot end the complaints that earthquakes are unpredictable. It is argued that the prevailing 'crisis' in seismic research corresponds to the pre-Maxent age of the current situation. The period of Kuhnian 'crisis' should give rise to a new paradigm based on the information-theoretic framework, including the inverse problem, Maxent and Bayesian methods. The paper aims to show that information-theoretic methods can provide the required 'Methodica Firma' for earthquake prediction models.

  9. Pain after earthquake

    Directory of Open Access Journals (Sweden)

    Angeletti Chiara

    2012-06-01

    Full Text Available Abstract Introduction On 6 April 2009, at 03:32 local time, an Mw 6.3 earthquake hit the Abruzzi region of central Italy, causing widespread damage in the city of L'Aquila and its nearby villages. The earthquake caused 308 casualties and over 1,500 injuries, displaced more than 25,000 people and induced significant damage to more than 10,000 buildings in the L'Aquila region. Objectives This observational retrospective study evaluated the prevalence and drug treatment of pain in the five weeks following the L'Aquila earthquake (April 6, 2009). Methods 958 triage documents were analysed for patients' pain severity, pain type, and treatment efficacy. Results About a third of patients reported pain, with a prevalence of 34.6%. More than half of the pain patients reported severe pain (58.8%). Analgesic agents were limited to available drugs: anti-inflammatory agents, paracetamol, and weak opioids. Reduction in verbal numerical pain scores within the first 24 hours after treatment was achieved with the medications at hand. Pain prevalence and characterization exhibited a biphasic pattern, with acute pain syndromes owing to trauma occurring in the first 15 days after the earthquake; traumatic pain then decreased and re-surged at around week five, owing to rebuilding efforts. In the second through fourth weeks, reports of pain occurred mainly owing to relapses of chronic conditions. Conclusions This study indicates that pain is prevalent during natural disasters, may exhibit a discernible pattern over the weeks following the event, and current drug treatments in this region may be adequate for emergency situations.

  10. Fault lubrication during earthquakes.

    Science.gov (United States)

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m/s) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m/s. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

  11. Housing Damage Following Earthquake

    Science.gov (United States)

    1989-01-01

    An automobile lies crushed under the third story of this apartment building in the Marina District after the Oct. 17, 1989, Loma Prieta earthquake. The ground levels are no longer visible because of structural failure and sinking due to liquefaction. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions, such as earthquakes, or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: J.K. Nakata, U.S. Geological Survey.

  12. Do Earthquakes Shake Stock Markets?

    Science.gov (United States)

    Ferreira, Susana; Karali, Berna

    2015-01-01

    This paper examines how major earthquakes affected the returns and volatility of aggregate stock market indices in thirty-five financial markets over the last twenty years. Results show that global financial markets are resilient to shocks caused by earthquakes even if these are domestic. Our analysis reveals that, in a few instances, some macroeconomic variables and earthquake characteristics (gross domestic product per capita, trade openness, bilateral trade flows, earthquake magnitude, a tsunami indicator, distance to the epicenter, and number of fatalities) mediate the impact of earthquakes on stock market returns, resulting in a zero net effect. However, the influence of these variables is market-specific, indicating no systematic pattern across global capital markets. Results also demonstrate that stock market volatility is unaffected by earthquakes, except for Japan.

  13. Earthquake engineering for nuclear facilities

    CERN Document Server

    Kuno, Michiya

    2017-01-01

    This book is a comprehensive compilation of earthquake- and tsunami-related technologies and knowledge for the design and construction of nuclear facilities. As such, it covers a wide range of fields including civil engineering, architecture, geotechnical engineering, mechanical engineering, and nuclear engineering, for the development of new technologies providing greater resistance against earthquakes and tsunamis. It is crucial both for students of nuclear energy courses and for young engineers in nuclear power generation industries to understand the basics and principles of earthquake- and tsunami-resistant design of nuclear facilities. In Part I, "Seismic Design of Nuclear Power Plants", the design of nuclear power plants to withstand earthquakes and tsunamis is explained, focusing on buildings, equipment, and civil engineering structures. In Part II, "Basics of Earthquake Engineering", fundamental knowledge of earthquakes and tsunamis as well as the dynamic response of structures and foundation ground...

  14. Earthquake Risk Mitigation in the Tokyo Metropolitan area

    Science.gov (United States)

    Hirata, N.; Sakai, S.; Kasahara, K.; Nakagawa, S.; Nanjo, K.; Panayotopoulos, Y.; Tsuruoka, H.

    2010-12-01

    Seismic disaster risk mitigation in urban areas constitutes a challenge requiring collaboration of scientific, engineering, and social-science fields. Examples of collaborative efforts include research on detailed plate structure with identification of all significant faults, developing dense seismic networks; strong ground motion prediction, which uses information on near-surface seismic site effects and fault models; earthquake-resistant and earthquake-proof structures; and cross-discipline infrastructure for effective risk mitigation just after catastrophic events. The risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern because this plate caused past mega-thrust earthquakes, such as the 1703 Genroku earthquake (magnitude M8.0) and the 1923 Kanto earthquake (M7.9), which had 105,000 fatalities. An M7 or greater (M7+) earthquake in this area at present has high potential to produce devastating loss of life and property, with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that an M7+ earthquake will cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) of economic loss. This earthquake is evaluated to occur with a probability of 70% in 30 years by the Earthquake Research Committee of Japan. In order to mitigate disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area (2007-2011) was launched in collaboration with scientists, engineers, and social scientists in nationwide institutions. The results obtained in the respective fields will be integrated before project termination to improve information on the strategy assessment for seismic risk mitigation in the Tokyo metropolitan area. In this talk, we give an outline of our project as an example of collaborative research on earthquake risk mitigation. Discussion is extended to our effort in progress and

  15. Retrospective evaluation of the five-year and ten-year CSEP-Italy earthquake forecasts

    Directory of Open Access Journals (Sweden)

    Stefan Wiemer

    2010-11-01

    Full Text Available On August 1, 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP launched a prospective and comparative earthquake predictability experiment in Italy. The goal of this CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented 18 five-year and ten-year earthquake forecasts to the European CSEP Testing Center at ETH Zurich. We have considered here the twelve time-independent earthquake forecasts among this set, and evaluated them with respect to past seismicity data from two Italian earthquake catalogs. We present the results of the tests that measure the consistencies of the forecasts according to past observations. As well as being an evaluation of the time-independent forecasts submitted, this exercise provides insight into a number of important issues in predictability experiments with regard to the specification of the forecasts, the performance of the tests, and the trade-off between robustness of results and experiment duration. We conclude with suggestions for the design of future earthquake predictability experiments.
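
    A hedged sketch of one of the consistency checks used in such experiments, the number or "N" test, is given below: it asks whether the observed count of target earthquakes is plausible under the forecast's total expected rate, assuming Poisson variability. The rate and count are invented for illustration and are not CSEP-Italy values.

        from scipy.stats import poisson

        forecast_total_rate = 14.2     # assumed expected number of target events
        observed_count = 9             # assumed observed number of target events

        # Two one-sided tail probabilities; very small values indicate inconsistency.
        delta1 = 1.0 - poisson.cdf(observed_count - 1, forecast_total_rate)  # P(N >= obs)
        delta2 = poisson.cdf(observed_count, forecast_total_rate)            # P(N <= obs)
        print(f"delta1 = {delta1:.3f}, delta2 = {delta2:.3f}")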

  16. Report by the 'Mega-earthquakes and mega-tsunamis' subgroup

    International Nuclear Information System (INIS)

    Friedel, Jacques; Courtillot, Vincent; Dercourt, Jean; Jaupart, Claude; Le Pichon, Xavier; Poirier, Jean-Paul; Salencon, Jean; Tapponnier, Paul; Dautray, Robert; Carpentier, Alain; Taquet, Philippe; Blanchet, Rene; Le Mouel, Jean-Louis; BARD, Pierre-Yves; Bernard, Pascal; Montagner, Jean-Paul; Armijo, Rolando; Shapiro, Nikolai; Tait, Steve; Cara, Michel; Madariaga, Raul; Pecker, Alain; Schindele, Francois; Douglas, John

    2011-06-01

    This report comprises a presentation of scientific data on subduction earthquakes, on tsunamis and on the Tohoku earthquake. It proposes a detailed description of the French situation (in the West Indies, in metropolitan France, and in terms of soil response), and a discussion of social and economic issues (governance, seismic regulation and nuclear safety, para-seismic protection of constructions). The report is complemented by other large documents: a presentation of data on the Japanese earthquake, a discussion of prediction and governance errors in the management of earthquake mitigation in Japan, and discussions on tsunami prevention, on the need for research on accelerometers, and on the seismic risk in France

  17. Earthquake resistant design of structures

    International Nuclear Information System (INIS)

    Choi, Chang Geun; Kim, Gyu Seok; Lee, Dong Geun

    1990-02-01

    This book covers the occurrence of earthquakes and earthquake damage analysis, the equivalent static analysis method and its application, dynamic analysis methods such as time-history analysis by mode superposition and direct integration, and design spectrum analysis in the context of earthquake-resistant design in Korea. Topics include the analysis model and vibration modes, calculation of the base shear, calculation of story seismic loads, and combination of analysis results.
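
    As an illustration of the equivalent static procedure mentioned in the record, the sketch below computes a base shear from an assumed seismic coefficient and distributes it over the stories in proportion to weight times height; the coefficient and building data are invented for illustration and do not come from the book or any particular code.

        # Equivalent static analysis sketch: V = Cs * W, distributed as Fi = V*wi*hi/sum(w*h).
        seismic_coefficient = 0.08                      # assumed design coefficient Cs
        story_weights_kn = [3000.0, 3000.0, 2500.0]     # stories 1..3, bottom to top
        story_heights_m = [4.0, 8.0, 12.0]              # story heights above the base

        base_shear = seismic_coefficient * sum(story_weights_kn)
        wh = [w * h for w, h in zip(story_weights_kn, story_heights_m)]
        story_forces = [base_shear * x / sum(wh) for x in wh]

        print(f"base shear V = {base_shear:.1f} kN")
        for i, f in enumerate(story_forces, start=1):
            print(f"story {i}: lateral force {f:.1f} kN")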

  18. Coseismic deformation of the 2001 El Salvador and 2002 Denali fault earthquakes from GPS geodetic measurements

    Science.gov (United States)

    Hreinsdottir, Sigrun

    2005-07-01

    GPS geodetic measurements are used to study two major earthquakes, the 2001 MW 7.7 El Salvador and 2002 MW 7.9 Denali Fault earthquakes. The 2001 MW 7.7 earthquake was a normal-fault event in the subducting Cocos plate offshore El Salvador. Coseismic displacements of up to 15 mm were measured at permanent GPS stations in Central America. The GPS data were used to constrain the location of and slip on the normal fault. One month later an MW 6.6 strike-slip earthquake occurred in the overriding Caribbean plate. Coulomb stress changes estimated from the MW 7.7 earthquake suggest that it triggered the MW 6.6 earthquake. Coseismic displacement from the MW 6.6 earthquake, about 40 mm at a GPS station in El Salvador, indicates that the earthquake triggered additional slip on a fault close to the GPS station. The MW 6.6 earthquake further changed the stress field in the overriding Caribbean plate, with triggered seismic activity occurring west and possibly also east of the rupture in the days to months following the earthquake. The MW 7.9 Denali Fault earthquake ruptured three faults in the interior of Alaska. It initiated with thrust motion on the Susitna Glacier fault but then ruptured the Denali and Totschunda faults with predominantly right-lateral strike-slip motion unilaterally from west to east. GPS data measured in the two weeks following the earthquake suggest a complex coseismic rupture along the faults with two main regions of moment release along the Denali fault. A large amount of additional data were collected in the year following the earthquake, which greatly improved the resolution on the fault, revealing more details of the slip distribution. We estimate a total moment release of 6.81 x 10^20 N m in the earthquake, with an MW 7.2 thrust subevent on the Susitna Glacier fault. The slip on the Denali fault is highly variable, with 4 main pulses of moment release. The largest moment pulse corresponds to an MW 7.5 subevent, about 40 km west of the Denali

  19. Ionospheric anomalies detected by ionosonde and possibly related to crustal earthquakes in Greece

    Science.gov (United States)

    Perrone, Loredana; De Santis, Angelo; Abbattista, Cristoforo; Alfonsi, Lucilla; Amoruso, Leonardo; Carbone, Marianna; Cesaroni, Claudio; Cianchini, Gianfranco; De Franceschi, Giorgiana; De Santis, Anna; Di Giovambattista, Rita; Marchetti, Dedalo; Pavòn-Carrasco, Francisco J.; Piscini, Alessandro; Spogli, Luca; Santoro, Francesca

    2018-03-01

    Ionosonde data and crustal earthquakes with magnitude M ≥ 6.0 observed in Greece during the 2003-2015 period were examined to check if the relationships obtained earlier between precursory ionospheric anomalies and earthquakes in Japan and central Italy are also valid for Greek earthquakes. The ionospheric anomalies are identified on the observed variations of the sporadic E-layer parameters (h'Es, foEs) and foF2 at the ionospheric station of Athens. The corresponding empirical relationships between the seismo-ionospheric disturbances and the earthquake magnitude and the epicentral distance are obtained and found to be similar to those previously published for other case studies. The large lead times found for the ionospheric anomalies occurrence may confirm a rather long earthquake preparation period. The possibility of using the relationships obtained for earthquake prediction is finally discussed.

  20. A review on remotely sensed land surface temperature anomaly as an earthquake precursor

    Science.gov (United States)

    Bhardwaj, Anshuman; Singh, Shaktiman; Sam, Lydia; Joshi, P. K.; Bhardwaj, Akanksha; Martín-Torres, F. Javier; Kumar, Rajesh

    2017-12-01

    The low predictability of earthquakes and the high uncertainty associated with their forecasts make earthquakes one of the worst natural calamities, capable of causing instant loss of life and property. Here, we discuss the studies reporting the observed anomalies in the satellite-derived Land Surface Temperature (LST) before an earthquake. We compile the conclusions of these studies and evaluate the use of remotely sensed LST anomalies as precursors of earthquakes. The arrival times and the amplitudes of the anomalies vary widely, thus making it difficult to consider them as universal markers to issue earthquake warnings. Based on the randomness in the observations of these precursors, we support employing a global-scale monitoring system to detect statistically robust anomalous geophysical signals prior to earthquakes before considering them as definite precursors.

  1. Ionospheric anomalies detected by ionosonde and possibly related to crustal earthquakes in Greece

    Directory of Open Access Journals (Sweden)

    L. Perrone

    2018-03-01

    Full Text Available Ionosonde data and crustal earthquakes with magnitude M ≥ 6.0 observed in Greece during the 2003–2015 period were examined to check if the relationships obtained earlier between precursory ionospheric anomalies and earthquakes in Japan and central Italy are also valid for Greek earthquakes. The ionospheric anomalies are identified on the observed variations of the sporadic E-layer parameters (h′Es, foEs) and foF2 at the ionospheric station of Athens. The corresponding empirical relationships between the seismo-ionospheric disturbances and the earthquake magnitude and the epicentral distance are obtained and found to be similar to those previously published for other case studies. The large lead times found for the ionospheric anomalies occurrence may confirm a rather long earthquake preparation period. The possibility of using the relationships obtained for earthquake prediction is finally discussed.

  2. How Long Is Long Enough? Estimation of Slip-Rate and Earthquake Recurrence Interval on a Simple Plate-Boundary Fault Using 3D Paleoseismic Trenching

    Science.gov (United States)

    Wechsler, N.; Rockwell, T. K.; Klinger, Y.; Agnon, A.; Marco, S.

    2012-12-01

    Models used to forecast future seismicity make fundamental assumptions about the behavior of faults and fault systems in the long term, but in many cases this long-term behavior is assumed using short-term and perhaps non-representative observations. The question arises - how long a record is long enough to represent actual fault behavior, both in terms of recurrence of earthquakes and of moment release (aka slip rate)? We test earthquake recurrence and slip models via high-resolution three-dimensional trenching of the Beteiha (Bet-Zayda) site on the Dead Sea Transform (DST) in northern Israel. We extend the earthquake history of this simple plate-boundary fault to establish the slip rate for the past 3-4 kyr, to determine the amount of slip per event and to study the fundamental behavior, thereby testing competing rupture models (characteristic, slip-patch, slip-loading, and Gutenberg-Richter type distribution). To this end we opened more than 900 m of trenches, mapped 8 buried channels and dated more than 80 radiocarbon samples. By mapping buried channels offset by the DST on both sides of the fault, we obtained an estimate of displacement for each. Coupled with fault-crossing trenches to determine event history, we construct the earthquake and slip history of the fault for the past 2 kyr. We observe evidence for a total of 9-10 surface-rupturing earthquakes with varying offset amounts. 6-7 events occurred in the 1st millennium, compared to just 2-3 in the 2nd millennium CE. From our observations it is clear that the fault is not behaving in a periodic fashion. A 4 kyr old buried channel yields a slip rate of 3.5-4 mm/yr, consistent with GPS rates for this segment. Yet in spite of the apparent agreement between GPS, Pleistocene-to-present slip rate, and the lifetime rate of the DST, the past 800-1000 year period appears deficient in strain release. Thus, in terms of moment release, most of the fault has remained locked and is accumulating elastic strain. In contrast, the
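
    The slip-rate arithmetic behind the buried-channel result above reduces to rate = offset / age; the short sketch below uses assumed values of the same order as those reported, purely as a worked example.

        # Assumed offset and age, chosen to be of the same order as in the record.
        offset_m = 15.0          # total offset of a buried channel across the fault
        age_kyr = 4.0            # channel age from radiocarbon dating

        slip_rate_mm_per_yr = (offset_m * 1000.0) / (age_kyr * 1000.0)
        print(f"slip rate ~ {slip_rate_mm_per_yr:.2f} mm/yr")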

  3. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    Science.gov (United States)

    Kanamori, H

    1996-04-30

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.

  4. Characteristics of broadband slow earthquakes explained by a Brownian model

    Science.gov (United States)

    Ide, S.; Takeo, A.

    2017-12-01

    The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered a broadband phenomenon including tectonic tremors, low-frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of a broadband slow earthquake may not have been widely accepted, most recent observations are consistent with this concept. Here, we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For a shorter duration, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic
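
    A minimal random-walk sketch of the stochastic ingredient of the BSE model, under the simplifying assumptions that the source size receives an independent Gaussian increment at each step and is reflected at zero, is shown below; it illustrates the idea only and omits the time constant and other elements of the published model.

        import numpy as np

        rng = np.random.default_rng(1)
        n_steps, dt, sigma = 10_000, 1.0, 0.05

        # Source size as a reflected Brownian walk (Gaussian increment each step).
        size = np.zeros(n_steps)
        for i in range(1, n_steps):
            size[i] = abs(size[i - 1] + sigma * np.sqrt(dt) * rng.standard_normal())

        # Assume, for illustration, that the moment release rate scales as size**2.
        moment_rate = size ** 2
        print("mean moment rate (arbitrary units):", moment_rate.mean())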

  5. The 2017 Release Cloudy

    Science.gov (United States)

    Ferland, G. J.; Chatzikos, M.; Guzmán, F.; Lykins, M. L.; van Hoof, P. A. M.; Williams, R. J. R.; Abel, N. P.; Badnell, N. R.; Keenan, F. P.; Porter, R. L.; Stancil, P. C.

    2017-10-01

    We describe the 2017 release of the spectral synthesis code Cloudy, summarizing the many improvements to the scope and accuracy of the physics which have been made since the previous release. Exporting the atomic data into external data files has enabled many new large datasets to be incorporated into the code. The use of the complete datasets is not realistic for most calculations, so we describe the limited subset of data used by default, which predicts significantly more lines than the previous release of Cloudy. This version is nevertheless faster than the previous release, as a result of code optimizations. We give examples of the accuracy limits using small models, and the performance requirements of large complete models. We summarize several advances in the H- and He-like iso-electronic sequences and use our complete collisional-radiative models to establish the densities where the coronal and local thermodynamic equilibrium approximations work.

  6. The HayWired Earthquake Scenario

    Science.gov (United States)

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    interconnectedness of infrastructure, society, and our economy. How would this earthquake scenario, striking close to Silicon Valley, impact our interconnected world in ways and at a scale we have not experienced in any previous domestic earthquake?The area of present-day Contra Costa, Alameda, and Santa Clara Counties contended with a magnitude-6.8 earthquake in 1868 on the Hayward Fault. Although sparsely populated then, about 30 people were killed and extensive property damage resulted. The question of what an earthquake like that would do today has been examined before and is now revisited in the HayWired scenario. Scientists have documented a series of prehistoric earthquakes on the Hayward Fault and are confident that the threat of a future earthquake, like that modeled in the HayWired scenario, is real and could happen at any time. The team assembled to build this scenario has brought innovative new approaches to examining the natural hazards, impacts, and consequences of such an event. Such an earthquake would also be accompanied by widespread liquefaction and landslides, which are treated in greater detail than ever before. The team also considers how the now-prototype ShakeAlert earthquake early warning system could provide useful public alerts and automatic actions.Scientific Investigations Report 2017–5013 and accompanying data releases are the products of an effort led by the USGS, but this body of work was created through the combined efforts of a large team including partners who have come together to form the HayWired Coalition (see chapter A). Use of the HayWired scenario has already begun. More than a full year of intensive partner engagement, beginning in April 2017, is being directed toward producing the most in-depth look ever at the impacts and consequences of a large earthquake on the Hayward Fault. With the HayWired scenario, our hope is to encourage and support the active ongoing engagement of the entire community of the San Francisco Bay region by

  7. An update on radioactive release and exposures after the Fukushima Dai-ichi nuclear disaster.

    LENUS (Irish Health Repository)

    McLaughlin, P D

    2012-09-01

    On 11 March 2011, the magnitude-9.0 Tohoku earthquake and tsunami struck the northeast coast of Japan, resulting in widespread injury and loss of life. Compounding this tragic loss of life, a series of equipment and structural failures at the Fukushima Dai-ichi nuclear power plant (FDNP) resulted in the release of many volatile radioisotopes into the atmosphere. In this update, we detail currently available evidence about the nature of immediate radioactive exposure to FDNP workers and the general population. We contrast the nature of the radioactive exposure at FDNP with that which occurred at the Chernobyl power plant 25 years previously. Prediction of the exact health effects related to the FDNP release is difficult at present, and this disaster provides the scientific community with a challenge to help those involved and to continue research that will improve our understanding of the potential complications of radionuclide fallout.

  8. Impact-based earthquake alerts with the U.S. Geological Survey's PAGER system: what's next?

    Science.gov (United States)

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Garcia, D.; So, E.; Hearne, M.

    2012-01-01

    In September 2010, the USGS began publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses with its Prompt Assessment of Global Earthquakes for Response (PAGER) system. These estimates significantly enhanced the utility of the USGS PAGER system which had been, since 2006, providing estimated population exposures to specific shaking intensities. Quantifying earthquake impacts and communicating estimated losses (and their uncertainties) to the public, the media, humanitarian, and response communities required a new protocol—necessitating the development of an Earthquake Impact Scale—described herein and now deployed with the PAGER system. After two years of PAGER-based impact alerting, we now review operations, hazard calculations, loss models, alerting protocols, and our success rate for recent (2010-2011) events. This review prompts analyses of the strengths, limitations, opportunities, and pressures, allowing clearer definition of future research and development priorities for the PAGER system.
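
    The sketch below illustrates threshold-based alerting in the spirit of the Earthquake Impact Scale mentioned above: an estimated fatality count is mapped onto a green/yellow/orange/red alert level. The bin edges follow the commonly cited scheme but should be treated as illustrative rather than as the operational PAGER definition.

        def alert_level(estimated_fatalities: float) -> str:
            """Map an estimated fatality count to an alert colour (illustrative bins)."""
            if estimated_fatalities < 1:
                return "green"
            if estimated_fatalities < 100:
                return "yellow"
            if estimated_fatalities < 1000:
                return "orange"
            return "red"

        for estimate in (0, 12, 350, 4000):
            print(estimate, "->", alert_level(estimate))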

  9. Areas prone to slow slip events impede earthquake rupture propagation and promote afterslip

    Science.gov (United States)

    Rolandone, Frederique; Nocquet, Jean-Mathieu; Mothes, Patricia A.; Jarrin, Paul; Vallée, Martin; Cubas, Nadaya; Hernandez, Stephen; Plain, Morgan; Vaca, Sandro; Font, Yvonne

    2018-01-01

    At subduction zones, transient aseismic slip occurs either as afterslip following a large earthquake or as episodic slow slip events during the interseismic period. Afterslip and slow slip events are usually considered as distinct processes occurring on separate fault areas governed by different frictional properties. Continuous GPS (Global Positioning System) measurements following the 2016 Mw (moment magnitude) 7.8 Ecuador earthquake reveal that large and rapid afterslip developed at discrete areas of the megathrust that had previously hosted slow slip events. Regardless of whether they were locked or not before the earthquake, these areas appear to persistently release stress by aseismic slip throughout the earthquake cycle and outline the seismic rupture, an observation potentially leading to a better anticipation of future large earthquakes. PMID:29404404

  10. Global risk of big earthquakes has not recently increased.

    Science.gov (United States)

    Shearer, Peter M; Stark, Philip B

    2012-01-17

    The recent elevated rate of large earthquakes has fueled concern that the underlying global rate of earthquake activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. We examine the timing of large (magnitude M≥7) earthquakes from 1900 to the present, after removing local clustering related to aftershocks. The global rate of M≥8 earthquakes has been at a record high roughly since 2004, but rates have been almost as high before, and the rate of smaller earthquakes is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences--if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global rate of large events. Together these facts suggest that the global risk of large earthquakes is no higher today than it has been in the past.
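
    One simple way to pose the question addressed above is a dispersion test of annual earthquake counts against a homogeneous Poisson null; the sketch below does this on synthetic counts (the study uses the declustered global catalog and several statistics, so this is an illustration of the idea only).

        import numpy as np

        rng = np.random.default_rng(2)
        annual_counts = rng.poisson(lam=0.8, size=112)   # synthetic stand-in for M>=8 counts

        mean, var = annual_counts.mean(), annual_counts.var(ddof=1)
        dispersion = var / mean                          # ~1 for a Poisson process

        # Monte Carlo reference distribution of the dispersion under the Poisson null.
        sims = rng.poisson(lam=mean, size=(5000, annual_counts.size))
        sim_disp = sims.var(axis=1, ddof=1) / sims.mean(axis=1)
        p_value = (sim_disp >= dispersion).mean()
        print(f"dispersion = {dispersion:.2f}, one-sided p = {p_value:.3f}")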

  11. Future Earth: Reducing Loss By Automating Response to Earthquake Shaking

    Science.gov (United States)

    Allen, R. M.

    2014-12-01

    Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet, the same interconnected infrastructure also allows us to rapidly detect earthquakes as they begin and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Los Angelinos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts but also to collect data and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake detectors. As our development of the technology continues, we can anticipate ever-more automated response to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US. We will illustrate how smartphones can contribute to the system. Finally, we will review applications of the information to reduce future losses.

  12. Goce derived geoid changes before the Pisagua 2014 earthquake

    Directory of Open Access Journals (Sweden)

    Orlando Álvarez

    2018-01-01

    Full Text Available The analysis of space-time surface deformation during earthquakes reveals the variable state of stress that occurs at deep crustal levels, and this information can be used to better understand the seismic cycle. Understanding the possible mechanisms that produce earthquake precursors is a key issue for earthquake prediction. In recent years, modern geodesy has been able to map the degree of seismic coupling during the interseismic period, as well as the coseismic and postseismic slip of great earthquakes along subduction zones. Earthquakes usually occur due to mass transfer and consequent gravity variations, and such changes have been monitored for intraplate earthquakes by means of terrestrial gravity measurements. When stresses and the corresponding rupture areas are large, affecting hundreds of thousands of square kilometres (as occurs in some segments along plate interface zones), satellite gravimetry data become relevant. This is due to the higher spatial resolution of this type of data when compared to terrestrial data, and also to their homogeneous precision and availability across the whole Earth. Satellite gravity missions such as GOCE can map the Earth's gravity field with unprecedented precision and resolution. We mapped geoid changes from two GOCE satellite models obtained by the direct approach, which combines data from other gravity missions such as GRACE and LAGEOS, taking advantage of their best characteristics. The results show that the geoid height diminished from a year to five months before the main seismic event in the region where maximum slip occurred after the Pisagua Mw = 8.2 great megathrust earthquake. This diminution is interpreted as accelerated inland-directed interseismic mass transfer before the earthquake, coinciding with the intermediate degree of seismic coupling reported in the region. We highlight the advantage of satellite data for modelling surficial deformation related to pre-seismic displacements. This deformation, combined to

  13. Foreshocks, aftershocks, and earthquake probabilities: Accounting for the landers earthquake

    Science.gov (United States)

    Jones, Lucile M.

    1994-01-01

    The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes it less probable that an earthquake will be a foreshock, because non-foreshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.
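
    A hedged sketch of the reasoning summarized above is given below: the chance that a new event near the major fault is a foreshock drops while an Omori-law aftershock sequence from the earlier mainshock inflates the background rate, and recovers as that sequence decays. The rates and Omori parameters are illustrative placeholders, not the published values.

        # P(foreshock) = r_f / (r_f + r_b + r_aftershock(t)), with an Omori-law decay.
        def foreshock_probability(rate_foreshock, rate_background, t_days,
                                  K=50.0, c=0.05, p=1.1):
            omori_rate = K / (t_days + c) ** p          # aftershocks of the earlier mainshock
            return rate_foreshock / (rate_foreshock + rate_background + omori_rate)

        for t in (1, 10, 100, 1000):
            print(f"{t:>4} days after the earlier mainshock:",
                  round(foreshock_probability(0.02, 0.5, t), 4))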

  14. Management of limb fractures in a teaching hospital: comparison between Wenchuan and Yushu earthquakes

    Directory of Open Access Journals (Sweden)

    MIN Li

    2013-02-01

    Full Text Available Abstract Objective: To comparatively analyze the medical records of patients with limb fractures as well as rescue strategies in the Wenchuan and Yushu earthquakes so as to provide references for post-earthquake rescue. Methods: We retrospectively investigated 944 patients sustaining limb fractures, including 891 in the Wenchuan earthquake and 53 in the Yushu earthquake, who were admitted to West China Hospital (WCH) of Sichuan University. Results: In the Wenchuan earthquake, WCH met three peaks of limb-fracture patient influx, on post-earthquake day (PED) 2, 8 and 14 respectively. Between PED 3-14, 585 patients were transferred from WCH to other hospitals outside Sichuan Province. In the Yushu earthquake, the maximum influx of limb-fracture patients happened on PED 3, and no one was shifted to other hospitals. In both the Wenchuan and Yushu earthquakes, most limb fractures were caused by blunt strike and crush/burying. In the Wenchuan earthquake, there were 396 (396/942, 42.0%) open limb fractures, including 28 Gustilo I, 201 Gustilo II and 167 Gustilo III injuries. In the Yushu earthquake, the incidence of open limb fracture was much lower (6/61, 9.8%). The percentage of patients with acute complications in the Wenchuan earthquake (167/891, 18.7%) was much higher than that in the Yushu earthquake (5/53, 3.8%). In the Wenchuan earthquake rescue, 1,018 surgeries were performed, comprising debridement in 376, internal fixation in 283, external fixation in 119, and vacuum sealing drainage in 117, etc. Among the 64 surgeries in the Yushu earthquake rescue, internal fixation for limb fracture was mostly adopted. All patients received proper treatment and survived except one who died of multiple organ failure in the Wenchuan earthquake. Conclusion: Provision of suitable and sufficient medical care in a catastrophe can only be achieved by construction of a sophisticated national disaster medical system, prediction of the injury types and number of injuries, and confirmation of

  15. Coupled large earthquakes in the Baikal rift system: Response to bifurcations in nonlinear resonance hysteresis

    Directory of Open Access Journals (Sweden)

    Anatoly V. Klyuchevskii

    2013-11-01

    Full Text Available The current lithospheric geodynamics and tectonophysics in the Baikal rift are discussed in terms of a nonlinear oscillator with dissipation. The nonlinear oscillator model is applicable to the area because stress change shows up as quasi-periodic inharmonic oscillations at rifting attractor structures (RAS. The model is consistent with the space-time patterns of regional seismicity in which coupled large earthquakes, proximal in time but distant in space, may be a response to bifurcations in nonlinear resonance hysteresis in a system of three oscillators corresponding to the rifting attractors. The space-time distribution of coupled MLH > 5.5 events has been stable for the period of instrumental seismicity, with the largest events occurring in pairs, one shortly after another, on two ends of the rift system and with couples of smaller events in the central part of the rift. The event couples appear as peaks of earthquake ‘migration’ rate with an approximately decadal periodicity. Thus the energy accumulated at RAS is released in coupled large events by the mechanism of nonlinear oscillators with dissipation. The new knowledge, with special focus on space-time rifting attractors and bifurcations in a system of nonlinear resonance hysteresis, may be of theoretical and practical value for earthquake prediction issues. Extrapolation of the results into the nearest future indicates the probability of such a bifurcation in the region, i.e., there is growing risk of a pending M ≈ 7 coupled event to happen within a few years.

  16. Variation of radon flux along active fault zones in association with earthquake occurrence

    International Nuclear Information System (INIS)

    Papastefanou, C.

    2010-01-01

    Radon flux measurements were carried out at three radon stations along an active fault zone in the Langadas basin, Northern Greece by various techniques for earthquake prediction studies. Specially made devices with alpha track-etch detectors (ATDs) were installed by using LR-115, type II, non-strippable cellulose nitrate films (integrating method of measurements). Continuous monitoring of radon gas exhaling from the ground was also performed by using silicon diode detectors, Barasol and Clipperton type, in association with various probes and sensors including simultaneo